Apple M3

better to keep everything in close proximity
There's nothing to say it would have to be one bus or one cable, though, right? Or for that matter that they use an existing protocol. Without knowing anything about this, you'd think it would be possible just to replace the circuit board traces with wire (probably with a signal boost).

If everyone complains about the weight, I suspect they'll make it lighter.
 
IMO there is only one “killer app” needed to win this game, and that is form factor. Whoever is first to come up with the Warby Parkers that bear something like AVP functionality becomes the first mover. If I were at Apple, at this point most of my R&D energy would be in wireless bandwidth and power. I’d be designing a chipset to enable the broadest possible short-distance signal transmission via a customized multiplex Bluetooth protocol or similar; for the headset, the most efficient possible transceiver, low-energy translucent displays, inductively chargeable. Doesn’t need to match the immersion factor of the AVP. Just be good enough for daytime AR.

PS, iPads did replace laptops or, rather, they do over time. Look at the CPUs/GPUs. The iPad Pro is equivalent to yesterday’s MacBook Air with a touchscreen instead of a keyboard. It’s an ergonomic decision primarily. Form factor.
 
Until I can plug an Apple Vision headset directly into my MacBook for power and zero latency monitor extension, I’ll be holding off on getting one. I’m very excited about the future potential of all this tech though, especially after trying AVP myself.
 
PS, iPads did replace laptops or, rather, they do over time.
We have a couple of iPads at home (including a Pro) and they definitely do not replace laptops in 90% of use cases.

The iPad is great for a number of tasks but it's terrible as a general computing device. A big reason is the OS itself.

I suspect it will be the same with the Vision Pro. It will be awesome for a limited number of use cases and mediocre or just bad for the rest.

Until I can plug an Apple Vision headset directly into my MacBook for power and zero latency monitor extension, I’ll be holding off on getting one. I’m very excited about the future potential of all this tech though, especially after trying AVP myself.
Having a portable Mac virtual environment with any number of monitors, zero latency, HDR, ultra high def, etc is really the most appealing use case for me. I'm sure many digital nomads would pay anything to have that.
 
We have a couple of iPads at home (including a Pro) and they definitely do not replace laptops in 90% of use cases.

The iPad is great for a number of tasks but it's terrible as a general computing device. A big reason is the OS itself.

I suspect it will be the same with the Vision Pro. It will be awesome for a limited number of use cases and mediocre or just bad for the rest.
Sure, same hardware here. But notice how, first, the hardware increasingly overlaps and, second, the use cases increasingly do so as well.

Care to bet that in 5 years there won't be something called an iPad that can run a full version of VS Code and Logic, while the "general purpose" laptops will stay ahead on ML and AR-linked use cases?
 
TB5 goes up to 120Gbps. I'm guessing that's unidirectional, since it provides 80Gbps bidirectional.

There have been headsets with belt-mounted processors - I think the ill-fated, massively-hyped Magic Leap headset had one. Given where Apple wants to get with this, as quickly as possible, even the battery pack and cable are not ideal.

IMO there is only one “killer app” needed to win this game, and that is form factor. Whoever is first to come up with the Warby Parkers that bear something like AVP functionality becomes the first mover. If I were at Apple, at this point most of my R&D energy would be in wireless bandwidth and power. I’d be designing a chipset to enable the broadest possible short-distance signal transmission via a customized multiplex Bluetooth protocol or similar; for the headset, the most efficient possible transceiver, low-energy translucent displays, inductively chargeable. Doesn’t need to match the immersion factor of the AVP. Just be good enough for daytime AR.
This is absolutely where Apple wants to be, as quickly as possible. It's what they were working on for years before deciding that the tech wasn't close enough and changing course to Vision Pro to get a product into the market and start iterating based on what they find out.

We have a couple of iPads at home (including a Pro) and they definitely do not replace laptops in 90% of use cases.

The iPad is great for a number of tasks but it's terrible as a general computing device. A big reason is the OS itself.

I suspect it will be the same with the Vision Pro. It will be awesome for a limited number of use cases and mediocre or just bad for the rest.
The iPad is always potentially a replacement for just about anything - not too many years ago, I thought the future of creative workflows would be an iPad connecting to local network data, with the user sitting in a comfortable chair instead of chained to a desk. The big tech players all went to the cloud instead of local, and then Apple Silicon made working with a Mac much more pleasant from a noise and heat perspective, and that possible future evaporated (at least in my case).

If iPadOS had improved faster, maybe I'd feel different, but touch/pencil requires affordances and makes precision difficult (and even an iPad with a trackpad still has things scaled for touch), and for the work I do most often, those are the things I need, so being at my (low-tech sit/stand) desk is still the best option.

With Vision Pro, precision won't be an issue - literally anything you look at can be selected instantly - but how that will actually feel in practice remains to be seen. Some have already noticed that only being able to interact with what you're actively looking at takes a lot of getting used to, and text input of any length is going to rely on either dictation (which apparently works very well, at least for shorter things) or connecting a bluetooth keyboard.

While they already have effectively latency-free remote control of a Mac in place (even over non-exotic wifi), it's limited to one 4k-equivalent display, and you have to deal with constant context switching, with the Mac still using Mac input methodology (keyboard, trackpad, click and drag, etc), while everything else uses the native VisionOS inputs. It will be interesting to see how people adjust over time.
 
Care to bet that in 5 years there won't be something called an iPad that can run a full version of VS Code and Logic, while the "general purpose" laptops will stay ahead on ML and AR-linked use cases?
Whether an iPad ever runs a development environment is entirely up to Apple - to this point, they have shown no interest in doing that (beyond Swift Playgrounds). I've already seen one developer lament that even the Vision Pro is a device to be developed for, when it could revolutionize development if it was something you could develop on (infinite canvas for windows, etc), but that seems like an entirely different product at this point.

A developer really needs a non-sandboxed environment, and at this point, the Mac is the only one that Apple is offering.
 
Sure, same hardware here. But notice how, first, the hardware increasingly overlaps and, second, the use cases increasingly do so as well.

Care to bet that in 5 years there won't be something called an iPad that can run a full version of VS Code and Logic, while the "general purpose" laptops will stay ahead on ML and AR-linked use cases?
I don't know.

I would have agreed with you 10 years ago but over time Apple has shown it's not interested in making the iPad a general purpose device.

Sure you can run a code editor but you need more than that to develop software.

The iPad is always potentially a replacement for just about anything
Potentially, yes.

But I think we would be both extremely surprised if Apple allowed macOS (or something similar) to run on an iPad :)
 
There have been headsets with belt-mounted processors - I think the ill-fated, massively-hyped Magic Leap headset had one
The head of the company sneered at keyboards as "legacy devices."

That was when I knew his product was bullshit.

We have a couple of iPads at home (including a Pro) and they definitely do not replace laptops in 90% of use cases.
For me it's the other way around: I all but stopped using my iPad when I got an 11" MacBook Air! Why do I need an iPad when I can carry around a real computer the same size?

But I'm not sure that means the Vision Pro will follow the same path.
 
That was a seat-of-the-pants hunch from late last night, but you inspired me to look up the actual resolution (~3800x3000, so between 4k and 5k per eye) and calculate things out. The Vision Pro will top out at either 90 or 96Hz (the latter for 24fps media), so without compression (unknown at this point), 32-bit-per-pixel video data alone comes out to 61-65Gb/s, or 50%+ more than TB4/USB4's 40Gb/s.

I'm sure there is some compression going on, but the video data isn't all that would need to move back and forth.
Appreciate it's all back-of-envelope, but the calculation isn't that straightforward. Luckily, the standards have done it for us. I was assuming UHBR20 tunnelled over the USB4 (I'm assuming v2.0 here, BTW) fabric - not alt mode - since, as you say, there's other data that needs to move. The DP2.1 standard says that'll give you 92Hz for 8K uncompressed (what gets called "32-bit colour" is really 24 bits/pixel), so that seems in the right area.

That's just to illustrate practicality: in reality you wouldn't completely fill the bus with video, but you also wouldn't send the data uncompressed... and by definition, the two panels are very highly correlated. (In even-more-real reality :) I'd expect to send a model and render the panels cheaply in situ.)

While it might be technically possible to do this over USB4, it would be a very heavy lift - better to keep everything in close proximity.
Not sure I get this argument. If the data fits in the spec, then it's done - it's not like the cable will get tired... unlike the human carrying hot/heavy electronics on their head. Human perception is incredibly slow compared to the electronics, which is why the Apple SoC isn't a good analogy: its components are interacting at low latency and high bandwidth, while a person... doesn't. [Edit: although, I like to think I have my moments... ;)]
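The arithmetic in this exchange is easy to check. The sketch below assumes the ~3800x3000-per-eye figure and 90 Hz quoted above, plus DP 2.1's nominal UHBR20 rate (4 lanes of 20 Gbit/s with 128b/132b encoding); none of these are confirmed Vision Pro specifications, just the thread's working assumptions:

```python
# Back-of-envelope display bandwidth check. All figures are the thread's
# assumptions, not confirmed Vision Pro specifications.

EYES = 2
WIDTH, HEIGHT = 3800, 3000   # approximate per-eye panel resolution
REFRESH_HZ = 90

def raw_gbps(bits_per_pixel):
    """Uncompressed video bandwidth in Gbit/s for both eyes."""
    return EYES * WIDTH * HEIGHT * bits_per_pixel * REFRESH_HZ / 1e9

# "32-bit colour" framing vs. the 24 bits/pixel that actually carry colour
print(f"raw @ 32 bpp: {raw_gbps(32):.1f} Gbit/s")   # ~65.7
print(f"raw @ 24 bpp: {raw_gbps(24):.1f} Gbit/s")   # ~49.2

# Links to compare against
USB4_GBPS = 40                        # TB4 / USB4 v1
# DP 2.1 UHBR20: 4 lanes x 20 Gbit/s, minus 128b/132b encoding overhead
UHBR20_PAYLOAD = 4 * 20 * 128 / 132   # ~77.6 Gbit/s of payload
print(f"UHBR20 payload: {UHBR20_PAYLOAD:.1f} Gbit/s")
```

So the 32-bpp framing does indeed overflow a 40 Gbit/s TB4/USB4 link, while at 24 bits/pixel the stream would fit within UHBR20's payload rate, consistent with both posts.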
 
There is more money to be made by keeping the devices separate.
The more people who can get away with using just an iPad, the lower the sales of MacBooks.
Never underestimate the cynicism of large corporations.

One company that could have merged the two form factors was Microsoft, because they didn't have a large hardware market to lose.
Windows 8 had a great idea behind it, but the implementation killed off any chance it had, I suspect.
 
Not sure I get this argument
Again again again: why would they have to use *any* spec? The data flows over circuit board traces, so why can't it flow down a wire instead?

Yeah there would be some additional latency, but it doesn't seem like it would matter for an I/O device (meaning the headset) the same way it does with memory being next to a processor.

There is more money to be made by keeping the devices separate.
The more people who can get away with using just an iPad, the lower the sales of MacBooks.
Never underestimate the cynicism of large corporations.
I dunno. Is there more money to be made that way?
 
I use these devices:

Android phone - 6.5" screen.
IPad mini - 8" screen.
Laptop - 14" screen.
Desktop - 32" screen.

They are all used very regularly except for the laptop.
But when I need it, it's because it is easily the best device for the task.

The AVP isn't on my radar any time soon.
 
Potentially, yes.

But I think we would be both extremely surprised if Apple allowed macOS (or something similar) to run on an iPad :)
This has been put out there as a potential (there's that word again) differentiator for the iPad Pro line - let them run macOS as a virtual machine. The chips can do it; all you'd need is the will and the ability to weather the wailing and gnashing of teeth over the price of a 13" iPad Pro with M3, 24GB of RAM, and 2TB of storage.
 
I dunno. Is there more money to be made that way?
Potentially there is, but it depends on which numbers you're looking at.
As someone who only has one Apple device, a lowly base iPad Mini 5th gen, I'm not the target market.
But for people who have an iPad Pro and a MacBook Pro, there is room for a loss to Apple if these types of users consolidate to one device.

I was being somewhat tongue in cheek with regard to the cynicism comment because that seems to be Apple's default position.
But in this context, I think there are very good reasons for keeping the two lines separate.
Not that there's any chance of my iPad Mini replacing my laptop.
If anything, a folding phone will replace my tablet, not that I am even vaguely considering this right now.
 
Again again again: why would they have to use *any* spec? The data flows over circuit board traces, so why can't it flow down a wire instead?

Yeah there would be some additional latency, but it doesn't seem like it would matter for an I/O device (meaning the headset) the same way it does with memory being next to a processor.
Nobody really invents their own spec for this kind of thing any more - there are industry standards that have come into being for a reason (largely cost and part sourcing - nobody wants to reinvent every wheel and pay someone to make it for them). You can do whatever you want on a circuit board within your available PCIe lanes and memory buses, but once data leaves that environment over a port that connects to a wire, you're limited by what standards are available to carry that data, as there are chipsets needed to handle it and cable that needs to be specced.

Also, latency is a big deal with headsets - the farther you get from real-time response, the greater the chance that users will quickly get nauseous.
 
Nobody really invents their own spec for this kind of thing any more - there are industry standards that have come into being for a reason (largely cost and part sourcing - nobody wants to reinvent every wheel and pay someone to make it for them). You can do whatever you want on a circuit board within your available PCIe lanes and memory buses, but once data leaves that environment over a port that connects to a wire, you're limited by what standards are available to carry that data, as there are chipsets needed to handle it and cable that needs to be specced.
That makes even less sense to me. The data is going over short wires already. They don't need no stinkin' spec to extend them. Capacitance in the cable could be an issue, but that's just a matter of boosting the signal - hardly a major engineering problem.

Of course they don't want to add costs, but if the device's weight turns out to tank the product, what's a cable compared to every other custom component they had to invent? It doesn't even have to be detachable.


Also, latency is a big deal with headsets - the farther you get from real-time response, the greater the chance that users will quickly get nauseous.
We're talking about half the speed of light, aren't we? Electricity travels down a wire fast enough to go around the earth about 3-1/2 times in a second.

Again, I don't get it!
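The propagation-delay half of this argument checks out: the wire itself adds nanoseconds. A rough sketch, assuming a 1.5 m cable and a signal speed of about 0.7c (both illustrative numbers, not anything Apple has published):

```python
# Cable propagation delay vs. a headset frame interval (illustrative numbers).

C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.7    # typical signal speed in copper, as a fraction of c
CABLE_M = 1.5            # assumed headset-to-host cable length

prop_delay_s = CABLE_M / (VELOCITY_FACTOR * C)
frame_s = 1 / 90         # one frame at 90 Hz

print(f"propagation delay: {prop_delay_s * 1e9:.1f} ns")   # ~7.1 ns
print(f"frame interval:    {frame_s * 1e3:.1f} ms")        # ~11.1 ms
print(f"ratio: 1 : {frame_s / prop_delay_s:,.0f}")
```

So any real latency added by moving the processor off the head would come from serialization, encoding/decoding and buffering at each end of the link, not from the wire; that budget, rather than propagation, is presumably what the nausea objection is about.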
 
Again again again: why would they have to use *any* spec? The data flows over circuit board traces, so why can't it flow down a wire instead?
You need *a* specification, otherwise you're just exchanging noises on a wire... they could design their own, but I just reached for an obvious one that would do as a proof-of-concept. (There are also half-way options: DisplayPort can be run at higher data rates than I gave, e.g., if you fiddle with details.)

It's actually really hard to take data off circuit board traces and onto a wire for lots of reasons: you could go down the Wikipedia rabbit-hole at clock skew, for example, and we'll see you in a few weeks :)
 