Holographic Video Calls

RockRock8

youtube .com/watch?v=KbZfbqHeJNU


Fresh off my scathing critique of Mark Gurman (which, I assure you, doesn't even begin to cover the issues with his work), I'll now move on to critiquing the next false narrative he and others have been pushing constantly of late: "Apple has no clue about AI."

First, "AI" is a complete made up term as it's being used today. What is it actually?

People often use it to describe machine learning, specifically neural networks, and more specifically transformer models. These are often deployed as chatbots to give the appearance of intelligence (the language fallacy: if it speaks, it must therefore be intelligent).

Transformer models are a newer, highly specialized form of neural network, and while they aren't the focus of what I'm writing, I needed to clarify that before I continue.

Apple may not yet excel at transformer models (though it is catching up, judging by its latest releases), but what it does excel at is machine learning, which is arguably far more important to technology and its role in our lives than transformer models alone. That is because machine learning encompasses a broad range of "intelligent" models, not only transformers.

This brings me to a very underrated, understated, and (until now) rarely discussed brand-new UI: spatial. Its hallmark is blending digital content into your real world. Not only is this UI possible only through machine learning, but the experiences it enables are simply impossible on any other interface (GUI, etc.).

One of the hallmark first features of spatial is the ability to do holographic video chats.

Simply put, you are no longer limited by distance when you want to be in the same room as your friends and family. They can appear in the same room as you, even though you're thousands of miles apart.

Using the array of sensors Apple custom-designed and engineered, the device 3D-scans your face. This scanning process already uses machine learning to coordinate the sensors and to tell you what to do (by voice direction).

It goes further. Contrary to what people usually assume, Apple does not exactly create an "avatar" of you; that is, it does not scan your face and then derive a cartoon likeness of you.

Instead, in a world first, it scans your face and upper torso on device using LiDAR, high-definition cameras, and the Face ID sensors; then, using a sophisticated machine learning pipeline of over 12 highly advanced neural networks working in tandem to drive a photorealistic rendering technique called Gaussian splatting, it digitally recreates you in three dimensions as if you were really there. A real hologram.
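
If "Gaussian splatting" sounds abstract: the published technique stores a scene as a huge number of translucent, colored Gaussian blobs, and renders a pixel by blending whichever blobs cover it, sorted front to back. Here's a toy Swift sketch of just that blending step; this is the generic idea from the research literature, not Apple's private pipeline, and every value in it is invented:

Code:
import Foundation

// Toy 2D "splat". In real Gaussian splatting these are 3D Gaussians with full
// covariance matrices that get projected onto the screen; here each one is
// already projected and isotropic so the core idea stays visible.
struct Splat {
    var center: SIMD2<Double>   // screen-space position
    var sigma: Double           // spread (standard deviation) in pixels
    var color: SIMD3<Double>    // RGB in 0...1
    var opacity: Double         // peak alpha in 0...1
    var depth: Double           // distance from the camera, used for sorting
}

// Shade one pixel by alpha-compositing the splats that cover it, front to back.
func shade(pixel: SIMD2<Double>, splats: [Splat]) -> SIMD3<Double> {
    var color = SIMD3<Double>(0, 0, 0)
    var transmittance = 1.0   // how much light still passes through this pixel

    for splat in splats.sorted(by: { $0.depth < $1.depth }) {
        let d = pixel - splat.center
        // Gaussian falloff: strongest at the blob's center, fading smoothly outward.
        let falloff = exp(-(d.x * d.x + d.y * d.y) / (2 * splat.sigma * splat.sigma))
        let alpha = splat.opacity * falloff
        color += transmittance * alpha * splat.color
        transmittance *= (1 - alpha)
        if transmittance < 0.001 { break }   // pixel is effectively opaque; stop early
    }
    return color
}

// Two made-up splats, roughly "skin" in front of "hair".
let splats = [
    Splat(center: [50, 50], sigma: 8, color: [0.9, 0.6, 0.5], opacity: 0.8, depth: 1.0),
    Splat(center: [54, 52], sigma: 5, color: [0.2, 0.2, 0.3], opacity: 0.9, depth: 0.5),
]
print(shade(pixel: [52, 51], splats: splats))

The real technique also optimizes each blob's position, shape, and color so the splats reproduce the captured images, which is what makes the result photographic rather than cartoonish.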

It goes further. Unlike Star Wars and virtually every other TV/movie depiction of holographic video calls, this has no artifacts. There is no "line scanning," no visual glitches, no odd coloring. It is you, simply represented as if you were really in front of your friends and family.

Not only is this possible, it is all created on device, never sent to any servers for processing, completely private, and the whole feat takes mere seconds.

Lastly, a wide array of cameras, lasers, and sensors on the outside and inside constantly tracks every single movement you make. If you smile, it can see that. If you raise your eyebrows, it can see that. If you laugh with your teeth showing, it can see that. If you stick your tongue out, it can see that.
If you stand up, it can sense that. If you sit down, it can sense that. If you walk around, it can sense that. If you jump up and down, it can sense that. Everything you do, it sees and senses accurately.

Finally, using even more machine learning, it translates this computer vision and sensor fusion into real actions performed by your hologram. So if you smile, frown, laugh, smirk, jump, or move around, it replicates that exactly, as if you were doing it in the real world.
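
To make the "expressions drive the hologram" idea concrete, here's a rough Swift sketch using iPhone ARKit face tracking as a stand-in. Apple's actual Persona pipeline on Vision Pro is private (and works from the headset's own sensors, not a phone), so treat this purely as an analogy for the kind of signal involved:

Code:
import ARKit

// Rough illustration of expression tracking as a stream of named coefficients.
// (This uses iPhone ARKit face tracking as a stand-in; the Vision Pro Persona
// pipeline is not a public API and is certainly more involved than this.)
final class ExpressionTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        // Each blend shape is a 0...1 coefficient describing one facial movement.
        let smile = face.blendShapes[.mouthSmileLeft]?.doubleValue ?? 0
        let browRaise = face.blendShapes[.browInnerUp]?.doubleValue ?? 0
        let jawOpen = face.blendShapes[.jawOpen]?.doubleValue ?? 0
        // In a call, coefficients like these (not raw video) would drive the
        // rendered face on the other end.
        print("smile: \(smile), brow: \(browRaise), jaw: \(jawOpen)")
    }
}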

Sensor fusion and computer vision are done and interpreted within milliseconds, transmitted over the internet, and shown to the other person in what people have described as an 'instant, lag-free experience; it feels like you're really in the room with them.'
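
One way to see why this can feel lag-free: if what crosses the network every frame is a small bundle of expression and pose coefficients rather than video, the payload is tiny. A hedged Swift sketch; the field names are invented, not Apple's actual wire format:

Code:
import Foundation

// Toy per-frame update for a remote hologram. Real systems would use a compact
// binary encoding and many more parameters; this just shows the scale.
struct ExpressionFrame: Codable {
    var timestamp: TimeInterval
    var smile: Float
    var browRaise: Float
    var jawOpen: Float
    var headYaw: Float
    var headPitch: Float
}

let frame = ExpressionFrame(timestamp: Date().timeIntervalSince1970,
                            smile: 0.7, browRaise: 0.1, jawOpen: 0.3,
                            headYaw: -0.05, headPitch: 0.02)
let payload = try! JSONEncoder().encode(frame)
print("payload size: \(payload.count) bytes")   // on the order of 100 bytes, vs. megabytes for raw video frames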

The result is something that has never been accomplished before for consumers (or even enterprise): the ability to conduct holographic video calls, to be in the same room as your friends and family even if you aren't.

All of this is accomplished with a product that's as easy to use as tapping, available in the Apple Store today, for $3,500.
This feature has never been available to anyone before: consumers, enterprise, governments, military, etc. And yet it's so simple and easy to use that you just start a FaceTime call.
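
Even from a developer's point of view there isn't much more to it: an app just hands off to FaceTime. A minimal SwiftUI sketch, assuming the standard facetime: URL scheme behaves on visionOS the way it does on iOS and macOS (the address is a placeholder):

Code:
import SwiftUI

// Minimal sketch: the button asks the system to place a FaceTime call.
// Personas and spatial rendering are handled by the system from there.
struct CallButton: View {
    @Environment(\.openURL) private var openURL

    var body: some View {
        Button("FaceTime Alex") {
            if let url = URL(string: "facetime://alex@example.com") {
                openURL(url)
            }
        }
    }
}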

Apple is not behind in machine learning (or "AI"). It merely chooses to focus on and make products that enhance and enrich people's lives, not ones that destroy the earth and creativity.

No one else has done this before. Anything that has come close has been restricted to labs and never shipped to consumers. It was never done on device, nor was it as simple and easy to use as this. A world first.

A chatbot that tells you whatever you want to hear, or the ability to connect with your friends and family in ways literally never before possible.

What do you choose?


The first video above contains the world's first holographic interview with Apple executives (in depth). I recommend turning the video quality up if it isn't set automatically.

The second video below is from another YouTuber showing what it's like to use on his own and with his friends and coworkers. Turn on subtitles too.

A reviewer uses it with his friends:

youtube .com/watch?v=zChhRT-y08Q&t=760s

Set up:

twitter .com/M1Astra/status/1724539857752519028

I welcome all discussion: technical discussions around Apple machine learning related to this, discussion about the feature, utility, etc.
 
That’s a lot! I am hoping that after I get my eye surgeries this month that things like this will be useful to me. I haven’t even tried Vision Pro because my vision is so terrible.
 
I'm sorry to hear you need eye surgery. Hopefully it's not too invasive for you!

As a side note, Apple supports prism prescriptions. They updated the OS to add support for prisms, which I also believe is an industry first.

What do you think of the videos, if you have watched them?
(The Apple interview is the longest at 20 minutes, the second is the shortest at 5 minutes, and the third is a brief setup video demonstrating how quickly you can get to this advanced UX.)
 
I feel like we live in Star Trek. I just wish FaceTime (and this new stuff) was more prevalent in business. I spend a lot of time on Zoom and Teams calls, and this all just shows it could be so much better.
 
Yeah. And this is only just the beginning of it all. The rapid improvement in such a short time is remarkable.

I've also read from multiple reviewers that it actually looks better when you use it: the resolution looks even higher, skin texture is more accurate, and obviously it's 3D when you use it, not just 2D like in the video we watched.
 
I hope they figure out how to get the hardware much smaller and lighter. It’s the kind of thing that tempts me purely from a tinkering perspective - I’d have great fun writing apps for it, etc. - but I’m guessing I wouldn’t get enough use out of it to justify the price. Would be so nice for work, if everyone else used them too.
 
Funnily enough, when I was looking up reviews of the new Dual Knit band, I found this review by a lawyer (I believe you're a lawyer?):

iphonejd. com/iphone_jd/2025/11/review-dual-knit-band-superior-band-for-the-vision-pro.html

Out of all the reviews of the new band I've read, both professional and user, I think only two said they prefer something else that Apple has shipped or that they bought. (It's also been sold out since launch.)

Comfort seems much improved: not so much that it feels like you're not wearing anything (obviously), but enough that people can keep it on for a while.
 
I was featured on iphonejd once (back in the day when I wrote apps for lawyers). I was a CPU designer before I became a lawyer.

I think the main issue is that because you wear it, and it’s heavy, and has these straps and yadda yadda, there’s friction involved. It’s not like an iPad that you just pick up and put down. I think the magic will truly happen if they ever get most of the Vision Pro’s functionality into something much closer to regular eyeglasses in form factor.
 
Oh wow that's funny, I didn't know that!

I understand that.

It's unlikely to happen soon; if and when it does, expect it from Apple. I'll use Facebook as an example (relevant given recent news).

Facebook's glasses with a monocular display (like Google Glass) are not just underpowered compared to Apple's displays and OS; the way the display is designed in that product actually forces you to shift focus between the display and real life.

Practically, what this means (per trusted reviewers I've read and watched who actually own both products) is that you cannot do two things at once. You're forced either to engage with the content on the display, losing depth perception and attention to the real world, or to keep your depth perception and look around normally, but lose the benefit of the display.

Facebook has zero engineering capabilities to even pull something like this off.

To put this into further context, Facebook spent $100 billion on their headset efforts (including metaverse).

Meanwhile, Apple produced the first-ever holographic communication (unlike the stupid cartoon avatars we've seen from Facebook), not to mention the entire OS, product, and displays, for only $7 billion.
 
Google has released its attempt at Personas with a poor copycat. From what I can find, it only has a 2D mode, no spatial FaceTime.

youtube .com/watch?v=Q3-8nwiLC98


Personas and Spatial Personas look remarkably better to me, particularly spatial personas.

Brian Tong's interview with Apple in spatial FaceTime:

youtube .com/watch?v=KbZfbqHeJNU

It's pretty clear from the moment she moves her face that Google scrapped whatever they worked on initially and had to start over when Apple updated spatial Personas in the most recent OS version lol

Spatial FaceTime has been out nearly as long as Apple's original launch of everything, so the fact that it took Google this long is... entertaining (to me). I can't begin to describe all the features and details Google missed or is behind on, but suffice it to say Google has not come remotely close to matching FaceTime's quality of interaction, reliability, or feature set.

In other news, Apple today updated its (machine learning based) spatial accessory input to support the Muse stylus, and updated its (machine learning based) Travel Mode to include cars and buses, not only planes and trains.
 
I think it would take a special kind of bravery to wear one of these on a bus.
 
I know some people felt the same about AirPods Max.

The side benefit is that I've read it's way more stable overall since they added support for buses and cars. So it's a win all the way around, no matter where you choose to use it lol
 
I guess I could be called an Apple Vision Pro enthusiast. I bought mine not long after release in 2024, and I use it for a few hours almost every day for work and play. I've flown with it on multiple occasions – it's a great way to pass the time – but I haven't tried it in a car or train yet.

Personas are head and shoulders above the way they were previously. (Yes, that was intentional.) I've done a couple of Zoom meetings with mine, including one where I didn't tell the other participants that they were looking at a render. They claimed they didn't know until I told them, but honestly, it's not that difficult to tell, especially when the person uses their hands. For connecting with another person one-on-one, I think I'd still rather use conventional video.
 
Thanks for sharing your experience.

Regarding use cases for spatial FaceTime versus regular FaceTime: in a possible world where everyone has access to spatial FaceTime, if I want to hang out with friends or family, I'd choose spatial FaceTime. If I have something serious to discuss, or someone isn't feeling well, that's a use case for regular FaceTime.

Spatial personas are advanced enough to be authentic to you, which makes hanging out a lot of fun and makes it feel like you really are together even if you aren't (this is based on what I've read from other people). And of course, if someone is just making a quick FaceTime call to ask you something, spatial personas aren't some cartoon, so it doesn't seem stupid to just answer the call on the device, as opposed to something like Zuckerberg's metaverse avatars.

That's my opinion. What do you think of that lol
 
Of course I don't know your specific situation, Cliff - other than what you mention about needing eye surgery - but I've honestly found my AVP not that heavy (helped by the inclusion of an upper strap). In fact, after the first three weeks or so I hardly noticed the weight. I take that as a positive psychological side effect of my increasing engagement with the experience of using it.

For me it has done what I anticipated and hoped it might: that 'infinite' canvas is hard to beat! I can park documents, graphs, or pictures I only need to refer to occasionally above or below my main eye line, or to the side or behind me, then with a glance snap them into focus and easily bring them into my main workspace if needed.
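
For the developers reading, part of why that workflow holds up is that each parked document can just be its own SwiftUI window. A minimal sketch of the idea (the app name, window IDs, and file path below are all made up for illustration):

Code:
import SwiftUI

// Minimal multi-window sketch: a main workspace plus a "reference" window type
// that can be opened for any document and parked anywhere around you.
@main
struct CanvasApp: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            MainWorkspace()
        }
        WindowGroup(id: "reference", for: URL.self) { url in
            if let doc = url.wrappedValue {
                Text("Reference: \(doc.lastPathComponent)")
                    .padding()
            }
        }
    }
}

struct MainWorkspace: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Park the quarterly graph off to the side") {
            // Opens a second window the user can drag anywhere in the room.
            openWindow(id: "reference", value: URL(fileURLWithPath: "/tmp/q3-graph.pdf"))
        }
    }
}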

Increasingly, I use its virtual Mac feature to connect to either of my Macs, mostly for Xcode. So far I've been too busy to port more than a couple of my apps to it - nothing designed for its immersive capabilities yet - but they look good, to the extent it's something of a let-down coming back to my main (40-inch) monitor.

Which is to say I think you would have fun writing apps for it.

On a typical day I review and edit a lot of different kinds of information, and I think this is the AVP's superpower. Or one of them? I love it.
 