May 7 “Let Loose” Event - new iPads

I was wondering if it might improve the Apple Pencil experience; people pay a lot of money to add textured covers to their iPads to get a more paper-like writing surface.



I was wondering the same; it's not clear. The only thing I can go on is that they described the CPU as "brand new" but the GPU as "building off the M3". So I lean towards a brand-new CPU design, but that could just be video fluff, or maybe it's a brand-new design that's still 9-wide on decode. We won't know until launch.
I assume this is using the same cores we will see in the A18, which are likely new. I doubt they are wider, but they may have deeper reorder buffers and more flexibility in pipeline assignments and in what is allowed to run in parallel. Add some more memory bandwidth and maybe some ARMv9 support into the stew. Probably not a clean-sheet redesign.
 
I was wondering if it might improve the Apple Pencil experience; people pay a lot of money to add textured covers to their iPads to get a more paper-like writing surface.
That's a good point. Plus, given that it's meant to be touched, it might be a different type of nanotexture from that used in the ASD and XDR.
 
I was wondering the same; it's not clear. The only thing I can go on is that they described the CPU as "brand new" but the GPU as "building off the M3". So I lean towards a brand-new CPU design, but that could just be video fluff, or maybe it's a brand-new design that's still 9-wide on decode. We won't know until launch.

It would be shocking if they were able to deliver a new µarchitecture within half a year. The CPU is likely the same, maybe with some balancing tweaks to take better advantage of the wider backend (it doesn’t seem that the M3 gets too much out of it). Would be interesting to see. I’m most curious about the new “ML accelerators” though, which could mean redesigned AMX units.
 
By the way, I think by far the most impressive tech in the new iPads is the display. What a brilliant and crazy solution. I can’t even fathom the complexity that must be behind aligning and synchronizing the two panels.
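For context on the AMX speculation above: public reverse-engineering write-ups describe Apple's AMX as an outer-product engine, accumulating the outer product of an X and a Y vector register into a grid of Z accumulators. A rough numpy sketch of that single step, with lane count and register names purely illustrative (not Apple's documented interface):

```python
# Illustrative sketch of an AMX-style outer-product-accumulate step,
# based on public reverse-engineering write-ups; sizes and names here
# are my own placeholders.
import numpy as np

LANES = 16  # assumed vector width for the sketch

x = np.random.rand(LANES).astype(np.float32)    # one X register
y = np.random.rand(LANES).astype(np.float32)    # one Y register
z = np.zeros((LANES, LANES), dtype=np.float32)  # Z accumulator grid

# A single multiply-accumulate op: Z += outer(X, Y). Repeating this
# over the rows/columns of two matrices builds up a full matrix
# multiply, which is why BLAS-style code benefits from such a unit.
z += np.outer(x, y)
```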
 
It would be shocking if they were able to deliver a new µarchitecture within half a year. The CPU is likely the same, maybe with some balancing tweaks to take better advantage of the wider backend (it doesn’t seem that the M3 gets too much out of it). Would be interesting to see. I’m most curious about the new “ML accelerators” though, which could mean redesigned AMX units.

At this point I believe they have multiple teams working in parallel, so it very well could be a new microarchitecture. I wouldn’t expect extreme differences, though, as they likely try to leverage things where they can. Baby steps, taken more often.
 
It’s LPDDR5X I think.
Found this from Ryan Smith at AnandTech:

"M4 Memory: Adopting Faster LPDDR5X

Last, but certainly not least, the M4 SoC is also getting a notable improvement in its memory capabilities. Given the memory bandwidth figures Apple is quoting for the M4 – 120GB/second – all signs point to them finally adopting LPDDR5X for their new SoC.

The mid-generation update to the LPDDR5 standard, LPDDR5X allows for higher memory clockspeeds than LPDDR5, which topped out at 6400 MT/second. While LPDDR5X is available at speeds up to 8533 MT/second right now (and faster speeds to come), based on Apple’s 120GB/second figure for the M4, this puts the memory clockspeed at roughly LPDDR5X-7700.

Since the M4 is going into an iPad first, for the moment we don’t have a proper idea of its maximum memory capacity. The M3 could house up to 24GB of memory, and while it’s highly unlikely Apple has regressed here, there’s also no sign whether they’ve been able to increase it to 32GB, either. In the meantime, the iPads Pro will all either come with 8GB or 16GB of RAM, depending on the specific model."
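A quick back-of-the-envelope check of that estimate (my own arithmetic, not AnandTech's, and it assumes the M4 keeps the M3's 128-bit memory bus):

```python
# Sanity-check the LPDDR5X estimate, assuming a 128-bit bus as on the M3.
bus_bytes = 128 // 8                    # 16 bytes moved per transfer
bandwidth = 120e9                       # Apple's quoted 120 GB/s

mt_per_s = bandwidth / bus_bytes / 1e6  # mega-transfers per second
print(f"~{mt_per_s:.0f} MT/s")          # ~7500 MT/s

# Going the other way: LPDDR5X-7700 on a 128-bit bus is
# 7700e6 * 16 bytes ≈ 123.2 GB/s, which rounds down to Apple's "120GB/s".
```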


 
We don’t know yet, but if, as suspected, the M4 GPU is a very minor upgrade, then it will be the third generation without a significant performance improvement in raw compute or raster. Certainly the M3 saw a big change in architecture with ray tracing, Dynamic Caching, and mesh shaders, but we may be on the third iteration of mid-40,000 Geekbench scores.

Anyone concerned about this?
 
We don’t know yet, but if, as suspected, the M4 GPU is a very minor upgrade, then it will be the third generation without a significant performance improvement in raw compute or raster. Certainly the M3 saw a big change in architecture with ray tracing, Dynamic Caching, and mesh shaders, but we may be on the third iteration of mid-40,000 Geekbench scores.

Anyone concerned about this?

I’m not. I think Apple’s progress in the GPU department has been excellent and that they are moving towards a very powerful architecture. I also believe we might see the M5 earlier than some expect.

What does surprise me is that Apple pushes ML forward with new NPU and AMX units, but not with the GPU. That could still come in the next generation; interesting nevertheless.
 
We don’t know yet, but if, as suspected, the M4 GPU is a very minor upgrade, then it will be the third generation without a significant performance improvement in raw compute or raster. Certainly the M3 saw a big change in architecture with ray tracing, Dynamic Caching, and mesh shaders, but we may be on the third iteration of mid-40,000 Geekbench scores.

Anyone concerned about this?
On the contrary, lately the addition of new features has been, if anything, more impressive than having raw performance numbers go up.

And a few years from now, having all their devices of the last N years with extensive feature support will be more useful to lure game studios in than the raw power of the phone (within reason). It's not economically feasible to build for the latest phones only, not even for the latest iPhones. And having to support old GPUs with missing features is painful.
 
On the contrary, lately the addition of new features has been, if anything, more impressive than having raw performance numbers go up.

And a few years from now, having all their devices of the last N years with extensive feature support will be more useful to lure game studios in than the raw power of the phone (within reason). It's not economically feasible to build for the latest phones only, not even for the latest iPhones. And having to support old GPUs with missing features is painful.
Yeah, the massive architectural improvements are far more important than improving performance. It’s easier to add more GPU cores and speed up their clocks than it is to completely change their architecture. And the changes they have made to reduce the system-power impact of using the GPUs will allow them to beef up the clocks/parallelism without a crazy power budget.
 
Ugh ... I'm exhausted, so of course I stupidly responded to trolls on AnandTech and got stuck arguing that, no, the M3 Max CPU really doesn't draw 100 watts by itself.


Basically it's a wall-power measurement on Cinebench R15, which, yes, gets to 93W (with an external screen). I'm trying to patiently explain that wall-power measurements can vastly inflate power draw compared to software measurements like powermetrics (Mac) and HWiNFO (PC), so comparing a wall measurement (93W) against an eco-mode rating of 65W for the 7950X (not even any kind of measurement) is nonsense. And when measured at the wall in eco mode, the 7950X is, guess what, higher than the highest power measurement I've ever seen for the M3 Max! Never mind that comparing the 7950X to the M3 Max even at comparable power is nonsense, because one is a bigger chip with vastly more threads that doesn't go into laptops, not even in eco mode, and the M3 Max does pretty damn well even with that. You know how a Threadripper Pro at 105W would beat the pants off a 7950X at 105W?

Sorry rant over.
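For anyone who wants to see the software-side number being argued about here: on an Apple Silicon Mac, powermetrics reports package-level power directly. A minimal sketch (mine, not from that thread); the exact output fields vary across macOS versions, so treat the parsing as illustrative:

```python
# Grab one 1-second sample of CPU package power via powermetrics (needs sudo).
# Unlike a wall meter, this excludes the display, PSU losses, RAM, SSD, etc.,
# which is why wall figures run higher.
import re
import subprocess

out = subprocess.run(
    ["sudo", "powermetrics", "--samplers", "cpu_power", "-i", "1000", "-n", "1"],
    capture_output=True, text=True, check=True,
).stdout

# On recent macOS the sample includes a line like "CPU Power: 4321 mW";
# the exact label is an assumption, so adjust if your version differs.
m = re.search(r"CPU Power:\s*(\d+)\s*mW", out)
if m:
    print(f"CPU package power: {int(m.group(1)) / 1000:.2f} W")
```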
 
Ugh ... I'm exhausted, so of course I stupidly responded to trolls on AnandTech and got stuck arguing that, no, the M3 Max CPU really doesn't draw 100 watts by itself.


Basically it's a wall-power measurement on Cinebench R15, which, yes, gets to 93W (with an external screen). I'm trying to patiently explain that wall-power measurements can vastly inflate power draw compared to software measurements like powermetrics (Mac) and HWiNFO (PC), so comparing a wall measurement (93W) against an eco-mode rating of 65W for the 7950X (not even any kind of measurement) is nonsense. And when measured at the wall in eco mode, the 7950X is, guess what, higher than the highest power measurement I've ever seen for the M3 Max! Never mind that comparing the 7950X to the M3 Max even at comparable power is nonsense, because one is a bigger chip with vastly more threads that doesn't go into laptops, not even in eco mode, and the M3 Max does pretty damn well even with that. You know how a Threadripper Pro at 105W would beat the pants off a 7950X at 105W?

Sorry rant over.
Oh man, been there. Not on AnandTech; I think I posted about my surprise at how bad they were there. It’s such a tough line for me: engage and get stressed, or ignore it and suffer the nagging feeling that I have to wade in!

Edit: Damn, just read some of it. That Terry person. The irresistible combination of condescension and ignorance. Hard to resist. Just ignore it and get some sleep. You can’t help them.

Edit 2: Now I see Maynard is joining in. Ahhhhhhhhhh
 
Oh man, been there. Not on AnandTech; I think I posted about my surprise at how bad they were there. It’s such a tough line for me: engage and get stressed, or ignore it and suffer the nagging feeling that I have to wade in!

Edit: Damn, just read some of it. That Terry person. The irresistible combination of condescension and ignorance. Hard to resist. Just ignore it and get some sleep. You can’t help them.

Edit 2: Now I see Maynard is joining in. Ahhhhhhhhhh
Thanks. To be fair, the pro-Apple troll lemurbutton was being an ass - he wasn’t completely wrong, but his comment was needless flamebait, and that’s not the first time he’s done that. He consistently and deliberately provokes other trolls. Most of the time I ignore it all relatively easily, but this time I’m so tired that his antagonist trolls provoked me by spouting even worse idiocy. Were I a moderator over there, I’d nuke the whole thing. Possibly literally, just to make sure.
 
Thanks. To be fair, the pro-Apple troll lemurbutton was being an ass - he wasn’t completely wrong, but his comment was needless flamebait, and that’s not the first time he’s done that. He consistently and deliberately provokes other trolls. Most of the time I ignore it all relatively easily, but this time I’m so tired that his antagonist trolls provoked me by spouting even worse idiocy. Were I a moderator over there, I’d nuke the whole thing. Possibly literally, just to make sure.
I didn’t read the entire thread, just a few posts from your link, and had to stop after I felt my blood pressure rise dangerously!
 
I have so many questions after today's announcement - mainly about the OLED screen itself. I've flip-flopped back and forth on whether I like the idea of an OLED screen for a tablet/laptop-sized device when the risk of burn-in is constantly in the back of my mind.
Yes - I know that there are technologies built into modern OLED TVs, like slight pixel shifting (sketched below) and dimming of logos, to reduce the impact of burn-in. However, OLED at larger screen sizes also comes with other trade-offs, like issues with VRR and dimming after extended periods of time (e.g. watching a sports game). I'm interested to know if Apple solved these issues. I still really like the mini-LED approach on the M2 iPad and M1 iPad.
Looking forward to a good teardown!

On a side note, I couldn't help but get even greater wishful pangs for a dual-booting iPadOS and macOS experience with the Magic Keyboard. If Apple offered that (similar to Samsung DeX), I'd have my order in in a heartbeat. Maybe WWDC will be good to me this year :D
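Following up on the pixel-shifting mention above, here is a toy sketch of the orbiting idea (my own illustration; real panel logic is surely more sophisticated): the frame is periodically redrawn at a tiny offset so static UI doesn't age the same subpixels.

```python
# Toy "pixel orbiting" sketch: cycle the frame origin through a small
# orbit so static content doesn't sit on the same OLED subpixels.
# The 1-px orbit and 3-minute period are made-up illustrative values.
ORBIT = [(0, 0), (1, 0), (1, 1), (0, 1)]

def frame_offset(minutes_on: int, period_min: int = 3) -> tuple[int, int]:
    """Return the (x, y) offset at which to draw the frame."""
    step = (minutes_on // period_min) % len(ORBIT)
    return ORBIT[step]

print(frame_offset(7))  # after 7 minutes -> step 2 -> offset (1, 1)
```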
 
Oh man, been there. Not on AnandTech; I think I posted about my surprise at how bad they were there. It’s such a tough line for me: engage and get stressed, or ignore it and suffer the nagging feeling that I have to wade in!

Edit: Damn, just read some of it. That Terry person. The irresistible combination of condescension and ignorance. Hard to resist. Just ignore it and get some sleep. You can’t help them.

Edit 2: Now I see Maynard is joining in. Ahhhhhhhhhh
I jumped in early too. Mainly just the one guy. But he’s stubborn.
 