May 7 “Let Loose” Event - new iPads

Was wondering if I was seeing things, but apparently not.


When you use the new pencil and pick different tools, the iPad renders a 3-D shadow, taking into account your pencil's orientation, angle, and rotation, the tool you're using, etc. It's pretty wild. Subtle and not distracting, but quite the attention to detail.
 
If I were designing a CPU where, for some dumb reason, the cores were sufficiently heterogeneous that only certain instructions could execute on certain cores, and if I couldn't get the OS makers to deal with it properly in software, I guess there are a couple of things I could do. Most likely I would detect the illegal instruction during decoding and trap it myself, sending a message to a core-scheduling block that lives outside any core and coordinates shifting things between cores. I'd make sure cores are virtualized, so that the OS can only request certain properties (e.g. "priority/speed") when issuing threads, but cannot rely on the CPU actually picking any specific core. The CPU would dynamically move threads to cores that can handle them.

There would be a performance penalty, approximately equivalent to 2x a branch misprediction, each time a thread had to move, but any given thread would presumably only move once (because if the illegal instruction shows up in its instruction stream once, you have to assume it will happen again). You'd essentially flush the pipelines like in a branch mispredict, then write out the register file and program counter to the new core.
I think the first part is key: Intel painted itself into a corner with AVX512. Seems to me that the small cores were never going to be able to support it. It's ironic that Apple, which ships its own OS, went with a shared ISA that is more flexible in this regard: their P and E cores support the same instruction set. I can imagine Apple finding a way to support different ISA extensions per core (with some developer help, like GCD's priorities) if they needed to, but it's interesting that they didn't.

Or, if not going with the software-support route, Apple would also have the upper hand if they ever needed to trap certain instructions in the decoder and send them to specific cores like you describe. Current Apple Silicon processors don't depend as much on E cores to provide the performance for massively parallel workloads. Intel, however, as @mr_roboto mentioned above, does need its "Efficiency" cores for those kinds of tasks. So you would end up having to decide whether you want your massively parallel program running only on P cores (with AVX512) or on P and E cores (without AVX512). Or create N_P threads with AVX512 support and N_E threads without it and hope the CPU reschedules everything without too much performance loss. Not ideal in any scenario.
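Something like this toy model is what I picture for the trap-and-migrate idea in the quoted post. To be clear, every name here is made up for illustration; a real chip would do this in the decoder and a hardware scheduling block, not in software:

```swift
// Conceptual sketch of the trap-and-migrate idea from the quoted post.
// All types and names are hypothetical.

enum CoreType {
    case performance   // supports the full ISA (including the wide extensions)
    case efficiency    // supports only the baseline ISA
}

struct ThreadContext {
    var programCounter: UInt64
    var registers: [UInt64]
    // Once a thread traps on an extended instruction, pin it to capable cores:
    // if the instruction appeared once, assume it will appear again.
    var requiresFullISA: Bool = false
}

final class CoreScheduler {
    // Called when a core's decoder traps an instruction it cannot execute.
    // Models the "flush, write out register file and PC, move" sequence,
    // estimated above at roughly 2x a branch misprediction.
    func handleIllegalInstructionTrap(_ context: inout ThreadContext,
                                      on core: CoreType) -> CoreType {
        guard core == .efficiency else { return core }
        context.requiresFullISA = true
        // (flush pipeline, save architectural state, reschedule elsewhere)
        return .performance
    }

    // The OS only expresses a preference ("priority/speed"); it can never
    // rely on a specific core, so migration stays invisible to software.
    func placeThread(_ context: ThreadContext, preferFast: Bool) -> CoreType {
        if context.requiresFullISA { return .performance }
        return preferFast ? .performance : .efficiency
    }
}
```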
 
When you use the new pencil and pick different tools, the iPad renders a 3-D shadow, taking into account your pencil's orientation, angle, and rotation, the tool you're using, etc. It's pretty wild. Subtle and not distracting, but quite the attention to detail.
I wonder if third party apps also have access to this or if it's not a public API. Hmmm 👀
 
Not just me then! Phew.

I think the dismissal of the M4 results often boils down to…”I don’t like this”.
PC people have been crying about iOS Geekbench results for at least a decade, living in denial that Apple SoCs were actually good.

The M1 proved them wrong. For anyone who actually used iOS devices this was no surprise.
 
I wonder if third party apps also have access to this or if it's not a public API. Hmmm 👀
I'm hoping it's tied to the new SDKs for the pop-up tool palette, but I haven't seen anything about it yet. So far nothing that I have fully supports the new pencil other than Apple's stuff.
 
Had an eventful Apple support day. Brought my old iPad Pro in because of the bad USB port, and because they don't have any (it's an M1 - I think I mistakenly said it was an M2 somewhere in this thread), they're ordering one and will get it to me in a few days. No idea what I'll do with it. I told her I'd accept a new M2 iPad Air instead, but she was unamused.

Then I noticed I couldn't place outgoing telephone calls (using FaceTime) on my new iPad. So I tried using chat support, but messages to Apple wouldn't deliver. I found out iMessage is having a partial outage, and it's affecting the Apple support account (but not my account, apparently). Got a human being on the phone, and she had me toggle FaceTime and iMessage off, reboot, and turn them back on, and then outgoing calls worked. I should have realized that during initial setup I never saw the "do you want to set this up for calls? do you want to set your 911 location?" stuff. The transfer from my old iPad to my new iPad was a little bumpy because the new one gave up when the old one's iCloud backup was taking so long, so something might have gotten accidentally skipped.
 
Here is the post in the Kraken thread over on Ars in which an iPhone (8+) scores better than an i7, raising some iBrows over there. Jump to the end of the thread for a couple of M4 iPad scores.

Kraken (linked in the OP of that thread) is a JS-based processor speed test, so it is cross-platform, but it is also browser-dependent: some browsers generate higher scores than others on the same device.
 
Here is the post in the Kraken thread over on Ars in which an iPhone (8+) scores better than an i7, raising some iBrows over there. Jump to the end of the thread for a couple of M4 iPad scores.

Kraken (linked in the OP of that thread) is a JS-based processor speed test, so it is cross-platform, but it is also browser-dependent: some browsers generate higher scores than others on the same device.
Yeh the PC excuse is often “oh but iOS is heavily optimised and doesn’t need to support a heap of hardware”

Which is a bullshit excuse, of course, as it's not like every driver is loaded on a PC that only has a subset of the hardware.
 
Yeh the PC excuse is often “oh but iOS is heavily optimised and doesn’t need to support a heap of hardware”

Which is a bullshit excuse, of course, as it's not like every driver is loaded on a PC that only has a subset of the hardware.
Well, even if it wasn't bullshit, I don't see why that's a bad thing. I mean, if I have a certain job I need to do, and I can choose between a "heavily optimized" device and operating system that doesn't need to "support a heap of hardware" versus an unoptimized operating system that has the advantage of being able to run on lots of hardware I don't even own, why wouldn't I buy the former?
 
Yeh the PC excuse is often “oh but iOS is heavily optimised and doesn’t need to support a heap of hardware”

Which is a bullshit excuse, of course, as it's not like every driver is loaded on a PC that only has a subset of the hardware.
Does supporting additional hardware cause the OS to slow down even though not all of that hardware is installed? Drivers won't get installed or loaded if the hardware is not present.

Case in point: Linux supports tons of legacy hardware, but its benchmark results are usually better than those on Windows.
 
I'm hoping it's tied to the new SDKs for the pop-up tool palette, but I haven't seen anything about it yet. So far nothing that I have fully supports the new pencil other than Apple's stuff.
We'll probably find out at WWDC - if there's something, they'll probably release a developer video.
 
Well, even if it wasn't bullshit, I don't see why that's a bad thing. I mean, if I have a certain job I need to do, and I can choose between a "heavily optimized" device and operating system that doesn't need to "support a heap of hardware" versus an unoptimized operating system that has the advantage of being able to run on lots of hardware I don't even own, why wouldn't I buy the former?

Exactly. But it's frequently used as a PC fanboy coping mechanism to justify their hardware expense.

At the end of the day, I don't care if the hardware is 10x faster if the end result, once software is taken into account, is that it isn't any faster in the real-world applications I use.
 
Documentation for Apple Pencil Pro already exists. So far I've found the squeeze stuff here: https://developer.apple.com/documentation/applepencil/handling-squeezes-from-apple-pencil
And this might be how they figure out how to draw the shadow:

  • Leverage the hover pose of Apple Pencil to support more complex interactions in response to a double tap or squeeze. Information about the hover pose — such as azimuth, altitude, and hover distance — is available when a person holds a supported model of Apple Pencil close to the screen during a double tap or squeeze. In SwiftUI, use PencilHoverPose. In UIKit, use UIPencilHoverPose.
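If the squeeze API works the way that doc snippet reads, handling it in UIKit might look roughly like this. This is a sketch based on the names in the linked documentation (UIPencilInteraction's squeeze delegate callback and UIPencilHoverPose); the exact pose properties I read off here are my assumption:

```swift
import UIKit

// Minimal sketch of squeeze handling per the linked Apple Pencil docs.
// Requires iOS 17.5 and a supported pencil; property names on the hover
// pose are assumptions based on the documentation excerpt above.
class CanvasViewController: UIViewController, UIPencilInteractionDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()
        let interaction = UIPencilInteraction()
        interaction.delegate = self
        view.addInteraction(interaction)
    }

    func pencilInteraction(_ interaction: UIPencilInteraction,
                           didReceiveSqueeze squeeze: UIPencilInteraction.Squeeze) {
        guard squeeze.phase == .ended else { return }
        // The hover pose is only available when the pencil is held close
        // to the screen during the squeeze.
        if let pose = squeeze.hoverPose {
            print("azimuth \(pose.azimuthAngle), altitude \(pose.altitudeAngle), distance \(pose.zOffset)")
            // e.g. anchor a pop-up tool palette at pose.location here
        }
    }
}
```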
 
Sometimes I use the pencil for entering text, and it works quite well with my worst chickenscratch, but the interface is somewhat less than practical. I have an app called "penquills" (which is now broken) that offers some handy gestures for things like <cr> and tab, which would be a great addition to the pencil's native text input repertoire. Sometimes the little floater does not show up or is hard to spot. If it worked a little better (and offered me a case-switch gesture), I would use it all the time for text entry.
 
Sometimes I use the pencil for entering text, and it works quite well with my worst chickenscratch, but the interface is somewhat less than practical. I have an app called "penquills" (which is now broken) that offers some handy gestures for things like <cr> and tab, which would be a great addition to the pencil's native text input repertoire. Sometimes the little floater does not show up or is hard to spot. If it worked a little better (and offered me a case-switch gesture), I would use it all the time for text entry.
Using that feature right now to write this.

One thing that has hit me three times now with the new iPad Magic Keyboard: the right-most function key, above the delete key, is a "lock" key that brings you to the lock screen. I keep hitting it by mistake.

The magnifying glass key (above the 4) seems to be an alias for cmd-space, and I don't feel like that's a very useful addition. Those are my only quibbles so far.
 
I think the first part is key: Intel painted itself into a corner with AVX512. Seems to me that the small cores were never going to be able to support it. It's ironic that Apple, which ships its own OS, went with a shared ISA that is more flexible in this regard: their P and E cores support the same instruction set. I can imagine Apple finding a way to support different ISA extensions per core (with some developer help, like GCD's priorities) if they needed to, but it's interesting that they didn't.

I think Apple just followed the approach that ARM had been using with big.LITTLE for years: Design the performance and efficiency cores together with the same ISA.

Intel, on the other hand, combined Core i and Atom cores to quickly bring a product to market, and I guess they are still using Atom for the E cores instead of designing a proper efficiency core with the same ISA as the performance core.
 
Sometimes I use the pencil for entering text, and it works quite well with my worst chickenscratch, but the interface is somewhat less than practical.

Most of you probably already know this joke, but I cannot resist...

[attached image]
 
I think Apple just followed the approach that ARM had been using with big.LITTLE for years: Design the performance and efficiency cores together with the same ISA.

Intel, on the other hand, combined Core i and Atom cores to quickly bring a product to market, and I guess they are still using Atom for the E cores instead of designing a proper efficiency core with the same ISA as the performance core.
As @Nycturne writes below, especially during the early days of big.LITTLE, not every Android maker got that right; some shipped heterogeneous ISAs across the cores, with the efficiency cores missing features, causing problems. My memory is that ARM eventually exercised greater control over the SoC-building process by changing their licensing to stop that. Basically, they said if you want to license our cores and mesh, you have to follow certain rules about what combinations you can build, and you also have to license every part. That also tamped down on some of the other shenanigans that IP-licensing SoC makers were doing, and of course made sure ARM got a bit more of the pie.
I think I get what you are trying to say here, but this is more a fundamental issue of a process being moved mid-execution between two cores with different capabilities without being aware of the change. So the end result is: if you cached the availability of AVX512, got moved to a smaller core without it during a preemptive switch, and then tried to use the instructions you had just asked about before the switch occurred, you'd just crash. And that process state is not something the OS can go fix.

There are ways to do it, but the more I think about it, the more of a mess it becomes. It's just easier to keep the cores instruction-compatible than to get folks to fix their code to not only respond to signals that the core type has changed, but also make any state generated from checking CPU feature bits per-thread rather than per-process. Intel's not the only one that faced this problem: there were Android devices with missing capabilities on the efficiency cores, which would cause certain processes to crash.
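To make the failure mode concrete, here's a sketch of the caching pattern that bites you. cpuSupportsAVX512() is a made-up stand-in for a cpuid-style feature query, on a hypothetical chip whose cores don't all implement the extension:

```swift
// Sketch of the failure mode described above. On a chip with mismatched
// cores, a feature query's answer depends on which core you happen to be
// running on when you ask.
func cpuSupportsAVX512() -> Bool {
    // hypothetical: reads this core's feature bits
    true
}

// The classic pattern: query once per process, cache the result forever.
let hasAVX512 = cpuSupportsAVX512()   // checked while running on a P core

func processBuffer(_ buffer: [Float]) {
    if hasAVX512 {
        // If the scheduler preempted us and resumed this thread on an E core
        // without AVX512, the cached flag is now stale: the first wide
        // instruction raises an illegal-instruction fault, the process dies,
        // and there is no process state the OS can patch up to fix it.
        // runAVX512Kernel(buffer)   // hypothetical vectorized path
    } else {
        // runScalarKernel(buffer)   // hypothetical fallback path
    }
}
```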
 
Nothing special about AVX512, no. But that leaves the question about Intel's current philosophy.

I am not sure they have a particular philosophy. They ran out of die area and power budget, so they slapped some Atom cores on to stay competitive in benchmarks against AMD. Of course, these were designed for a completely different purpose and probably under different management. I wouldn't be surprised if nobody thought enough about the AVX situation until very late in the process.
 