Thread: iPhone 15 / Apple Watch 9 Event

I'm not too sure about the 120mm focal length. Unless it turns out to be really awesome, I think I'd have preferred a good quality medium zoom.

One thing I'm looking forward to (photography-wise) is the flavor of processing they add on top of the images. Hope it's more natural than the processing mine (13 Pro) does.

I have no interest in that focal length, either.

But... I read there's a new image stabilization system - which I suspect is to help with that focal length. If that also happens to improve the current and already decent image stabilization of the 14PM (which I like a lot for low light shots), that would be sweet.
 
I believe the new image stabilization system is only for the 120mm lens… which probably won’t perform well in low light.
 
Not iPhone or Watch related, but the USB-C revision of the AirPods Pro 2 seems to have updated the AirPods Pro themselves as well as the case. They will enable lossless support when paired with the Vision Pro.
 

Attachments: IMG_0025.jpeg
Where the hell are the new AirPods Max?
 
They say it maintains an aperture of f/2.8; if that's the case it should handle low light pretty well.
Is the sensor size the same? If I remember correctly, a constant aperture number means that the amount of light *per unit of surface area* of the image circle hasn’t changed, but if the sensor is smaller, less light is captured in total. Like when the aperture f-number went up from the iPhone 13 Pro to the iPhone 14 Pro, but the lens still captured more light because the image circle was bigger (to accommodate a bigger sensor).

Though to be fair I’d be surprised if the sensor were significantly smaller, so maybe it’s not as bad as I feared in low light.
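To put some rough numbers on that reasoning: light per unit of area goes as 1/N² for f-number N, so total light gathered goes roughly as sensor area divided by N². A tiny sketch, with made-up placeholder sensor areas rather than Apple's actual specs:

#include <stdio.h>

/* Rough illustration: light per unit of sensor area scales with 1/N^2
   (N = f-number), so total light gathered scales with sensor_area / N^2.
   The sensor areas below are placeholders, not real specs. */
static double relative_light(double sensor_area_mm2, double f_number) {
    return sensor_area_mm2 / (f_number * f_number);
}

int main(void) {
    double small_tele = relative_light(20.0, 2.8);  /* hypothetical smaller sensor at f/2.8 */
    double big_wide   = relative_light(44.0, 1.78); /* hypothetical larger sensor at f/1.78 */
    printf("relative total light: %.2f vs %.2f (about %.1fx)\n",
           small_tele, big_wide, big_wide / small_tele);
    return 0;
}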
 
What they do with smaller sensors (like on my drone, for example) is increase the noise reduction to compensate, since noise is usually the biggest issue with low light. It's gotten so good in recent years that it makes a huge difference. I have no idea if this will be the case here or not, but it would make sense that they'll likely optimize with software to some degree, though how that will play out in RAW output I can't answer.
 
Apologies if this was already posted, but I just grabbed this from the video
[attached screenshot]

Can anyone more knowledgeable say any more about this branch prediction and the wider decode and execution engine?
 
Is the sensor size the same? If I remember correctly, a constant aperture number means that the amount of light *per unit of surface area* of the image circle hasn’t changed, but if the sensor is smaller, less light is captured in total. Like when the aperture f-number went up from the iPhone 13 Pro to the iPhone 14 Pro, but the lens still captured more light because the image circle was bigger (to accommodate a bigger sensor).

Though to be fair I’d be surprised if the sensor were significantly smaller, so maybe it’s not as bad as I feared in low light.
Sensor is bigger than last year’s sensor.
 
Apologies if this was already posted, but I just grabbed this from the video
View attachment 25863
Can anyone more knowledgeable say any more about this branch prediction and the wider decode and execution engine?

We can’t say much yet, because this doesn’t tell us too much.

That said, improved branch prediction, generally speaking, prevents pipeline stalls that occur when there is a conditional branch in your code (for example: if x < 20, do something). When this happens, modern CPUs guess whether the branch will be taken or not. The alternative would be to wait until the CPU determines whether x < 20, but that could take a while, and, in the meantime, it’s better to guess what code comes next and start speculatively executing it. If you guess wrong, you flush everything that occurred after the branch and start over. That means you did work for no reason (bad for power) and you could have been executing useful instructions but you weren’t (slowing down execution).
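A toy example of the kind of branch being predicted (my own sketch, not anything from Apple's slide): whether the branch is taken depends on the data, so the core has to guess each time through the loop.

#include <stdio.h>
#include <stdlib.h>

/* Each iteration hits a data-dependent conditional branch. If the data is
   predictable (e.g. sorted), the predictor guesses right almost every time;
   if it's random, roughly half the guesses are wrong and each miss forces
   the core to throw away the speculatively executed work. */
long count_below_20(const int *vals, int n) {
    long count = 0;
    for (int i = 0; i < n; i++) {
        if (vals[i] < 20)   /* the branch the predictor has to guess */
            count++;
    }
    return count;
}

int main(void) {
    enum { N = 1000000 };
    int *vals = malloc(N * sizeof *vals);
    for (int i = 0; i < N; i++)
        vals[i] = rand() % 40;   /* random values: hard to predict */
    printf("%ld of %d values are below 20\n", count_below_20(vals, N), N);
    free(vals);
    return 0;
}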

Typically, branch predictors get it right over 90% of the time. But improving the prediction, as long as you don’t increase complexity too much, can improve overall performance and power.

Wider decode and execution engines likely means each core has more ALUs operating in parallel, and the instruction scheduler can issue more instructions in parallel. This means the CPU can, on average, do more work per clock cycle (assuming it can keep all the ALUs busy). The old cores were already extraordinarily wide, but if they are wider you would expect more work to be done each clock cycle. The benefit gets smaller and smaller as you get wider and wider, though, because it gets harder to find instructions that can execute in parallel. For example, if A = B+C and D=A+F, you have to do the A= instruction BEFORE THE D= instruction - you can’t do them in parallel.
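In code, that last point looks something like this (my own toy example): the first pair of adds forms a dependency chain, while the second pair is independent work a wide core can overlap.

#include <stdio.h>

int main(void) {
    int b = 1, c = 2, e = 3, f = 4;

    /* Dependency chain: d needs a's result, so the two adds
       cannot execute in the same cycle. */
    int a = b + c;
    int d = a + f;

    /* Independent adds: neither result feeds the other, so a core
       with two free ALUs can issue both in parallel. */
    int x = b + c;
    int y = e + f;

    printf("%d %d %d %d\n", a, d, x, y);
    return 0;
}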
 
Fascinating. Many thanks.
 
Sensor is bigger than last year’s sensor.
For the telephoto too? Oh, I missed that part. We'll see then, but in any case none of the 120mm photos on the iPhone website look too promising to me. I thought it was due to sensor size; it must be something else.

Wider decode and execution engines likely means each core has more ALUs operating in parallel, and the instruction scheduler can issue more instructions in parallel. This means the CPU can, on average, do more work per clock cycle (assuming it can keep all the ALUs busy). The old cores were already extraordinarily wide, but if they are wider you would expect more work to be done each clock cycle. The benefit gets smaller and smaller as you get wider and wider, though, because it gets harder to find instructions that can execute in parallel. For example, if A = B+C and D=A+F, you have to do the A= instruction BEFORE THE D= instruction - you can’t do them in parallel.
What can be done to try to keep wider cores busy? I guess looking further ahead in the instruction stream is a way, but are there other ways that are less obvious?
 
Compiler optimizations can go pretty far. E.g. even a simple loop unroll can increase utilization, and pairing that with better branch prediction seems like a winning combination to me. Maybe you do a little unnecessary work at the end of the loop, but you amortize the cost over the whole thing and it becomes vanishingly small. Of course, designing your algorithms to make it easier on the compiler helps too, and I’d expect Apple’s frameworks to fully take advantage.
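For example, here's roughly what an unroll-by-four of a simple summation looks like (my own sketch of the transformation, not anything specific to Apple's compiler): the four adds per iteration are independent, and the leftover elements get mopped up at the end.

#include <stdio.h>

#define N 1003   /* deliberately not a multiple of 4 */
static int vals[N];

/* Unrolled by 4: fewer loop branches per element, and the four adds per
   iteration are independent, so a wide core can overlap them.
   The leftover elements are handled in a short cleanup loop. */
int sum_unrolled(void) {
    int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    int i = 0;
    for (; i + 4 <= N; i += 4) {
        s0 += vals[i];
        s1 += vals[i + 1];
        s2 += vals[i + 2];
        s3 += vals[i + 3];
    }
    int s = s0 + s1 + s2 + s3;
    for (; i < N; i++)   /* remainder: small extra cost, amortized away */
        s += vals[i];
    return s;
}

int main(void) {
    for (int i = 0; i < N; i++) vals[i] = 1;
    printf("%d\n", sum_unrolled());
    return 0;
}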
 
Dan Barbera has a nice hands-on initial impressions video review of the 15 and 15 P/M iPhones at the other place. He seems pretty pleased with the weight reduction and camera sensors. A more in-depth review will soon follow.
 
My pre-order:
Apple Watch Series 9 GPS, 45mm Silver Aluminum Case with Midnight Sport Loop.
Pre-ordered the Apple Watch Series 9 Stainless Steel. Disappointed that they’ve discontinued the titanium edition but silver SS is probably fine. Also ordered the USB-C AirPods Pro 2. My current pair’s battery is on its last legs.

I might get an iPhone 15 Pro, but right now my iPhone 12 Pro seems fine. I just put $2,800 into my old, beat-up car. Maybe next month.
 
Trying to summarise my thinking on the A17. Feel free to correct it or tear it apart.

So we know that going from N5 to N3 yields a roughly 15% performance increase. We also know that Apple claims a 10% increase for the A17 vs the A16. Given there is also a new CPU architecture, it seems odd that the increase is less than I expected. I would have thought 20–30% was realistic.

The only conclusion I can come to is that the A17 is clocked lower than the A16. Correct? They don’t have much competition on the CPU front for the iPhone, so they went for efficiency, or for more power to the NPU/GPU?
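As a rough sanity check on that (my own back-of-the-envelope numbers, nothing confirmed): if the node plus the new architecture were worth, say, ~25% at the same clock, then a 10% overall gain would imply running at roughly 1.10 / 1.25 ≈ 0.88, i.e. clocks about 12% lower than that hypothetical iso-clock design. Of course, Apple’s 10% figure could also just be conservative, or measured on a workload that doesn’t benefit much from the core changes.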
 
Pre-ordered the Apple Watch Series 9 Stainless Steel. Disappointed that they’ve discontinued the titanium edition but silver SS is probably fine. Also ordered the USB-C AirPods Pro 2. My current pair’s battery is on its last legs.

I might get an iPhone 15 Pro, but right now my iPhone 12 Pro seems fine. I just put $2,800 into my old, beat-up car. Maybe next month.
I thought about the Series 9, but my S6 is still going strong. I’m also hoping Apple adds BP and/or temperature sensors in the next year or two.
 