Thread: iPhone 15 / Apple Watch 9 Event

I believe this first diagram illustrates the picture you have of how modern digital cameras work. We agree (I hope) that, for a given level of outside illumination, the total photon flow onto the respective light circles is determined only by the f-value, which accounts for both the lens aperture and the angle the scene subtends. [I'm ignoring light loss due to the camera lens material, etc.]

We don’t agree here, as I have pointed out. Total photons captured is a function of aperture (area). F-ratio allows us to determine photons per unit area. Photons per unit area determines your SNR (i.e. dictates brightness / exposure times).
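As a quick sanity check of that distinction, here's a toy calculation (all numbers illustrative, not taken from any real lens): with a fixed aperture diameter, the total light collected doesn't change, but the brightness per unit area follows the f-ratio.

```python
# Same aperture, two focal lengths: total photons collected are equal,
# but light per unit area (brightness) differs with the f-ratio.
import math

D = 25.0                        # aperture diameter, mm (illustrative)
area = math.pi * (D / 2) ** 2   # total light collected scales with this area

N_a = 50.0 / D                  # 50 mm lens at this aperture -> f/2.0
N_b = 100.0 / D                 # 100 mm lens at this aperture -> f/4.0

# Brightness per unit area goes as 1/N^2, so f/2 is 4x brighter than f/4:
brightness_ratio = (N_b / N_a) ** 2

print(N_a, N_b, brightness_ratio)  # 2.0 4.0 4.0
```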

If you use a lens designed for a large-sensor camera on one with a small sensor, the following is indeed what happens, corresponding to your description: You have the same photon flow/unit area, so the smaller sensor captures less light. But that's obviously not how you want to design a camera, since you're throwing away all that precious light.
View attachment 25897


Wikipedia has a good description of this lens mismatch: "Lenses produced for 35 mm film cameras may mount well on the digital bodies [of small-sensor cameras], but the larger image circle of the 35 mm system lens allows unwanted light into the camera body, and the smaller size of the image sensor compared to 35 mm film format results in cropping of the image. [Emphasis mine.] This latter effect is known as field-of-view crop."

Ah, but the smaller sensor in this example still has the same SNR! It is equally as bright. That’s the point of the exercise. Despite all the wasted light, all I’ve lost is FOV.

Instead, in a properly designed digital camera, the lens/sensor system should be configured to give a light circle that corresponds to the sensor size, as illustrated by the diagram below. Thus smaller sensors don't crop any more than larger ones do (assuming a constant sensor aspect ratio, and ignoring specific design decisions that might alter this slightly).

Yes, you should design lenses appropriate for the sensors, but it’s ultimately irrelevant for the discussion at hand as we get to the second example.

If these two cameras have the same f-value, it's again the case that the total photon flow onto their respective light circles is the same. But now the light circle for the small-sensor camera is focused on a smaller area, to properly correspond to its sensor size, which means the total light falling onto both sensors is (about*) the same (and the photon flow/unit area is higher for the small sensor, since the same amount of light is focused onto a smaller area).

View attachment 25898

Designing the camera-lens system so the light circle size matches the sensor size enables cameras with small sensors to perform far better in low-light situations.

Except, what have you actually created in this diagram, if we assume the f-ratio is equal? If I assume I'm not using baffling or other techniques to discard light for the smaller image circle, I've probably created two systems with the same effective FOV. So in the small-sensor case the actual focal length is shorter. This is important, because it means the aperture is smaller at the same f-ratio. In other words, the total photons captured from an extended object is less for the smaller image circle, but because it covers a smaller area, you wind up with the same light per unit area at the same f-ratio.

If you instead keep the aperture constant, the lens for the smaller sensor is going to be noticeably faster due to the shorter focal length (faster f-ratio). This is because the image circle is itself a function of aperture and focal length. Short focal lengths converge the light rays more sharply, which has the effect of shrinking the image circle; shrinking the aperture does much the same if focal length is held constant. However, two light cones with the same f-ratio always have the same shape from the aperture to the focal plane. So varying the image circle size at a constant f-ratio means varying the focal length, which still leaves us with the same light per unit area.
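The scaling argument above can be sketched numerically. Assuming a hypothetical crop factor of 2 and the same FOV and f-ratio on both systems, the smaller image circle collects a quarter of the total photons but receives the same light per unit area:

```python
# Hold FOV and f-ratio fixed while shrinking the image circle by a crop
# factor c. All numbers are illustrative placeholders.
c = 2.0          # crop factor (large sensor -> small sensor)
f_large = 35.0   # focal length for the large sensor, mm
N = 2.8          # f-ratio, held constant for both systems

f_small = f_large / c    # shorter focal length gives the same FOV
D_large = f_large / N    # aperture diameters, D = f / N
D_small = f_small / N

# Total photons scale with aperture area; per-unit-area light with 1/N^2.
total_light_ratio = (D_small / D_large) ** 2   # small vs. large system
per_area_ratio = (1 / N**2) / (1 / N**2)       # same N, so identical

print(total_light_ratio, per_area_ratio)  # 0.25 1.0
```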

You can’t just “concentrate more light onto a smaller circle” without also affecting the f-ratio. And so a small, bright, image circle is going to show up as a faster f-ratio.

This is why lenses for smaller sensors are generally more compact for the same use. The smaller image circle means I can use shorter focal lengths and smaller apertures for the same FOV, while maintaining similar f-ratios. It also means you can get faster f-ratios with more manageable apertures.

Now, this isn’t the only way to set an image circle, but other ways of controlling the image circle aren’t exactly relevant for the discussion of brightness/SNR.

Another example here is using a focal reducer on a telescope. It makes the focal ratio faster at the expense of the size of the image circle. The light per unit area increases due to the concentration of the light captured and the f-ratio drops as a result because the focal length drops while aperture remains fixed. This works precisely because aperture is fixed in this scenario.
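A rough numerical sketch of the focal reducer case (the telescope numbers are made up for illustration; 0.71x is a common commercial reduction factor):

```python
# Focal reducer sketch: the aperture stays fixed, the focal length drops,
# so the f-ratio gets faster and light per unit area rises by 1/r^2.
r = 0.71     # reduction factor (illustrative; 0.71x is a common value)
f = 560.0    # telescope focal length, mm (illustrative)
D = 80.0     # fixed aperture, mm (illustrative)

N_before = f / D          # f/7.0 before the reducer
N_after = (f * r) / D     # ~f/4.97 after: focal length drops, D fixed

# Concentrating the same captured light into a smaller image circle:
light_gain = (N_before / N_after) ** 2   # ~1.98x more light per unit area

print(round(N_before, 2), round(N_after, 2), round(light_gain, 2))
```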

Historically, larger sensors have been where you go for better low-light performance, BTW. But that's more because the size of the "wells" (pixels) matters when trying to get signal above the sensor noise. Sony's modern sensor lines, especially now that back-illuminated sensors are more common, tend to use the same pixel size across the various sensor sizes, so there is no distinct advantage at the sensor alone (although the camera body still plays a role). The A7S in particular is known for low-light prowess because the early models used sensors with unusually large pixels. More recently it uses 2x2 binning for the same effect. But I think you are the first person I've ever seen claim that smaller sensors are actually better for low light, let alone "far better".
 
If these two cameras have the same f-value, it's again the case that the total photon flow onto their respective light circles is the same. But now the light circle for the small-sensor camera is focused on a smaller area, to properly correspond to its sensor size, which means the total light falling onto both sensors is (about*) the same (and the photon flow/unit area is higher for the small sensor, since the same amount of light is focused onto a smaller area).

View attachment 25898

So, here's the thing: are the two sensors capturing roughly the same image? Is it just the same image, but over a different image circle? I mean something like this (test image of photographing an iMac):

Screenshot 2023-09-14 at 09.47.33.png


If it is, both optical systems must have the same field of view (or they wouldn't be capturing the same thing, as in the image above). In order to have the same field of view, the lens for the smaller sensor must be using a shorter focal length (which is why phones, with very small sensors, often have focal lengths under 10mm, even for their "zoom" lenses).

And if we agree that the smaller sensor is using a camera with a shorter focal length, to keep the field of view constant (let's say, a 35mm lens for the large sensor, a 17mm lens for the small sensor), then one of the following can be true:

- The f number is the same, let's say f/2.8 => Then, the two lenses have different aperture diameters. The lens for the large sensor has an aperture diameter of D = f / N = 35 / 2.8 = 12.5 mm, while the lens for the smaller sensor has an aperture diameter of D' = 17 / 2.8 = 6.07mm. The lens for the smaller sensor has a smaller aperture diameter, because it needs to collect less light for the same field of view, as the image circle is smaller, and the f number keeps the amount of light per unit area the same, in a smaller image circle.
Screenshot 2023-09-14 at 10.04.02.png

- The aperture diameter is the same, let's say 12.5 mm. Then, the lens for the large sensor has an f number of N = f / D = 35 / 12.5 = f/2.8, while the lens for the smaller sensor has an f number of N = f' / D = 17 / 12.5 ≈ f/1.4. With the same aperture diameter, the lens for the smaller sensor is much faster, for the same field of view!
Screenshot 2023-09-14 at 10.06.56.png
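Those two cases can be checked with a few lines of arithmetic (same numbers as above):

```python
# The two cases worked out above: D = f / N and N = f / D.
def f_number(focal_mm, diameter_mm):
    return focal_mm / diameter_mm

def diameter(focal_mm, f_num):
    return focal_mm / f_num

# Case 1: same f-number (f/2.8) -> different aperture diameters
print(round(diameter(35, 2.8), 2))   # 12.5 mm for the large sensor
print(round(diameter(17, 2.8), 2))   # 6.07 mm for the small sensor

# Case 2: same aperture diameter (12.5 mm) -> different f-numbers
print(round(f_number(35, 12.5), 2))  # f/2.8 for the large sensor
print(round(f_number(17, 12.5), 2))  # f/1.36 for the small sensor
```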


There's in fact a product that does the exact thing you're discussing: a kind of lens adapter called a speed booster, which is designed to adapt full-frame lenses to APS-C cameras while using all the light in the image circle. That is, a speed booster concentrates the image circle of a full-frame lens onto the area of a smaller APS-C sensor. This makes the lenses faster (hence the name) but, at the same time, it also reduces the focal length of the lenses. :)
 
EDIT3: lol, under the CPU name it says ARM instead of "Apple A17 Pro" or something similar. Since when has Apple named their CPU "ARM"? Lol. Fake.

GB is probably not updated yet. From what I understand there is no API that gives you the name "A17 Pro", you have to guess the hardware from the device label.

The high frequency is also something I find surprising. I wonder whether this is reliable?
 
GB is probably not updated yet. From what I understand there is no API that gives you the name "A17 Pro", you have to guess the hardware from the device label.

The high frequency is also something I find surprising. I wonder whether this is reliable?
I could well be wrong. I still don’t understand why frequency would need to be increased for a 10% increase, while going to N3. Hmmm
 
I could well be wrong. I still don’t understand why frequency would need to be increased for a 10% increase, while going to N3. Hmmm

Yes, I would expect the architectural improvements to bring at least 5% increase in IPC. If these clocks are correct, we essentially had no IPC improvements since A14. Doubly surprising since Apple claimed the new CPU is wider. Maybe IPC improvements are only visible on certain workloads?

Edit: comparing A16 and A17 (https://browser.geekbench.com/v6/cpu/compare/2592831?baseline=2595475) shows most improvements in the following tests: PDF Renderer (27%!), Photo Library (25%), Photo Filter (27%), HTML5 (15%), Ray Tracer (15%), Structure From Motion (16%), and Horizon Detection (16%). I don't really know what to make out of it. HTML and PDF tests will be branchy (as will be the ray tracer test), so maybe that's the new branch prediction in action? Other benchmarks with large improvements are ML models (running on CPU), which would suggest a faster FP unit? Maybe an additional FP pipe? But another FP-heavy test like background blur only shows improvement consistent with clock increase...

So yeah, one can certainly see IPC improvements. But I don't understand what exactly is happening.
 
Yes, I would expect the architectural improvements to bring at least 5% increase in IPC. If these clocks are correct, we essentially had no IPC improvements since A14. Doubly surprising since Apple claimed the new CPU is wider. Maybe IPC improvements are only visible on certain workloads?

Edit: comparing A16 and A17 (https://browser.geekbench.com/v6/cpu/compare/2592831?baseline=2595475) shows most improvements in the following tests: PDF Renderer (27%!), Photo Library (25%), Photo Filter (27%), HTML5 (15%), Ray Tracer (15%), Structure From Motion (16%), and Horizon Detection (16%). I don't really know what to make out of it. HTML and PDF tests will be branchy (as will be the ray tracer test), so maybe that's the new branch prediction in action? Other benchmarks with large improvements are ML models (running on CPU), which would suggest a faster FP unit? Maybe an additional FP pipe? But another FP-heavy test like background blur only shows improvement consistent with clock increase...

So yeah, one can certainly see IPC improvements. But I don't understand what exactly is happening.
Needless to say, I am very confused! Am I to believe that there is essentially no benefit to this node shrink?

Is your gut feeling that these are genuine GB scores?
 
Needless to say, I am very confused! Am I to believe that there is essentially no benefit to this node shrink?
Is your gut feeling that these are genuine GB scores?

I think if it's a fake, then a rather elaborate one. As to "no benefit", that's maybe a bit too pessimistic. From this particular benchmark we see an average gain in IPC around 5%, much higher for certain workloads. It is also unclear how far these cores can go clock-wise.

But I must say that I am also a bit disappointed. I expected a higher improvement in IPC, at least 10% across the board. And I hoped that A17 would be clocked lower to allow higher peaks for 3N Macs. I mean, from an objective standpoint the performance is excellent. It's a smartphone that's only 10% slower in single-core than Intel's fastest desktop CPU (and in fact, faster in some fundamental benchmarks). But we expected more. Maybe it's our expectations that are unrealistic? Apple's CPUs were already state of the art, maybe it's just genuinely hard to extract more IPC out of these designs?
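For what it's worth, the IPC comparison behind this kind of estimate is just score divided by clock. A minimal sketch, with placeholder scores and clocks rather than the thread's actual Geekbench results:

```python
# Back-of-the-envelope IPC comparison from single-core score and clock.
# The scores and clocks below are illustrative placeholders, not real
# A16/A17 Geekbench figures.
def relative_ipc(score, clock_ghz):
    # Score per GHz is a proxy for instructions per clock.
    return score / clock_ghz

old = relative_ipc(2500, 3.46)   # hypothetical previous-gen result
new = relative_ipc(2900, 3.77)   # hypothetical new-gen result

gain = new / old - 1             # fractional IPC gain, ~6.5% here
print(f"IPC gain: {gain:.1%}")
```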
 
I think if it's a fake, then a rather elaborate one. As to "no benefit", that's maybe a bit too pessimistic. From this particular benchmark we see an average gain in IPC around 5%, much higher for certain workloads. It is also unclear how far these cores can go clock-wise.

But I must say that I am also a bit disappointed. I expected a higher improvement in IPC, at least 10% across the board. And I hoped that A17 would be clocked lower to allow higher peaks for 3N Macs. I mean, from an objective standpoint the performance is excellent. It's a smartphone that's only 10% slower in single-core than Intel's fastest desktop CPU (and in fact, faster in some fundamental benchmarks). But we expected more. Maybe it's our expectations that are unrealistic? Apple's CPUs were already state of the art, maybe it's just genuinely hard to extract more IPC out of these designs?
Yes, it’s not that the performance is bad. Far from it. It’s just a little worrying for the future. If they’ve been unable to make much improvement for this many iterations, it lends credibility to the idea that there really has been a talent drain.
 
Yes, it’s not that the performance is bad. Far from it. It’s just a little worrying for the future. If they’ve been unable to make much improvement for this many iterations, it lends credibility to the idea that there really has been a talent drain.
Or they’ve hit a design wall as @leman has mooted and the basic core design can’t really be pushed any further, which is still not great.
 
Yes. I’m quite disappointed and concerned if these scores are true.

It’s also strange that the Pro Max is slower.
Then again, maybe we’re also seeing a new soft limit* to IPC in general? AMD/Intel can make larger relative IPC gains, but that’s largely because of how far behind they are. Much of their performance is from pushing clocks higher. Apple meanwhile has a lot more room to grow in that direction even while keeping power consumption low (though eventually high clocks will themselves require changes to processor design, maybe even hyper-threading to keep cores busy during multicore workloads).

*by that I mean before someone figures out a brand new system, this looks to be the best IPC a single thread can feasibly do for anyone
 
I updated a historical performance chart I've been updating every year, with the unconfirmed Geekbench 6 scores (waiting for verified ones).

Screenshot 2023-09-14 at 13.11.19.png

As everyone else, unsure what to think about the reported clock rate. But I'll wait for more extensive (and confirmed) benchmarks before reading too much into it.
 
GB is probably not updated yet. From what I understand there is no API that gives you the name "A17 Pro", you have to guess the hardware from the device label.

The high frequency is also something I find surprising. I wonder whether this is reliable?
The guy who leaked the A17 specs said the clock is 3.70 GHz, so the GB spec is correct
 
I updated a historical performance chart I've been updating every year, with the unconfirmed Geekbench 6 scores (waiting for verified ones).

View attachment 25904
As everyone else, unsure what to think about the reported clock rate. But I'll wait for more extensive (and confirmed) benchmarks before reading too much into it.
This is a very interesting way to view the improvements. It makes them seem much more consistent.
 
I updated a historical performance chart I've been updating every year, with the unconfirmed Geekbench 6 scores (waiting for verified ones).

View attachment 25904
As everyone else, unsure what to think about the reported clock rate. But I'll wait for more extensive (and confirmed) benchmarks before reading too much into it.

Nice work! Btw, this clearly shows what M. Handley wrote on MR earlier - the performance improvements are additive, not multiplicative. The gain from A16 to A17 is the same as from A13 to A14. So cancel that doom&gloom, people :)

Edit: may I post this chart over on MR?
 