Has there been any mention yet as to what time pre-orders will start on Friday?
"The recent rule is 5 am Pacific."

I'll let my avatar speak to my readiness at that exact time.
Finding rigorous explanations for this topic is extremely frustrating, because many formulas implicitly assume one of the variables is constant (sensor size, field of view, ...). I've always had a hard time understanding this because I've never found anywhere it's clearly and rigorously explained. But from what I understand, the rate of photon flow into the camera is proportional to the square of the entrance diameter (D) of the lens, and inversely proportional to the square of the focal length (F) (because if you double the focal length you halve the subtended angle of what you're viewing, so the scene gives off 1/4 the light). Thus photon flow ∝ (D/F)^2, where D/F = 1/f and f is the aperture ratio. And for a given image size and exposure time, the image's perceived brightness will in turn be proportional to the square root of the photon flow (because images are 2-dimensional), i.e. to 1/f.
So at least based on this, f is a measure of the light-gathering capability of the optical device, and it's independent of sensor size. Thus if you have a lens of a given f-value, you should have a constant amount of light falling onto the image circle, not a constant amount of light per unit area. But maybe I'm confused!
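To make that proportionality concrete, here's a quick Python sketch of the (D/F)^2 relationship using made-up lens values (the 50 mm and 100 mm f/2 lenses below are hypothetical, not anything specific to the phones being discussed):

```python
def relative_photon_flow(entrance_diameter_mm, focal_length_mm):
    """Relative photon flow into the camera, proportional to (D/F)^2 = 1/f^2."""
    return (entrance_diameter_mm / focal_length_mm) ** 2

# Two hypothetical lenses with the same f-number (f = focal length / entrance diameter = 2):
flow_short = relative_photon_flow(25, 50)   # 50 mm lens, 25 mm entrance pupil -> 0.25
flow_long = relative_photon_flow(50, 100)   # 100 mm lens, 50 mm entrance pupil -> 0.25

print(flow_short, flow_long)  # identical: same f-number, same (D/F)^2
print(50 / 25, 100 / 50)      # f-number F/D = 2 in both cases
```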
"From what's posted on Apple's website, it looks like they didn't use a conventional periscope design to get the extended light path needed for the 120 mm focal length. Instead, they used a 'folded glass structure' that internally reflects the incoming light four times to increase the path length. Essentially, they stacked four small periscopes instead of using a single large one."

I think what's different here, compared to competitors, is that Apple is using a prism instead of a set of mirrors, but I don't think periscope designs are inherently limited as to how many reflections are allowed inside.
You're right that halving the subtended angle of what you're viewing reduces the amount of photons that reach us to 1/4. But who says that increasing the focal length has to reduce our viewing angle? That's only true if the sensor size is constant! The formula for the field of view (fov) of a lens is:
fov = 2 · arctan( sensor width / (2 · focal length) )
So, if we increase the sensor width proportionally to the focal length, we can keep the same field of view with a longer focal length. For example, if we double the focal length but keep the same fov, the amount of light captured isn't reduced to 1/4. What changes, however, is that our image circle must be bigger (4 times the original area) to capture the same fov. The aperture size hasn't changed, and neither has our viewing angle, yet this lens has a different f-number. The light-gathering capabilities of the system remain the same (same fov, same aperture), but the amount of light per unit area has been reduced to 1/4 (we're projecting the same amount of light over an area 4 times as big). That's what remains constant for the same f number, not the total amount of light gathered.
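A small Python sketch of that argument, using the fov formula above with hypothetical numbers (a 50 mm lens on a 36 mm-wide sensor versus a 100 mm lens on a 72 mm-wide sensor, with the same physical aperture assumed in both cases):

```python
import math

def fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view: fov = 2 * arctan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(fov_deg(36, 50))    # ~39.6 degrees
print(fov_deg(72, 100))   # ~39.6 degrees: doubling focal length AND sensor width keeps the fov

aperture_mm = 25                             # same physical aperture in both cases (hypothetical)
print(50 / aperture_mm, 100 / aperture_mm)   # f/2 vs f/4: the f-number doubles
print(1 / (72 / 36) ** 2)                    # image area grows 4x, so light per unit area drops to 0.25
```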
Another example: if you switch a given lens (let's say, a 50mm f/1.4) from a full frame to an APS-C camera, you undoubtedly still have an f/1.4 lens, but you'll only be able to capture roughly 43% as much light in total as before (for a typical 1.5× crop sensor), because the sensor is smaller (and your images will have a narrower fov too, but that's not relevant to how much light we capture). What has remained constant? The amount of light on the sensor *per unit area*, which is obvious in hindsight because you're essentially just "cropping" a smaller part of the image circle the lens projects.
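As a rough worked example of that crop (the sensor dimensions below are assumptions for a typical full-frame and 1.5×-crop APS-C sensor; the exact percentage depends on which APS-C dimensions you use):

```python
# Total light captured scales with sensor area for the same lens, scene and exposure.
full_frame_area = 36.0 * 24.0    # mm^2
aps_c_area = 23.6 * 15.7         # mm^2, assumed 1.5x-crop APS-C

print(aps_c_area / full_frame_area)   # ~0.43 of the total light compared to full frame
# Light per unit area on the sensor is unchanged: it's still an f/1.4 lens, and the
# smaller sensor just crops a smaller piece of the same image circle.
```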
That said, it doesn't help discussions one bit that the focal lengths of smartphone cameras are often given as 35 mm equivalent focal lengths. That means the figure we're dealing with is no longer a measure of the actual focal length, but rather the focal length a camera with a 35 mm sensor would need in order to have the same fov. So it essentially becomes a different unit for measuring fov, not focal length.
I'm afraid I'm not following. Let's suppose you point your camera at a building with equally-spaced dimly-lit windows of equal size, and want to take a picture of just one of those windows at night, and want as much light-gathering as possible (hmm, just realized that sounds a bit creepy...). It seems to me that the brightness of the image you capture is going to be determined by the angle that one window subtends, irrespective of the fov of your camera, and will thus be independent of your sensor size.
"Not quite. But on the right track. Consider the scenario of two cameras next to each other. They both capture light from the window; I have captured more light if I combine the two cameras than either camera did individually. So there is more light available if I have more area to capture light with. But a sensor by itself doesn't capture light. It's the whole system that captures light."

I disagree with "not quite". If you carefully re-read my original post on this, you'll see you're saying the same thing I did. What you're describing by having two identical cameras is equivalent to increasing the value of D (the entrance diameter, aka aperture) by a factor of sqrt(2). I think what you misunderstood is that, in the paragraph you quoted, I was considering the amount of light you capture from the specific scene you want to photograph, which in this case was one window. What determines the light emitted by that window isn't your fov, since increasing the fov doesn't increase the light from that specific window; it just means you capture additional extraneous light.
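For what it's worth, the two-identical-cameras comparison is easy to put numbers on (the 100 mm f/4 lens below is hypothetical): doubling the collecting area is the same as multiplying D by sqrt(2), i.e. going one stop faster.

```python
import math

focal_length_mm = 100.0
f_number = 4.0
d = focal_length_mm / f_number      # entrance diameter of one camera: 25 mm

area_one = math.pi * (d / 2) ** 2   # collecting area of a single camera
area_two = 2 * area_one             # two identical cameras side by side

# A single lens with the same total collecting area needs an entrance diameter of D * sqrt(2):
d_equivalent = 2 * math.sqrt(area_two / math.pi)
print(d_equivalent / d)                  # ~1.414, i.e. sqrt(2)
print(focal_length_mm / d_equivalent)    # ~2.83: roughly one f/2.8 lens
```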
Low light performance is a tricky beast. For extended objects, it's aperture that determines how much light is captured from the object, and focal length determines the extent of the object at the focal plane. Together we get f-ratio. So consider this other scenario: Same sensor, same focal length lens. Only one is configured with a wider aperture. So you go from f/4 to f/2.8. Ultimately, the field of view is identical, but we can demonstrate that the f/2.8 captures more light per unit of time. But you can also demonstrate that two cameras with the same f-ratio and fov (note, not focal length) will have similar results for the same exposure time. Because f-ratio itself simplifies the system down to the important bits when discussing exposure time (which is a result of light capture).
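Putting numbers on the f/4 vs. f/2.8 comparison (same sensor, same focal length, same scene; just a back-of-the-envelope sketch):

```python
# Light gathered per unit time scales with aperture area, i.e. with (1 / f_number)^2.
light_f4 = (1 / 4.0) ** 2
light_f28 = (1 / 2.8) ** 2

print(light_f28 / light_f4)   # ~2.04: f/2.8 gathers roughly twice the light of f/4,
                              # so the required exposure time is roughly halved
```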
And then you can get into the regime of astrophotography where certain things get even more important as read noise is suddenly no longer a negligible impact on SNR. Fun.
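A rough per-pixel SNR sketch in the spirit of the usual CCD noise model (shot noise from the object and sky, plus read noise added once per frame read-out); all the numbers are placeholders, but it shows why read noise stops being negligible when each sub-exposure is faint, as in astrophotography:

```python
import math

def stacked_snr(signal_e, sky_e, read_noise_e, n_frames):
    """Per-pixel SNR after stacking n_frames equal sub-exposures.

    signal_e, sky_e: electrons from the object and the sky background per frame.
    read_noise_e: read noise in electrons, added once per frame read-out.
    """
    total_signal = n_frames * signal_e
    noise = math.sqrt(n_frames * (signal_e + sky_e + read_noise_e ** 2))
    return total_signal / noise

# Same total integration time (placeholder numbers), split into few vs. many reads:
print(stacked_snr(signal_e=1000, sky_e=200, read_noise_e=5, n_frames=10))    # ~90
print(stacked_snr(signal_e=100,  sky_e=20,  read_noise_e=5, n_frames=100))   # ~83
# With bright frames the read-noise term barely matters; with faint frames it
# noticeably drags the SNR down even though the total signal is identical.
```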
"I'll let my avatar speak to my readiness at that exact time."

Does anyone know the fastest way to get to my "bag" when zero hour arrives on Friday morning? The email I got from Apple contained a calendar appointment with a link that goes to the "come back on 9.15" page. The appointment also says I can use the App Store mobile app, but my bag there is empty. I wonder if that'll change at the appointed time?
Seems like the usual sources have already labeled the A17 as the A16 redux and a “stopgap chip”. Sigh. We don’t even have any tests for these and the negative framing begins.
"Apple told us it's more than that (e.g. wider dispatch and execute). I suppose I'm supposed to believe this name kremlinology and ignore what Apple actually tells us?"

Lol indeed. The frenzy people are working themselves into.
"Absolutely correct. Smaller sensors 'crop' when compared to larger sensors, but don't do anything else. Effective focal length numbers tend to mess with things, but focal ratio should still be from the real focal length, not effective. So Apple's focal ratio numbers should be comparable to other systems."

OK, good. Now we're getting to the essence of it. I suspected this was why your view is different from mine, and this confirms it. And, needless to say, I don't believe it's me that has it backwards. But we shall see...
"Thus if you have a lens of a given f-value, you should have a constant amount of light falling onto the image circle, not a constant amount of light per unit area. But maybe I'm confused!"
This is backwards.
"The recent rule is 5 am Pacific."

So much better than in the past when it was 10 pm Pacific!
Looks like since the A14, Apple's been resting. Less than a 2% IPC increase, and most of the speed gains come from clocks.
If the A17 CPU were clocked at A14 levels, the score would be around the same, indicating little IPC improvement in 3-4 years.
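One back-of-the-envelope way to express that reasoning (the scores and clocks below are placeholders for illustration, not measured Geekbench results): divide the score by the clock to get a crude per-clock figure.

```python
# Placeholder numbers purely for illustration -- not actual benchmark results.
a14_score, a14_clock_ghz = 2100, 3.0
a17_score, a17_clock_ghz = 2900, 3.8

per_clock_a14 = a14_score / a14_clock_ghz
per_clock_a17 = a17_score / a17_clock_ghz

print(per_clock_a17 / per_clock_a14)  # ~1.09 with these made-up numbers
# The closer this ratio is to 1, the more the single-core gain is just clock speed,
# which is the "little IPC improvement" claim above.
```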
The GPU team is doing more work than the CPU team lately.
"That's a lot of conclusions to draw from fake scores."

It's not fake. Here: https://browser.geekbench.com/search?q=iphone16,1
"It's not fake. Here: https://browser.geekbench.com/search?q=iphone16,1"

Wow. If these are confirmed it looks bad.