Apple Just Patented an Image Sensor With 20 Stops of Dynamic Range


Apple might be quietly preparing one of the biggest advancements in image sensor design — and it could change everything from iPhones to professional cinema tools. Y.M.Cinema Magazine is the first to uncover a newly published Apple patent titled “Image Sensor With Stacked Pixels Having High Dynamic Range And Low Noise”. This revelation points to a breakthrough that could reshape both mobile and professional imaging. With claims of a staggering 20 stops of dynamic range packed into a stacked sensor architecture, Apple might be gearing up to rival — or even surpass — the giants of cinema camera technology.

 
This is interesting from two perspectives. First, the apparently enhanced capabilities of the sensor itself, though there’s often a sizable gap between a patent and large-scale production.

And second, it’s yet another example of Apple working to move its hardware development in-house, which can pay off as it reduces its reliance on third parties.
 
Watching with interest.



Very interesting circuit, but the trick in these things is always to manufacture them in a way that makes sense (i.e. the layout).

[Attached image: circuit diagram from the patent]


Haven’t read the patent yet, but from the circuit diagram it looks like the camera makes a “how bright is it?” determination and then potentially turns on LOFIC1 alone (if the scene is brighter than a threshold), or LOFIC1 and LOFIC2 (if it’s really bright, i.e. brighter than a second threshold), providing additional capacitors where excess charge can be stored. Normally node 42 would store all the charge generated when light hits the photodiode 38 - I don’t like that they didn’t use a photodiode symbol for that :-)

So instead of storing charge between 0 and X coulombs, you can store between 0 and X+Y+Z (where the storage space for Y and Z is provided by capacitors 50 and 52). This gives you much more range to work with, and it reduces the relative effect of noise, at least in bright conditions (since the noise will be some fraction of X, and its effect becomes noise/(X+Y+Z) instead of noise/X).

Something like that, anyway (just judging by the circuit diagram).
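
To put rough numbers on that, here's a minimal Swift sketch of the arithmetic, using invented electron counts (the patent gives no figures): dynamic range in stops is just log2 of the maximum storable charge over the noise floor, so the extra Y and Z storage from the overflow capacitors buys extra stops.

```swift
import Foundation

// Back-of-the-envelope only: the electron counts below are invented for
// illustration, not taken from the patent. Dynamic range in stops is
// log2(maximum storable charge / noise floor).
func dynamicRangeStops(fullWellElectrons: Double, noiseFloorElectrons: Double) -> Double {
    log2(fullWellElectrons / noiseFloorElectrons)
}

let x = 6_000.0      // hypothetical capacity of node 42 alone (X)
let y = 60_000.0     // hypothetical extra capacity from capacitor 50 (Y)
let z = 500_000.0    // hypothetical extra capacity from capacitor 52 (Z)
let noise = 1.0      // hypothetical noise floor, in electrons

print(dynamicRangeStops(fullWellElectrons: x, noiseFloorElectrons: noise))          // ≈ 12.6 stops
print(dynamicRangeStops(fullWellElectrons: x + y + z, noiseFloorElectrons: noise))  // ≈ 19.1 stops
```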
 
Clever. And looking forward to the release as I now make most of my photos with my phone. Amazing that a phone camera will be able to achieve such a large dynamic range.

Expecting at the other place: "Who asked for 20 stops of dynamic range?"
 
Maybe it says somewhere in the patent application, but I can't spot it: how big will this sensor be?
 

The patent doesn’t say anything about it (and, of course, a patent wouldn’t have to say that, nor does a patent mean that Apple will actually do it).
 
After thinking about this for a while, and doing a wee bit of research online, I'm left wondering whether a sensor with that large a dynamic range has any *consumer* applications in cell phone (or regular camera) photography.

Apparently (reading from Wiki) the human eye has a static dynamic range of 6.5 stops. Dynamic iris adjustments over time can result in 20 or more stops. I'm guessing there's signal processing going on in one's brain that takes that into account when producing the image one is looking at.

But... photographers make photos to look at and enjoy after the exposure is made, either as a print or on an LCD screen, both of which have a much more limited dynamic range.

That leaves me wondering about the benefit of a sensor with 20 stops of dynamic range for consumer photography.

I can see uses in scientific applications, say in astronomy, or in optical RF signal processing employing acousto-optic Bragg cell receivers, where a higher-dynamic-range sensor array would be beneficial in analyzing RF signals of interest, etc.
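
For a sense of scale (simple arithmetic, nothing from the patent): each stop doubles the light, so N stops is a 2^N : 1 contrast ratio, which makes the gap between the eye's static range and a 20-stop sensor concrete.

```swift
import Foundation

// Each stop doubles the light, so N stops corresponds to a 2^N : 1 contrast ratio.
// The 6.5- and 20-stop figures are the ones discussed above; nothing here is from the patent.
func contrastRatio(stops: Double) -> Double { pow(2.0, stops) }

print(contrastRatio(stops: 6.5))   // ≈ 91 : 1, the eye's static range
print(contrastRatio(stops: 20.0))  // 1,048,576 : 1, a 20-stop sensor
```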
 
That's a fair point, but I suppose it's better to have more information on how to adjust the picture for those scenarios? As in: scaling down in a pleasing way (and maybe under user control) is easier/better if you have the information to begin with.
 
This is the practical use I see for it as well: if the phone or device can capture that many stops and HDR/exposure-bracket them behind the scenes, the capabilities would be amazing, especially in low-light situations.
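
As a hypothetical illustration of what "behind the scenes" could mean (the generic idea, not anything Apple has described): a single deep linear capture can be compressed to a display's range with a global tone curve, which is roughly what today's multi-frame bracketing tries to approximate with several limited captures.

```swift
import Foundation

// Generic sketch, not from the patent: compress a wide-dynamic-range linear value
// into an SDR display's 0...1 range with a simple Reinhard-style tone curve.
func toneMap(linear: Double, midGray: Double = 0.18) -> Double {
    let x = linear / midGray            // place mid-gray near the knee of the curve
    let compressed = x / (1.0 + x)      // smoothly squeezes ~20 stops into 0...1
    return pow(compressed, 1.0 / 2.2)   // gamma-encode for an SDR display
}

// Scene values spanning 20 stops around mid-gray:
for stops in stride(from: -10.0, through: 10.0, by: 5.0) {
    let linear = 0.18 * pow(2.0, stops)
    print(String(format: "%+3.0f EV -> %.3f", stops, toneMap(linear: linear)))
}
```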
 
That's a fair point, but I suppose it's better to have more information on how to adjust the picture for those scenarios? As in: scaling down in a pleasing way (and maybe under user control) is easier/better if you have the information to begin with.

But doesn't that pose the issue of not being able to make such pleasing adjustments if one is not able to see the difference on an LCD display (with a limited dynamic range) when editing? Either to later enjoy viewing on an LCD display, or making a print which has an even lower dynamic range?

Or... perhaps (very likely) there's something I'm not understanding.
 
Yeah, also a fair point, though imagine a slider or other control where you can toggle through different variants and pick the one(s) you want: you can't see the whole thing at once, but you can see a variety of different slices, maybe even picking different regimes in different parts of the image. And of course the phone/computer can "see" the whole thing and pick a "best" setting based on user input (like what they want the image to focus on), so even auto settings might be more useful.
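
A hypothetical sketch of the "slices" idea (my illustration, not anything from the patent): with a deep linear capture sitting underneath, an exposure slider is just a multiply by 2^EV before clipping to what the display can show, so the user or an auto picker chooses which stops survive.

```swift
import Foundation

// Hypothetical: pick a "slice" of a deep linear capture by shifting exposure
// in stops (EV) and clipping to the display's 0...1 range.
func slice(linear: Double, evOffset: Double) -> Double {
    min(max(linear * pow(2.0, evOffset), 0.0), 1.0)
}

let brightHighlight = 200.0                            // far above display white, but captured
print(slice(linear: brightHighlight, evOffset: 0.0))   // 1.0 -> clipped, looks blown out
print(slice(linear: brightHighlight, evOffset: -9.0))  // ≈ 0.39 -> detail recovered by sliding down
```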
 
But doesn't that pose the issue of not being able to make such pleasing adjustments if one is not able to see the difference on an LCD display (with a limited dynamic range) when editing? Either to later enjoy viewing on an LCD display, or making a print which has an even lower dynamic range?

Or... perhaps (very likely) there's something I'm not understanding.
But you can see the differences in harsh shadows and blown highlights. When combined, that many stops would virtually eliminate those kinds of issues regardless of file size or display.
 
Yeah, also a fair point, though imagine a slider or other control where you can toggle through different variants and pick the one(s) you want: you can't see the whole thing at once, but you can see a variety of different slices, maybe even picking different regimes in different parts of the image. And of course the phone/computer can "see" the whole thing and pick a "best" setting based on user input (like what they want the image to focus on), so even auto settings might be more useful.

I like that!
 