Apple Just Patented an Image Sensor With 20 Stops of Dynamic Range

Very interesting turn of events, if true. Apple can likely do to sensors what it did to CPUs and is doing to baseband chips.

My thoughts as well. What I'm wondering is when their image sensor will come to fruition and make it into a shipping iPhone. Clearly not the 17. An iPhone 18, a year later, perhaps? That seems kind of iffy. As far as I know, right now it's just a patent, with no first silicon to evaluate. Perhaps an iPhone 19?
 
I doubt the 18 unless Apple is further ahead on the sensor than has been made public through leaks or otherwise. The 19 would be a good time for a technology breakthrough, since it'll mark the iPhone's 20th anniversary, but a lot would need to happen to make that possible. I'm excited by the prospect, though, since my iPhone has become indispensable for photography.
 
I think it's great that Apple is developing its own sensor; a huge benefit is not needing to rely on Sony for image sensors going forward.

Whether 20 stops of dynamic range can be effectively and beneficially post-processed with *today's* displays, image-processing tools, and print workflows for hanging on a wall is (in my mind) debatable. It'll be interesting to see what Adobe comes up with for image-processing tools such as Lightroom. No doubt they're noodling the possibilities today.
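To put a number on the display-side question: 20 stops is a contrast ratio of 2^20 (roughly 1,000,000:1, or about 120 dB), while a typical SDR display covers maybe 8-10 stops. Here's a minimal sketch of what squeezing that range onto a display involves, using my own toy numbers and a textbook Reinhard-style tone-mapping operator, not anything from Apple's patent or Adobe's tools:

```python
# Toy illustration (not any shipping tool's pipeline): why 20 stops is
# hard to show directly, and one naive way to compress it for SDR output.
import numpy as np

SENSOR_STOPS = 20    # claimed range: 2**20 ~ 1,000,000:1, about 120 dB
DISPLAY_STOPS = 8    # a typical SDR display covers roughly 8-10 stops

# Linear scene luminances spanning the full sensor range, from 2**-20 up to 1.
scene = np.logspace(-SENSOR_STOPS * np.log10(2), 0, num=11)

# Naive linear mapping: the bottom ~12 stops all pile up at the display's
# near-black floor and become indistinguishable.
linear = np.clip(scene, 2.0 ** -DISPLAY_STOPS, 1.0)

# Simple Reinhard-style global tone map: compresses highlights smoothly
# and lifts shadows, at the cost of overall contrast.
tone_mapped = scene / (1.0 + scene)

for s, lin, tm in zip(scene, linear, tone_mapped):
    print(f"scene {s:10.2e} -> linear {lin:8.6f} | tone-mapped {tm:8.6f}")
```

The point being: the sensor can capture it, but getting it onto a screen or a print means throwing information away somewhere, and the interesting work is in choosing where.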

That said… it’s great that Apple has made that breakthrough, which may have future benefits. If the sensor does indeed have less noise, that would be a huge benefit *today* for iPhone photography.

I think of two big features of such a sensor that make it desirable:

1) More forgiving of exposure choices without loss of detail, making it easier to recover shadows or highlights in an otherwise good shot when the automatic exposure picks the wrong thing (it could even simplify auto-exposure, since you'd mostly just need to avoid blowing out a highlight).
2) Easier to get the effect of multi-exposure HDR capture (https://en.wikipedia.org/wiki/Multi-exposure_HDR_capture) without the exposure bracketing the iPhone would currently need to do (and I think it even does in certain cases?); a rough sketch of that merging follows below.

Both of these seem like they would be a good fit for a smartphone camera.
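For point 2, here's a rough sketch of the per-pixel weighted merge that multi-exposure HDR capture does today. This is the textbook approach with made-up pixel values, not Apple's actual pipeline; the helper name merge_brackets is mine:

```python
# Hedged sketch: merging bracketed exposures into one radiance estimate.
import numpy as np

def merge_brackets(exposures, times):
    """Merge bracketed shots (linear pixel values in [0, 1]) into a radiance map.

    Each pixel is averaged across frames, weighted toward mid-tones so that
    blown highlights and crushed shadows contribute little.
    """
    exposures = np.asarray(exposures, dtype=float)
    times = np.asarray(times, dtype=float).reshape(-1, 1)
    weight = 1.0 - np.abs(2.0 * exposures - 1.0)   # peaks at middle gray
    radiance = exposures / times                   # divide out exposure time
    return (weight * radiance).sum(axis=0) / np.maximum(weight.sum(axis=0), 1e-6)

# Three brackets of the same two pixels: a deep shadow and a bright highlight.
frames = [
    [0.001, 0.98],  # short exposure: shadow crushed, highlight barely held
    [0.008, 1.00],  # mid exposure:   highlight clipped
    [0.060, 1.00],  # long exposure:  shadow usable, highlight blown
]
times = [1 / 500, 1 / 60, 1 / 8]
print(merge_brackets(frames, times))
```

With a 20-stop sensor, one frame could hold both ends of that scene, and the merge step, along with the ghosting risk bracketing carries when the subject moves, mostly goes away.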
 

Supposedly coming in the 20th-anniversary phone in 2027. But some competitors supposedly will have the technology next year.
 
Using an architecture similar to Apple's, as outlined in their patent?
We don't know if it will be similar to the patent, but it will use the same idea of lateral capacitors to increase dynamic range.
 

Would that idea be close enough to infringe, even if implemented in a different manner?
 
I don't want to guess, especially because there's always a chance I could end up representing one of the parties. I can only say that Claim 1 seems pretty broad, but it may be possible to design the circuit differently and still achieve the same effect of having additional available capacitance for each pixel. But sometimes in patent law, even doing it a different way can infringe, due to something we call the doctrine of equivalents (an example: I patent a stool that's held together with nails, and you use screws).
 
Very interesting. IMHO the consumer/prosumer DSLR/MILC camera space has been stagnating around a real-world 11-13 stops of dynamic range for video over the past few years.
To see how fast and how far the smaller sensors in camera phones have come is just incredible. Once Apple/Google/Samsung manage to simulate bokeh better in video, I think we'll see an even greater retreat of content creators from MILC hardware, with it falling into even more of a niche for nature/sports/wildlife work where super-telephoto lenses are needed.

I've been watching content created on an iPhone 17 Pro on YouTube, and what can be achieved has blown me away (albeit with phenomenal talent, color grading, lenses, lighting, etc.). In effect, the limit is no longer the tiny sensor inside the camera phone...

 
For the first time, I recently traveled (Italy) and didn't take my DSLR. I used my iPhone 15 Pro Max for photography and videography. Leaving the camera at home made life much easier on planes, trains, and automobiles (a nod to one of my favorite movies: "Those aren't pillows!") and in some venues. I also brought my GoPro but didn't use it often.

For the most part, I'm quite pleased with the results, but there were definitely times when I missed my Nikon, especially in places with poor or tricky lighting, so this new sensor is intriguing. I'd rather not have to wait until at least 2027 to upgrade, though.
 
So, for what it’s worth, here’s Apple’s circuit diagram from the patent (which, to be clear, does not mean the patent is limited to this circuit):

[Image: Apple's pixel circuit diagram from the patent]


The following is from a 2020 paper I found. Very similar circuit.

[Image: similar pixel circuit from the 2020 paper]


Claim 1 of the patent is:

1. An image sensor comprising:
a logic die, comprising column readout circuits and bitlines connected to the column readout circuits;
a sensor die, overlaid on the logic die; and
an array of detector elements, each detector element comprising:
a sensing circuit, which is disposed on the sensor die and comprises:
a photodiode having cathode and anode terminals;
a floating diffusion node connected to one of the terminals of the photodiode;
a reset transistor coupled between the floating diffusion node and a reset voltage; and
a source follower transistor having an input connected to the floating diffusion node and an output;
a pixel circuit, which is disposed on the logic die and comprises:
a select transistor, which has an input coupled to the output of the source follower and an output coupled to one of the bitlines; and
a current memory circuit, which is coupled to the input of the select transistor and is configured to sense and output a signal indicative of a level of noise in the detector element; and
a lateral overflow integration capacitor (LOFIC) circuit, comprising one or more charge storage capacitors and one or more LOFIC transistors coupled between the floating diffusion node and the charge storage capacitors, wherein the reset transistor is coupled between the LOFIC circuit and the reset voltage.

The claim language requiring a floating diffusion node "connected to one of the terminals of the photodiode" is, I think, one place where Apple may be claiming something new. If you look at the bottom diagram, the top of the capacitor labeled FD′ is the floating diffusion node; it is not connected to one of the terminals of the photodiode. Compare to the top diagram, where FD (node 42) is connected directly to the photodiode.
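To make the LOFIC idea concrete, here's a toy model of how an overflow capacitor extends a pixel's effective full well. This is my reading of the general technique with made-up capacities, not the circuit in either diagram:

```python
# Toy LOFIC model: charge beyond the floating diffusion's full well spills
# into a lateral overflow capacitor instead of clipping, and readout
# combines both. Capacities below are invented for illustration.
FD_FULL_WELL = 10_000      # electrons the floating diffusion can hold (made up)
LOFIC_CAPACITY = 990_000   # extra electrons the overflow capacitor holds (made up)

def read_pixel(photo_electrons: int) -> int:
    """Total sensed charge; clips only past the combined FD + LOFIC capacity."""
    fd = min(photo_electrons, FD_FULL_WELL)
    overflow = min(max(photo_electrons - FD_FULL_WELL, 0), LOFIC_CAPACITY)
    return fd + overflow   # real sensors read the two paths at different gains

for e in (5_000, 10_000, 200_000, 2_000_000):
    plain = min(e, FD_FULL_WELL)   # conventional pixel: hard clip at full well
    print(f"{e:>9} e-  plain: {plain:>7}  with LOFIC: {read_pixel(e):>9}")
```

With these invented numbers the usable range grows 100x, about 6.6 extra stops; presumably the claimed current-memory circuit is there to deal with the noise the extra capacitance brings.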

I now have very definite thoughts about all this, which I will keep to myself.
 