Let’s talk about pixels. In particular iPhone 14 pixels. More specifically, iPhone 14 Pro pixels. Because while the main news is that the latest Pro models offer a 48MP sensor instead of a 12MP sensor, that’s not really the most significant improvement Apple has made to this year’s camera.
Indeed, of the four biggest changes this year, the 48MP sensor is the least important to me. But bear with me, as there's a lot to unpack before I can explain why I think the 48MP sensor is far less important than:
- The sensor size
- Pixel binning
- The Photonic Engine
One 48MP sensor, two 12MP sensors
Colloquially, we talk about the iPhone camera in the singular, and then refer to three different lenses: main, wide-angle, and telephoto. We do this because it's familiar — that's how DSLRs and mirrorless cameras work: one sensor, multiple (interchangeable) lenses — and because that's the illusion Apple creates in the camera app for simplicity.
The reality is, of course, different. The iPhone actually has three camera modules. Each is separate, and each has its own sensor. For example, tapping the 3x button not only selects the telephoto lens but also switches to a different sensor. As you slide the zoom control, the camera app automatically and invisibly selects the appropriate camera module, then does any necessary cropping.
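Conceptually, that module switching might look something like the hypothetical sketch below. The module names and zoom factors are illustrative placeholders, not Apple's actual implementation:

```python
# Hypothetical sketch: map a requested zoom factor to a physical camera
# module, then cover the remainder with a digital crop. The module list
# and native zoom factors here are illustrative, not Apple's real values.

MODULES = [
    ("ultra_wide", 0.5),
    ("main", 1.0),
    ("telephoto", 3.0),
]

def pick_module(zoom: float):
    """Pick the longest module whose native zoom doesn't exceed the request,
    then crop digitally to make up the difference."""
    candidates = [(name, native) for name, native in MODULES if native <= zoom]
    name, native = max(candidates, key=lambda m: m[1])
    digital_crop = zoom / native  # 1.0 means no cropping needed
    return name, digital_crop

print(pick_module(2.0))  # ('main', 2.0): stay on the main sensor, crop 2x
print(pick_module(3.0))  # ('telephoto', 1.0): switch sensors, no crop
```

The key point the sketch captures: a "2x zoom" photo is a crop of the main sensor, while "3x" is a different sensor entirely.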
Only the main camera module has a 48MP sensor; the other two modules still have 12MP.
Apple made this clear when introducing the new models, but it's an important detail that some may have missed (our emphasis):
For the very first time, the Pro lineup has a new 48MP **main camera** with a quad-pixel sensor that adapts to the picture being taken, and features second-generation sensor-shift optical image stabilization.
The 48MP sensor works part-time
Even if you’re using the main camera, with its 48MP sensor, you’re still only shooting 12MP photos by default. Again, Apple:
For most photos, the quad-pixel sensor combines every four pixels into one large quad pixel.
The only time you shoot in 48 megapixels is when:
- You are using the main camera (not a telephoto or wide-angle lens)
- You shoot in ProRAW (which is disabled by default)
- You shoot in decent light
If you want to do this anyway, here’s how to do it. But most of the time you won’t…
Apple’s approach makes sense
You may be wondering, why give us a 48MP sensor and then not use it most of the time?
Apple's approach makes sense because, in reality, there are very few occasions when shooting at 48MP is better than shooting at 12MP. And since 48MP shots create much larger files that quickly eat up your storage, there's no point in making this the default setting.
I can only think of two scenarios where making a 48MP image would be useful:
- You intend to print the photo in large format
- You have to crop the image very heavily
That second reason is also a bit questionable, because if you have to crop that heavily, you might be better off using the 3x camera.
Now let’s talk about the sensor size
When comparing a smartphone camera to a DSLR or a high-quality mirrorless camera, there are two major differences.
One is the quality of the lenses. Standalone cameras can have much better lenses, thanks to both physical size and cost: it's not uncommon for a professional or avid amateur photographer to spend a four-figure sum on a single lens. Smartphone cameras obviously can't compete with that.
The second is the sensor size. All other things being equal, the larger the sensor, the better the image quality. Smartphones, by the nature of their size, and all the other technology they need to fit in, have much smaller sensors than standalone cameras. (They also have limited depth, which puts another substantial limitation on sensor size, but we don’t need to get into that.)
A smartphone-sized sensor limits image quality and also makes it harder to achieve shallow depth of field – which is why the iPhone does this artificially, with portrait mode and cinematic video.
Apple’s large sensor + limited megapixel approach
While there are definite limits to the sensor size you can fit in a smartphone, Apple has historically used larger sensors than other smartphone brands — which is part of the reason the iPhone has long been considered the phone with the best camera quality. (Samsung later followed suit.)
But there is a second reason. If you want the best possible picture quality from a smartphone, you also want the pixels to be as large as possible.
This is why Apple has stuck religiously to 12MP, while brands like Samsung have crammed a whopping 108MP into a similarly sized sensor. Squeezing that many pixels into a small sensor significantly increases noise, which is especially noticeable in low-light photos.
Ok, it took me a while to get there, but I can finally say why I think the larger sensor, pixel binning and the Photonic Engine are a much bigger deal than the 48MP sensor…
#1: iPhone 14 Pro/Max sensor is 65% larger
This year, the main camera sensor in the iPhone 14 Pro/Max is 65% larger than last year's. That's still nothing compared to a standalone camera, of course, but for a smartphone camera it's (pun intended) huge!
But, as we mentioned above, if Apple squeezed four times as many pixels into a sensor that’s only 65% larger, that would actually result in poorer quality! That’s exactly why you’ll usually still be shooting 12 MP. And that’s thanks to…
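A quick back-of-the-envelope calculation shows why: using the article's figures, each of 48 million pixels on a sensor only 65% larger would get just over 40% of the light-gathering area that last year's 12 million pixels enjoyed.

```python
# Back-of-the-envelope check on why 4x the pixels on a 65% larger sensor
# means smaller individual pixels (figures taken from the article).
old_area = 1.0    # last year's sensor area, normalized
new_area = 1.65   # 65% larger
old_pixels = 12e6
new_pixels = 48e6

area_per_pixel_old = old_area / old_pixels
area_per_pixel_new = new_area / new_pixels
ratio = area_per_pixel_new / area_per_pixel_old

print(f"Each 48MP pixel has {ratio:.0%} of the area of last year's pixels")
# 1.65 / 4 = 0.4125, i.e. roughly 41%
```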
#2: Pixel Binning
To create 12MP images on the main camera, Apple uses a technique called pixel binning: the data from four adjacent pixels is combined into one virtual pixel by averaging their values, so the 48MP sensor effectively behaves as a larger 12MP sensor.
This illustration is simplified, but shows the basic idea:
What does this mean in practice? Pixel size is measured in microns (one millionth of a meter). Most premium Android smartphones have pixels somewhere in the 1.1 to 1.8 micron range. The iPhone 14 Pro/Max effectively has 2.44 micron pixels when the sensor is used in 12MP mode. That's a genuinely significant improvement.
Without pixel binning, the 48MP sensor would – mostly – be a downgrade.
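The averaging step itself is simple. Here's a toy sketch of 2x2 binning on a plain grayscale grid — real sensors bin per color channel on the Bayer mosaic, so this is an illustration of the idea, not Apple's actual pipeline:

```python
# Toy sketch of 2x2 pixel binning: four sensor pixels are averaged into
# one larger "quad pixel". A real sensor bins within each color channel
# of the Bayer mosaic; this simplified version uses a grayscale grid.

def bin_2x2(image):
    """Average each 2x2 block of a grayscale image (even dimensions assumed)."""
    binned = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            total = (image[y][x] + image[y][x + 1] +
                     image[y + 1][x] + image[y + 1][x + 1])
            row.append(total / 4)
        binned.append(row)
    return binned

sensor = [
    [100, 104,  20,  24],
    [ 96, 100,  16,  20],
    [ 60,  60, 200, 200],
    [ 60,  60, 200, 200],
]
print(bin_2x2(sensor))  # [[100.0, 20.0], [60.0, 200.0]]
```

Because binning halves the pixel count in each dimension, the effective pixel pitch doubles — which is how four of the sensor's native 1.22 micron pixels act as a single 2.44 micron quad pixel.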
#3: Photonic Engine
We know that smartphone cameras can’t compete with standalone cameras in optics and physics, of course, but where they can compete is in computational photography.
Computational photography has been used in SLRs for decades. For example, switching metering modes instructs the processor in your DSLR to interpret the raw data from the sensor in a different way. Likewise, with consumer DSLRs and all mirrorless cameras, you can choose from several shooting modes, which again tell the processor how to adjust the sensor's data to achieve the desired result.
So computational photography already plays a much bigger role in standalone cameras than many realize. And Apple is very good at computational photography. (Ok, it’s not good at Cinematic video yet, but give it a few more years…)
The Photonic Engine is the enhanced image pipeline that powers Apple's Deep Fusion approach to computational photography, and I can already see a huge difference in the dynamic range of photos. (Examples to follow next week in an iPhone 14 Diary piece.) Not just the range itself, but the intelligent decisions about which shadows to bring out and which highlights to tame.
The result is significantly better photos, thanks to both the hardware and the software.
A dramatically larger sensor (in smartphone terms) is a really big deal when it comes to image quality.
Pixel binning means Apple has effectively created a much larger 12MP sensor for most photos, realizing the benefits of the larger sensor.
The Photonic Engine is Apple's improved image-processing pipeline, and I can already see its real benefits.
More follows in an iPhone 14 Diary piece, in which I'll test the camera more extensively over the coming days.