iPhone vs Pro Camera; photo & video, what’s best?

iPhone vs Pro Camera; pt.1 – Capturing the Image [pt.2 – Image Data]

[Image: iPhone]

Let’s clear things up

You’re probably here for one of two reasons: you need to take photos or videos—maybe both—and you want to use your smartphone, or you heard that taking photos and videos with your smartphone isn’t the same as using a professional camera. Either way, let’s help you understand what the differences are and why you might use one over the other. Here’s a deeper look at the Smartphone vs Professional Camera comparison and how to make an informed choice.


What creates an image

This might sound complex, and perhaps that’s a misleading opening line, but it is rather simple. One word: light. Yup, that’s it, plain and simple. Okay, it is more complex than that, and yes, it can be misleading and difficult to understand. However, that one thing, light, is the most critical aspect of the entire conversation. Let’s go a bit deeper.

The primary principles of light

The life-giving source of our planet is the sun, providing us the light needed to survive. It also offers the opportunity to harness its power and beauty for a variety of uses. One of those is photography, a word attributed to Sir John Herschel in 1839. It is based on the Greek words phōs, meaning “light,” and graphê, meaning “drawing,” combined as “drawing with light.” Today’s light drawings are very different from Herschel’s, yet still… kinda the same.

Light emitted from the sun travels in light rays—this is the beginning of all this making sense—across a spectrum of light waves that we humans can see, which combine to create “white” light, or “daylight.” That spectrum of light is made up of the colors we see: red, orange, yellow, green, blue, indigo, and violet, often referred to as ROYGBIV.

[Image: green leaves nature background]

Are you following so far? GOOD. This article Why are there 7 colors in the rainbow? gives a more detailed explanation.

The colors from ROYGBIV used in digital image making are red, green, and blue: additive color theory. This theory states that all perceivable colors can be made by mixing different amounts of red, green, and blue light, the primary colors of the additive color system. The opposite is subtractive color theory, and you can read more on that in this Wikipedia article.
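
To make additive mixing concrete, here’s a minimal sketch in Python (the 0–255 channel values are the usual digital convention; the mix helper is purely for illustration):

```python
# Additive color: start from darkness (0, 0, 0) and add light.
# Each channel runs 0-255, the convention used by most digital images.

def mix(*lights):
    """Add RGB light sources together, capping each channel at 255."""
    return tuple(min(255, sum(channel)) for channel in zip(*lights))

RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

print(mix(RED, GREEN))        # (255, 255, 0)   -> yellow
print(mix(RED, BLUE))         # (255, 0, 255)   -> magenta
print(mix(GREEN, BLUE))       # (0, 255, 255)   -> cyan
print(mix(RED, GREEN, BLUE))  # (255, 255, 255) -> white
```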

Bringing it back around… our cameras capture these light rays, through hardware and software, to create the imagery we love and love to share. Gannon Burgett tells us, “No matter how you’re lighting your photographs, there are certain traits, or principles, of light that will ultimately determine the aesthetic of your photograph. In photography, there are three main principles: intensity/quantity of light, direction of light, and quality of light.” [The 3 basic principles of light].

These principles of light (intensity, direction, and quality) are converted to digital data by a capture device called a digital sensor.

[Image: RGB color theory color overlay illustration]

Turning Light Into a Digital Image

The Sensor; capturing and converting light

A digital camera’s sensor is the hardware that captures light and converts it into digital data. Sensors consist mainly of photosites (a.k.a. pixels), millions of them, and their count is expressed (kinda) in megapixels. The sensor determines how much light is used to create the image, and it affects image quality in everything from depth and perspective, to light performance, to the size of the camera and its lenses.

[Image: camera sensor]

What’s a Pixel got to do with it?

Pixels get both too much attention and a bad rap. That’s due to how a pixel relates to the dimensions of the resolution. You see, pixel dimensions are expressed as the number of pixels horizontally and vertically that define an image’s resolution (for example, 2,048 by 3,072). But a pixel only exists as a point; we cannot think of a pixel as a square, for that matter, or as anything other than a point. A pixel is only meaningful relative to the dimensions of an object, in this context the sensor. Think of a pixel as a logical unit, as opposed to a physical one.

The antagonist… Megapixel

The megapixel. Pixels are also counted in megapixels, which simply means one million pixels. Together, the horizontal and vertical pixel counts determine the digital image resolution. For example, a 45-megapixel camera sensor produces images containing roughly 45 million pixels in total; pixels per inch (ppi) only comes into play once an image is printed or displayed. Many manufacturers use the megapixel count as a sales pitch to brag that their sensor is bigger than the competitor’s.
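
A quick bit of arithmetic makes the point; the 8,192 × 5,464 dimensions below are just an illustrative pairing for a roughly 45MP sensor, not any specific camera’s spec:

```python
# Megapixels are total pixels divided by one million: horizontal count
# times vertical count. Nothing about it is "per inch."

width_px, height_px = 8192, 5464  # illustrative ~45MP dimensions
total_pixels = width_px * height_px

print(f"{width_px} x {height_px} = {total_pixels:,} pixels")
print(f"That is {total_pixels / 1_000_000:.1f} megapixels.")
# 8192 x 5464 = 44,761,088 pixels
# That is 44.8 megapixels.
```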

Yes, size does matter, but it’s more complex… Keep reading.

The star of the show, a Photosite

Enter the photosite: an individual light-sensitive element that collects the light hitting the sensor. Photosites make up the basis of the camera sensor, and typically there are millions of these little light-capturing cavities. They are often confused with the pixels of a camera; however, a pixel isn’t a pixel until the sensor’s photosites convert the captured light from analog to digital.

Photosites are measured in micrometres (or microns), represented by the symbol µm, a unit of measure equivalent to one millionth of a metre. To put this into perspective, the nominal diameter of a human hair is 75µm, while a typical photosite measures about 3.74µm (0.00374mm) across, roughly 20 times smaller than a human hair.
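
The math, as a tiny sketch (using the 75µm and 3.74µm figures from above):

```python
# Putting photosite size in perspective (micrometres, µm).
hair_diameter_um = 75.0   # nominal diameter of a human hair
photosite_um = 3.74       # width of one photosite, per the text above

ratio = hair_diameter_um / photosite_um
print(f"A human hair spans roughly {ratio:.0f} photosites side by side.")
# A human hair spans roughly 20 photosites side by side.
```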

Photosites distinguish the spectrum of color through a filter over each photosite that passes a specific RGB color: a red filter allows only red light into its photosite, a green filter only green, and a blue filter only blue. Each photosite gathers and sends information about its color’s attributes, and together they comprise the color spectrum represented in the image.
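
Here’s a rough sketch of that idea: incoming light carries all three primaries, but each photosite’s filter passes only one, so each photosite records a single value (the light levels here are made up):

```python
# Incoming light carries all three primaries; a photosite's filter
# passes only one of them, so each photosite records a single value.

incoming = (180, 220, 90)  # made-up (R, G, B) light hitting the sensor

FILTERS = {"red": 0, "green": 1, "blue": 2}  # channel each filter passes

def photosite_reading(filter_color, light_rgb):
    """A filtered photosite sees only its own primary's intensity."""
    return light_rgb[FILTERS[filter_color]]

for color in FILTERS:
    print(f"{color}-filtered photosite -> {photosite_reading(color, incoming)}")
# red-filtered photosite -> 180
# green-filtered photosite -> 220
# blue-filtered photosite -> 90
```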

[Image: photosite RGB light capture illustration]

The FORCE is strong with this one!

Sensor size: does size matter? Smartphones are small, and we appreciate the size and power of these little handheld gems. Their small, compact size, paired with high-resolution screens and cameras, is their biggest advantage for many of us. They carry powerful tiny cameras, with sensors of about 1/2.3″ (that’s a fraction of an inch) packing the highest megapixel counts smartphone cameras have offered to date. And we love snapping photos and short video clips at a moment’s notice, then posting them to all our social channels instantaneously. Amazing, what power!
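
For a sense of scale, a quick comparison sketch; the 1/2.3″ format is commonly quoted as roughly 6.17 × 4.55mm of active area, and full frame is 36 × 24mm (treat the phone figure as approximate):

```python
# Comparing light-gathering area: a typical smartphone sensor vs. a
# full-frame pro sensor. Dimensions in millimetres; the 1/2.3" figure
# is the commonly quoted approximation, not an exact spec.

def area_mm2(width_mm, height_mm):
    return width_mm * height_mm

smartphone = area_mm2(6.17, 4.55)  # ~1/2.3" sensor
full_frame = area_mm2(36.0, 24.0)  # standard full frame

print(f"Smartphone sensor: {smartphone:.1f} mm^2")   # 28.1 mm^2
print(f"Full-frame sensor: {full_frame:.1f} mm^2")   # 864.0 mm^2
print(f"Full frame collects light over ~{full_frame / smartphone:.0f}x the area.")
# Full frame collects light over ~31x the area.
```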

Can you feel the power of the FORCE?

But what we haven’t thought about is how that powerful light-sensitive sensor relies on millions of photosites, tiny ones at that to fit as many on the sensor as possible, spaced so tightly that we’ve introduced a potentially unstable capturing device maxed out for its capacity.

Okay, so we’ve been informed that megapixels measure image size (the camera resolution, i.e., how many photosites are on the sensor) and that they don’t represent quality. WHY?

This is because the smaller light cells (photosites), crammed closer and closer together, pick up more “noise,” especially in low-light conditions (more on this below). This is called pixel crosstalk, or a term I like to use, “overspill.” Manufacturers keep discovering improved ways to combat it, yet they keep cramming more and more “pixels” onto their sensors.

Let’s review: higher megapixel counts (pixel density) don’t inherently mean better or worse. Increasing the photosites on a sensor increases its resolution, but it also increases how vulnerable the sensor is to that overspill, and that’s a bad thing. Noise is only one aspect of the quality concerns; the loss of quality also includes degradation in sharpness and color.
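
To picture that overspill, here’s a toy model (a sketch, nothing like a real sensor pipeline): a row of photosites with a fixed capacity, where an overfilled photosite leaks its excess into its neighbors:

```python
# Toy "overspill": a row of photosites, each with a maximum charge
# capacity. An overfilled photosite leaks its excess into neighbors,
# corrupting readings that were fine on their own.

CAPACITY = 100

def expose(row):
    readings = list(row)
    for i, charge in enumerate(row):
        excess = charge - CAPACITY
        if excess > 0:
            readings[i] = CAPACITY
            for j in (i - 1, i + 1):          # split spill between neighbors
                if 0 <= j < len(readings):
                    readings[j] += excess // 2
    return readings

print(expose([40, 50, 60, 50, 40]))    # [40, 50, 60, 50, 40] - unchanged
print(expose([40, 50, 180, 50, 40]))   # [40, 90, 100, 90, 40] - neighbors hit
```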

[Image: camera sensor size chart]

Pixel Pitch; the heart of the matter

Pixel pitch is the distance between photosites, measured from the center of one photosite to the center of an adjacent photosite. With a lower pixel pitch the photosites are closer together, which means higher pixel density and, in turn, higher resolution. A higher pixel pitch means a greater distance between photosites, giving us lower resolution. Michael Maven explains this in his video Pixel vs Photosite vs Pixel Density vs Pixel Pitch.

Pixel pitch is very important and correlates directly with overall image quality. As pixel pitch shrinks and photosites (a.k.a. pixels) pack tighter and tighter together, the performance of the sensor is directly affected, and with it the quality of the image the camera captures.
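
A back-of-the-envelope sketch shows how dramatic the pitch difference is; the sensor widths and pixel counts below are illustrative, not specs for any particular camera:

```python
# Approximate pixel pitch = sensor width / horizontal pixel count.
# Both cameras below are illustrative ~45-48MP-class examples, not
# specs for any particular model.

def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    return sensor_width_mm * 1000 / horizontal_pixels  # mm -> µm

phone_pitch = pixel_pitch_um(6.17, 8000)       # small phone sensor
full_frame_pitch = pixel_pitch_um(36.0, 8256)  # full-frame sensor

print(f"Phone pixel pitch:      {phone_pitch:.2f} µm")       # 0.77 µm
print(f"Full-frame pixel pitch: {full_frame_pitch:.2f} µm")  # 4.36 µm
```

Similar megapixel counts, yet each full-frame photosite is more than five times wider, with far more breathing room between readings.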

Think of a balloon. If you had three matching balloons, exactly the same, and filled each with a different volume of air, the properties of those three balloons would all be different. The one with the least air would have plenty of room to spare and, let’s face it, be a rather boring balloon. The one with the most air would make us nervous because it is unstable and could burst at any moment. The third balloon in this scenario would be the perfect balloon: just the right amount of air for its size and capacity.

Now think back to that fractional smartphone sensor with all those megapixels on an itty-bitty chip. It’s much like the balloon with too much air; surely there’s some unstable light overspilling the photosites, wouldn’t you think?

[Image: pixel pitch diagram]

How this affects a Digital Image

Noise! It’s sensitive

Exposure rests on three pillars: shutter speed, aperture, and sensitivity. Shutter and aperture control how much light enters the camera, while sensitivity (ISO) is how sensitive the sensor is to that light during processing. ISO is a standardized value indicating the image sensor’s sensitivity to light. When a sensor has a low pixel pitch and you increase its sensitivity to light, it has a more difficult time keeping that light information separated; the light then overspills (remember that from before?) onto adjacent photosites, resulting in poor image quality and introduced noise. For a more detailed article, check out ISO Sensitivity: Learn How ISO Sensitivity Works.
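
A toy illustration of why cranking ISO hurts (real sensor noise behavior is far more involved): the sensor’s noise floor is fixed, and gain amplifies signal and noise together, so a dim capture keeps its poor signal-to-noise ratio no matter how much you brighten it:

```python
# Toy model: the sensor's noise floor is fixed, and ISO gain amplifies
# signal and noise together -- brightening a dim capture can't improve
# its signal-to-noise ratio (SNR).

NOISE_FLOOR = 2.0   # arbitrary units of noise every capture picks up

def snr(light_level):
    """Signal-to-noise ratio at capture; gain applied later can't raise it."""
    return light_level / NOISE_FLOOR

bright_scene = 100.0  # plenty of light, base ISO
dim_scene = 12.5      # 1/8th the light, needs 8x ISO gain to look as bright

print(f"Bright scene SNR: {snr(bright_scene):.1f}")  # 50.0
print(f"Dim scene SNR:    {snr(dim_scene):.1f}")     # 6.2
# After 8x gain the dim shot matches the bright one in brightness,
# but its noise was multiplied by 8 as well: SNR is still 6.2.
```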

[Image: ISO noise comparison]

Highlights, lowlights, and everything else

Light is the source an image sensor needs to record data. Dynamic range is the scope, the range, of light from the darkest shadows in an image to the brightest highlights. High dynamic range means a wide span from lights to darks in an image; conversely, low dynamic range has very little range from shadow to highlight. High dynamic range translates to image quality because of the amount of detail held in the brightest and darkest areas of an image, resulting in better clarity. Better clarity equates to higher quality.
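
Engineers often express dynamic range in stops: the base-2 log of the ratio between the most charge a photosite can hold and its noise floor. A sketch with illustrative numbers:

```python
import math

# Dynamic range in stops = log2(full-well capacity / read noise).
# Electron counts below are illustrative, not real camera measurements.

def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    return math.log2(full_well_electrons / read_noise_electrons)

roomy = dynamic_range_stops(50_000, 4)    # large full-frame-style photosite
cramped = dynamic_range_stops(5_000, 4)   # small phone-style photosite

print(f"Large photosite: {roomy:.1f} stops")    # 13.6
print(f"Small photosite: {cramped:.1f} stops")  # 10.3
```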

If you are capturing subject matter that has both dark and bright qualities, dynamic range is important to that image. Here’s an example: if you are showing a black hoodie but all the viewer can see is a black mass in the shape of a hoodie, then they can’t see the quality of that hoodie, just a hoodie blob. I may be mistaken, but most consumers weigh quality against cost; so, if quality can’t be determined, the chance of them investing in that hoodie is highly unlikely.

[Image: dynamic range sample]

Combating the halo effect, stay sharp

Sensors perform at their peak when their processing sensitivity to light is low. Consider this: you’re driving west at sunset, directly toward the setting sun, and seeing the road is difficult. If you could lower the intensity of that gorgeous sunset, it would be much easier to drive into the west and enjoy that beautiful moment.

Sensors are like our eyes: less intense light means less strain to see. If a sensor doesn’t need to work as hard, there is less crosstalk and overspill of light information. The result: sharper images with better detail and clarity. You might be saying, “Wait, my camera is terrible in low light!” Yes, but more on this in a future article, part 2.

You can see where this is going by now: sharpness equals detail and clarity, clarity relates to quality, and quality means better overall.

The opposite of sharpness is the halo effect, and that doesn’t mean Master Chief enters the image. The halo effect is a bright line that can appear in areas of high contrast in a photo; common spots are tree outlines against the sky. Back to photosites and pixel pitch: the denser the photosites, the harder it is for a sensor to capture the image at the highest quality. The “overspill” of light, amplified by processing that attempts to correct for sharpness, creates the halo effect.
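
You can reproduce a halo numerically with a simple unsharp-mask sharpen on a one-dimensional edge (a sketch; real sharpening pipelines are more sophisticated):

```python
# Unsharp masking on a 1D "edge" (dark ground meeting bright sky).
# Sharpening overshoots on both sides of the edge -- those overshoots
# are the bright/dark halo lines you see around tree outlines.

edge = [20, 20, 20, 20, 200, 200, 200, 200]

def blur(signal):
    """Simple 3-tap box blur, clamping at the borders."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - 1):min(len(signal), i + 2)]
        out.append(sum(window) / len(window))
    return out

def unsharp(signal, amount=1.0):
    """Sharpened = original + amount * (original - blurred)."""
    return [round(s + amount * (s - b)) for s, b in zip(signal, blur(signal))]

print(unsharp(edge))
# [20, 20, 20, -40, 260, 200, 200, 200]
# The -40 and 260 overshoot past black and white: that's the halo.
```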

[Image: halo effect artifacting]

Come on, Violet, we’re getting out of here

This may not be Willy Wonka’s Chocolate Factory, but like his candies, color matters. Light is made of photons, elementary particles that carry light information. When photons, light particles, interact with an image sensor they produce an electric charge. The charge, or electrons, is collected by the photosite. Each photosite has a maximum capacity of electrons it can collect (often called its full-well capacity).

The number of electrons each photosite collects determines its brightness, or value, on the dynamic range scale of black to white, called its tonal range. The brightness each photosite records is its tonal value, its luminosity. It is the combination of tonal value and the photosite’s RGB color-filtration information that determines the final color of each pixel collected in the image’s data during processing.
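
Putting those pieces together, here’s a sketch (with made-up numbers) of how a photosite’s electron count becomes a tonal value, and how the color filters turn three neighboring readings into one pixel’s RGB:

```python
# From electrons to a pixel: each photosite's charge is capped at its
# full-well capacity, then scaled to a 0-255 tonal value. Treating
# three neighboring filtered photosites as one pixel is a
# simplification of what real demosaicing does.

FULL_WELL = 50_000  # illustrative maximum electrons per photosite

def tonal_value(electrons):
    electrons = min(electrons, FULL_WELL)  # charge can't exceed capacity
    return round(255 * electrons / FULL_WELL)

readings = {"red": 36_000, "green": 44_000, "blue": 9_000}  # made up
pixel = {color: tonal_value(e) for color, e in readings.items()}

print(pixel)  # {'red': 184, 'green': 224, 'blue': 46}
```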

Again, if there is overspill of information, crosstalk, the color quality is affected by the same “pixel density” principle. Too much density, more overspill, lesser image quality.

[Image: sample of good violet colors]

Processing and formatting captured data

If you were surprised by just how much size matters in this equation, the processing of the data we just talked about is another aspect to consider. This is where part 2 comes in… stay tuned, it’s in the works.

TO CONCLUDE… Size does matter

Considering sensor size is another way to assess a camera’s quality and to anticipate the overall image quality it can deliver. There is no right or wrong choice; it’s just that, CHOICE. It is about knowing the intent of the image. Every image we create, post, share, or use to represent a product or service carries a visual message and intent.

That message is used to request an action from your audience: consumers of your product, service, and brand. You can determine where quality matters; better yet, your audience will make that judgement for you.

So, will the image be used on social media, where it will be viewed on smaller screens, typically with fleeting impressions—OR—will it be used on a website with consistent views from an audience seeking to make a critical or purchasing choice? iPhone or Pro Camera; what would you say is best?