Late last year, during a trip to Asia, my wife and I spent a few days in Bali, Indonesia. I decided before we went that it simply wouldn’t do to visit Bali without taking at least one photograph of the sun setting over the Indian Ocean. So as soon as we arrived, I scouted out an appropriate vantage point: an overlook on the campus of the resort hotel where we were staying, with a pleasant-looking pier in the foreground. Unfortunately, I was always somewhere else at sunset, and my only opportunity to make the shot occurred our final evening on the island, when I raced to the overlook the instant our driver dropped us off at the hotel entrance. The light wasn’t great and there was more cloud cover than I would have liked. But since this was my last chance, I made the best I could of it, snapping off a number of frames as the sun sank toward the horizon.
Even my favorite picture of the sequence hardly qualified as a great photograph. But it was okay, and later that night I mailed a copy electronically to some family members and friends (with the subject line “Obligatory Bali Sunset”). After we returned home, I performed some simple post-processing in Adobe Photoshop, sharpening it slightly to compensate for the inevitable camera sensor softness and adjusting the colors to replicate the warm hues of that tropical evening. As I say, not a great photograph, but good enough by the modest standards of the photo gallery on my website. So I posted it along with a few other pictures from our vacation, and that was that.
After I upload a picture to the web, I rarely give it a second thought. However, a few weeks later, while telling the story of my dash to the ocean for the “obligatory Bali sunset,” I recalled for the first time a detail that made me think something might be wrong with the way I had post-processed the image: while I was looking through the camera viewfinder that evening, I was explicitly thinking that I wanted to capture the distinct yellow-orange highlights cast by the setting sun on the blue water. But the color contrast between the highlights and the rest of the ocean surface wasn’t visible in my picture, because the entire photograph had an orange color cast.
So I pulled up the original image capture file and had another go at it in Photoshop. Again, the adjustments were quite simple, only this time I used a different setting for the “white balance,” a control which mediates how the image-editing software interprets the color of the ambient light that was reflected from the objects in the photograph. And this time, I wound up with a very different picture—one with distinct highlights on the water’s surface that were consistent with the colors I recalled thinking about as I made the shot that evening.
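Under the hood, a white-balance setting of the sort described above amounts to per-channel scaling: the software multiplies the red, green, and blue values so that something it believes is neutral gray renders with equal channels. Here is a minimal sketch of that idea; the pixel values and the “measured neutral” are hypothetical, and real raw converters do considerably more (this is not Photoshop’s actual algorithm):

```python
# Toy white balance: scale R and B relative to G so that a reference
# neutral (gray) patch comes out with equal R, G, B channels.
# All values here are hypothetical 0-255 intensities.

def white_balance(pixel, neutral):
    """Scale R and B so `neutral` would map to equal channels (keyed to G)."""
    r_gain = neutral[1] / neutral[0]
    b_gain = neutral[1] / neutral[2]
    r, g, b = pixel
    clip = lambda v: max(0, min(255, round(v)))
    return (clip(r * r_gain), g, clip(b * b_gain))

# A gray card photographed under warm evening light might read (200, 160, 120);
# correcting with that measurement renders it as true gray:
warm_neutral = (200, 160, 120)
print(white_balance(warm_neutral, warm_neutral))  # -> (160, 160, 160)
```

Choose a different “neutral” and the same raw pixel values produce a differently colored picture, which is exactly why two renderings of one file can disagree so much.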
But was the second version of the photograph correct? What, precisely, does “correct” mean in this context? The first version is how I remember the view from the overlook. The second version matches what I was thinking to myself at the time. Despite their different colors, the two pictures came from the same file emitted by my digital camera. Not only that, but I have confidence in the accuracy of both my memories of the overall color cast that evening and the specific contemporaneous observation about the highlights on the water, even though the two recollections are at least superficially inconsistent. So, as I write this, I honestly don’t know which image is a more accurate representation of the colors and, more important, I’m not sure that’s a meaningful question.
Electronic light-capturing devices, such as digital camera sensors, interpret colors differently than we do. (So, for that matter, do chemical light-capturing devices, such as color films.) A digital sensor, which is what was in the camera I was using that night in Bali, has tiny receptors for red, green, and blue light. The data produced by those receptors need to be interpolated and combined by software to produce a continuous range of colors, and then further processed into a usable image. This can happen inside the camera or outside of it. Most “point-and-shoot” cameras do all the processing inside the camera, and spit out a finished photograph containing the camera’s best guess as to the colors. Some more complex cameras can optionally emit an unprocessed image file—a “raw” file—which needs to be post-processed with software running on an external computer. But either way, software executing complex algorithms is required to transform the light that hits the camera’s electronic photosites into a recognizable image.
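The interpolation step above can be sketched concretely. Most sensors lay their red, green, and blue receptors out in a checkerboard-like mosaic (commonly called a Bayer pattern), so each photosite records only one color and software must estimate the other two from neighbors. This toy example assumes an RGGB layout and hypothetical sensor readings; production demosaicing algorithms are far more sophisticated than this neighbor-averaging:

```python
# Sketch of demosaicing: each photosite measures only one of R, G, B;
# the two missing channels are interpolated from nearby photosites.
# Assumes a 4x4 RGGB mosaic with hypothetical 0-255 readings.

MOSAIC = [
    [120,  90, 118,  88],   # R G R G
    [ 85,  40,  83,  42],   # G B G B
    [119,  91, 121,  89],   # R G R G
    [ 84,  41,  86,  40],   # G B G B
]

def channel_at(y, x):
    """Which color this photosite actually measured (RGGB layout)."""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def interpolate(y, x, want):
    """Average the neighboring photosites that did measure `want`."""
    if channel_at(y, x) == want:
        return MOSAIC[y][x]
    vals = [MOSAIC[ny][nx]
            for ny in range(max(0, y - 1), min(len(MOSAIC), y + 2))
            for nx in range(max(0, x - 1), min(len(MOSAIC[0]), x + 2))
            if (ny, nx) != (y, x) and channel_at(ny, nx) == want]
    return sum(vals) // len(vals)

# Reconstruct a full RGB triple for the photosite at row 1, column 1,
# which physically measured only blue:
pixel = tuple(interpolate(1, 1, c) for c in "RGB")
print(pixel)  # -> (119, 87, 40)
```

The point is that every “pixel color” in a digital photograph is already the output of an estimation algorithm, before any deliberate editing begins.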
Needless to say, that’s not the way we see color. The human eye and brain collaborate to produce meaningful colors—colors that make sense in the context of the scene we are observing—and the meaning is often subjective. Because colors prompt memories and evoke emotions, we see them through the lens (pun intended) of our own experience and expectations. Color can transform a dull subject into one that is appealing. As I said at the beginning of this essay, I’m not claiming my Bali sunset photograph is great art, but I think either of the alternative versions is worth a moment of the viewer’s attention. Remove the color, and it’s a waste of web browser real estate.
When I think of a sunset in the tropics, I visualize reddish tones suffusing the entire landscape, as in the first version of the photograph. The second version isn’t what I want to see, even though it may accurately reproduce the yellow-orange highlights on the blue ocean I recall thinking about that evening. For what it’s worth, in an informal poll, all but one of my respondents preferred the first version of the photograph. I suspect that’s because, like me, they believe a Bali sunset is supposed to appear that way. Which picture better represents the actual colors? Don’t ask me. I really don’t know.
That’s not to say color is entirely arbitrary. With a little more Photoshop manipulation, I was able to transform the highlights from yellow-orange to pink and project them onto a powder-blue sea. The water that evening definitely didn’t look like that. Unless you’re trying to emulate Andy Warhol’s famous Marilyn series, there is a limit to how much you can alter color and still claim the resulting photograph is a valid representation of the physical world.
Of course, the camera interprets reality differently than the human eye and brain in respects other than color. Driving along the waterfront of Washington, D.C., on a recent afternoon, I caught a sideways glimpse of a small tourist-boat hutch next to the road with the Thomas Jefferson Memorial behind it on the other side of the Tidal Basin. I knew that if I tried to make a photograph from that vantage point, the boat hutch would dominate the picture and the Memorial would be so small as to be difficult to recognize. My brain was able to put the two structures into appropriate perspective; a lens with a wide enough angle of view to include both would have shrunk the more distant but much larger Jefferson Memorial so that it looked like a tiny toy building. A camera simply couldn’t record what I was seeing through the car window—at least, not from where I was seeing it.
Sometimes you can compensate fairly easily for the camera’s skewed version of reality. When you point a camera up at a building, the photograph it makes will show the sides of the building converging toward the top. Your brain tells you the walls are parallel, which of course they actually are. However, it’s possible to eliminate the distortion with image-editing software by introducing an equal and countervailing distortion, which in effect pulls the sides of the image away from each other at the top of the frame so the vertical walls of the building appear the way they should. (There are special-purpose and quite expensive camera lenses that can do that optically.)
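The countervailing distortion described above can be illustrated with a coordinate remapping: rows of pixels are stretched progressively wider toward the top of the frame, pushing the converging walls back out to vertical. The frame dimensions and stretch factor below are purely illustrative (real perspective-correction tools compute the warp from the geometry of the shot rather than using a fixed factor):

```python
# Toy keystone correction: stretch each row about the frame's center,
# most at the top (y = 0), not at all at the bottom, so that walls
# which converge toward the top are pulled back to parallel.

def corrected_x(x, y, width, height, max_stretch=1.25):
    """Remap a pixel's x coordinate; rows near the top are stretched most."""
    stretch = 1.0 + (max_stretch - 1.0) * (1.0 - y / (height - 1))
    center = (width - 1) / 2
    return center + (x - center) * stretch

# A wall edge left of center that leans inward at the top gets pushed
# back outward; the bottom of the frame is left alone:
top = corrected_x(30, 0, 100, 100)      # stretched away from center
bottom = corrected_x(30, 99, 100, 100)  # unchanged
```

Points left of center move further left at the top, points right of center move further right, and the center column stays put; that is the “equal and countervailing distortion” in miniature.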
Same goes for sharpness: even when it’s snowing, or the air is full of crud, or you’re looking through a smeared window, you see the sharp edges of objects because your brain knows the telephone pole is a different object than the house behind it, which is in turn a different object than the cluster of trees behind the house. Although the camera lens may be optimally focused, digital images have some inherent softness because they are composed of chunks of light collected by the discrete photosites. Fortunately, the apparent sharpness of the image can be increased with some simple software manipulation. The software can execute inside the camera or on an external computer.
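One common form of that software manipulation is unsharp masking: blur a copy of the image, subtract the blur from the original to isolate the edges, and add the difference back, which steepens the transitions. A one-dimensional sketch over a hypothetical row of pixel values makes the idea visible (real sharpening operates in two dimensions, typically with a Gaussian blur rather than this simple three-tap average):

```python
# Unsharp masking in one dimension: original + amount * (original - blur).
# Edges (where a pixel differs from its blurred surroundings) are boosted;
# flat regions are left untouched.

def sharpen(row, amount=1.0):
    """Sharpen a row of pixel values against a 3-tap box blur."""
    out = []
    for i, v in enumerate(row):
        left = row[max(0, i - 1)]
        right = row[min(len(row) - 1, i + 1)]
        blur = (left + v + right) / 3
        out.append(round(v + amount * (v - blur)))
    return out

# A soft edge between a dark region and a light one becomes steeper,
# with the characteristic over/undershoot on either side:
print(sharpen([50, 50, 100, 150, 150]))  # -> [50, 33, 100, 167, 150]
```

Note that nothing new is recorded; the edge contrast your brain supplies for free is being manufactured after the fact.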
But it’s not always possible to make a photograph that accurately reproduces what the photographer saw. Light and dark are a challenge for digital cameras: your eye and brain adjust automagically to contrasty scenes so you can see detail in both the highlights and the shadows. If you want even to approximate that in photography, you have to make multiple shots from exactly the same camera position with different exposure values, superimpose them with image-editing software, and extract detail from the appropriate light and dark areas of each one. They call that “high dynamic range photography,” in case anyone asks you. But the images always look washed-out to me. They display technical virtuosity, but don’t look real.
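At its simplest, the multi-exposure technique described above amounts to choosing, pixel by pixel, whichever frame recorded that spot at a usable brightness. The sketch below uses hypothetical pixel values and a crude “closest to mid-gray” rule; real HDR software blends with weights and tone-maps the result rather than picking winners outright:

```python
# Toy exposure fusion: given the same scene shot at several exposures,
# keep each pixel from whichever frame rendered it closest to mid-gray,
# recovering detail in both shadows and highlights.

def fuse(exposures, mid=128):
    """Per-pixel, choose the exposure whose value is nearest `mid`."""
    return [min(column, key=lambda v: abs(v - mid))
            for column in zip(*exposures)]

dark  = [  5,  10, 130, 200]   # underexposed: highlights OK, shadows crushed
light = [ 90, 140, 250, 255]   # overexposed: shadows OK, highlights blown
print(fuse([dark, light]))     # -> [90, 140, 130, 200]
```

The fused row holds detail at both ends of the range that neither frame captured alone, which is the whole trick of high dynamic range photography.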
Controlling color is especially important, however, because it influences the way we respond to an image in a way that is independent of the image content. That’s what makes abstract expressionist paintings work. Color is analogous to contrast in that the brain interprets colors to make them look “right” regardless of the color cast of the ambient light, while the camera records them literally based on the characteristics of its sensor. There exist devices that can measure the “color temperature” of light. I’ve never used one, but presumably if I had taken such a measurement that evening in Bali, I would have had an objective number to enter into Adobe Photoshop when I was editing my sunset photograph. Then, perhaps, I could be certain which of my two alternate versions of reality is correct. Of course, that doesn’t mean I wouldn’t still prefer the other one.