Computational photography pressure - When phone photos look “better”
How do you deal with client expectations shaped by computational photography?
I recently photographed an event where the lighting was challenging. There was a wide dynamic range, mixed and uneven light, and not many moments where the scene looked effortlessly polished. I brought along both my Nikon Z9 and Zf, but most of the shots ended up being taken with the Z9.
I was still able to deliver a set of technically solid, well-lit photos. I edited them with selective masking and local adjustments, but I kept the overall look fairly realistic and true to the actual conditions.
When I shared the gallery, I got the impression that the organizer was hoping for something a bit more “spectacular.” I noticed that some attendees had taken smartphone photos, and it seemed like she reacted more positively to those. The phone images had that appealing look: faces were evenly lit, with controlled, punchy contrast, giving off a sort of instant “cinematic” feel, and the lighting appeared flawless.
I found that surprisingly difficult to deal with. Maybe part of it is my own skill level, and I’m open to that. But I also feel that computational photography has changed what non-photographers expect from images, especially in difficult lighting. Phones often produce an immediately pleasing version of reality, while professional cameras give us a more honest file that still requires judgement and restraint.
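For anyone curious what that "immediately pleasing version of reality" amounts to under the hood, here is a toy sketch of two ingredients of the computational look: shadow lifting via a gamma curve, plus local contrast ("clarity") via unsharp masking. This is purely illustrative with made-up parameter values; real phone pipelines involve multi-frame HDR merging, semantic face relighting, and much more.

```python
import numpy as np

def smartphone_style_tonemap(lum, shadow_lift=0.45, clarity=0.6):
    """Toy approximation of the 'computational' look on a luminance
    channel in [0, 1]: lift shadows with a gamma curve, then boost
    local contrast with unsharp masking. Illustrative only."""
    lifted = np.power(np.clip(lum, 0.0, 1.0), shadow_lift)  # brighten shadows
    # Crude 5x5 box blur as the low-pass filter for unsharp masking
    k = 5
    pad = np.pad(lifted, k // 2, mode="edge")
    blur = np.zeros_like(lifted)
    for dy in range(k):
        for dx in range(k):
            blur += pad[dy:dy + lifted.shape[0], dx:dx + lifted.shape[1]]
    blur /= k * k
    punchy = lifted + clarity * (lifted - blur)  # amplify local detail
    return np.clip(punchy, 0.0, 1.0)

# Simulated wide-dynamic-range scene: deep shadows up to bright highlights
scene = np.linspace(0.02, 1.0, 64 * 64).reshape(64, 64)
out = smartphone_style_tonemap(scene)
```

Even this crude version pulls faces out of shadow and adds the punchy micro-contrast people associate with phone shots; the trade-off is that it flattens the actual lighting of the scene, which is exactly the honesty a realistic edit preserves.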
For those of you shooting events professionally: do you feel pressure to match the “perfect” computational look of smartphone photos? How do you handle clients who seem to prefer that kind of processing?