We’re about to find out how comfortable we really are with AI photos, and it’s happening soon. I effortlessly erased a couple vaping on a beach towel. They were right there in the background of one of my vacation photos, ruining the mood. Google Pixel’s Magic Editor removed them with just a few taps, and you know what? I’m fine with that. What’s harder to figure out is where the boundaries are.
The vacation photos I took with my iPhone and edited with my Pixel 8 Pro are exactly what Google’s generative AI editing features are designed for. We went to a beach on Lake Michigan to watch the sunset, and I took a cute photo of my child sitting on my husband’s shoulders. It’s the kind of moment you want to freeze and keep in a jar for a lifetime.
Except for a couple of people in the background. Three taps in Magic Editor and they were gone. But while I was in there, I started wondering what else I could change about the scene. What about the few cars in the parking lot behind them? The trash cans in the distance? Maybe I could play up the glow of the setting sun a little more?
I fiddled with the AI tools and discovered that, yes, I could do all of those things. But after all those modifications, could I still call these photos of our vacation? Or had I fallen into the “this is a memory, not a photo” trap? At that point I got a little uneasy and simply closed the app.
Things are about to get even weirder. The Pixel 9 series, launching on August 22, will come with a whole new tier of generative AI tools that let you “reimagine” entire sections of a photo. You’ll be able to add objects and scenery to images with text prompts, or get everyone into a group shot by merging two different frames. It won’t just be tweaking the background and lighting of your vacation photos; you’ll be able to change the location entirely. Cleaning up a few parked cars is nothing compared to what’s coming in a few days.

Like me, not everyone is on board with this shift. In fact, some are running in the opposite direction as fast as they can. iPhone camera app maker Halide just released a new mode called Process Zero that skips AI and multi-frame processing, turning the clock back to the early days of phone cameras, before computational photography. Gen Z is driving a retro digital camera renaissance, chasing a rougher, lo-fi aesthetic that modern phone camera apps, tuned to lift shadows, boost saturation, and brighten faces, can’t offer.
Personally, I’d rather stick with native camera apps that squeeze the most out of every pixel. But Process Zero is a powerful response to the current moment in AI-saturated tech, not unlike the backlash to Google’s recent Summer Olympics gaffe. The company ran an ad in which a father used Gemini to help his daughter write a fan letter to her track-star idol. It offended plenty of people who felt that writing the letter yourself was the point. Google eventually pulled the ad.
Imperfections are sometimes the point
The thing is, imperfections are sometimes the point. A heartfelt letter written in your own words, however clumsy, is what matters; smoothing out the rough edges dehumanizes the final product. I think Gen Z’s preference for rough, “dumb” digital cameras reflects a similar impulse: when everything looks too good, it feels less personal.
As with computational photography, we’ll each find our own ways to use generative AI photo editing; these tools certainly aren’t going away anytime soon. For certain kinds of photos, I do like the option to remove distractions from the background. But I don’t need every photo to look polished enough to print on a Christmas card, just as I wouldn’t write a letter to a friend like a college application essay. Sometimes a little grit is perfect.