
When the Lens Lies: AI Is Redrawing Photography—and Our Sense of Truth

[Image: AI-generated portraits]

Two Sides. One Story. You Make the Third.

by Carlos Taylhardat


Intro

I built Art of Headshots on a simple belief: if you treat a portrait like a conversation—not a transaction—you can capture the part of a person they’re proud to show the world. We grew on that idea. Fortune 100 clients. Studios in California, Washington, and Florida. Thousands of people walked in anxious and walked out a little taller.

Then the world stopped. During the pandemic, I kept the lights on week to week, scaling only with revenue and, when I had to, taking subprime loans to survive. We made it through, but the market came back changed. “Headshots” no longer meant a camera, a lens, and a human moment; it meant algorithms. AI-generated portraits, though not yet very good, took a concerning share of our revenue and left me wondering when they would destroy Art of Headshots altogether. They didn’t just undercut a business model; they challenged the very thing I’d spent a career defending: capturing the essence of people. So I’ve taken a leave of absence; my family has suffered too much through my creative endeavours and entrepreneurial journey.

That’s why this story matters to me. Not as a photographer mourning the past, but as a reporter asking a present-tense question: when a perfect picture can be made without a camera or a subject, what, if anything, still makes an image real?


The picture had everything judges dream of: an uncanny 1950s mood, exquisite light, faces you felt you’d met before. It won a prestigious photography award. Then the artist, Boris Eldagsen, said the quiet part out loud: the image wasn’t photographed at all. It was generated by AI. He refused the prize—on purpose—to force the question that now stalks every screen we look at: When an image moves us, does it matter if a camera never saw it? Artforum

A year later, the script flipped. Photographer Miles Astray entered a real flamingo photo into an AI-only category and won, before revealing the prank. His point was simple and savage: if juries can’t tell the difference, our categories and our claims of “authenticity” are already unstable. The Guardian


What Changed (Fast)

Generative image models now synthesize lighting, glass reflections, and skin micro-texture with unnerving fidelity. Humans already miss the telltale seams: one benchmark study found people misclassify nearly 39% of images when asked to separate real photos from AI fakes. Even top detectors fail on a non-trivial share. Seeing is no longer knowing. arXiv

Institutions are scrambling. World Press Photo updated rules: entries must be made with a camera; no generative fill and no synthetic images. Newsrooms like the Associated Press forbid altered reality in news imagery and require clear labelling of illustrations. Platforms, meanwhile, are testing provenance labels and AI disclosures. Meta and TikTok say they’ll tag AI media; YouTube has piloted C2PA “captured with a camera” authenticity badges. Progress is real; coverage is partial. World Press Photo · Associated Press · Reuters

On the capture side, Content Credentials (C2PA)—an open standard from the Content Authenticity Initiative—lets cameras and software cryptographically record how an image was made and edited. Leica shipped a camera with Content Credentials; Nikon has begun rolling out support on newer bodies. Adoption is growing but not universal, and some implementations still face real-world stress tests. Leica · Content Authenticity Initiative · Nikon

Two Narratives

Narrative A — AI as a Threat to Authenticity

For nearly two centuries, photographs have carried a presumption of truth. The camera became an eyewitness, its images treated as proof in newspapers, courts, and family albums. That foundation is now cracking. When AI-generated or heavily altered images slip into circulation without disclosure, the reliability of visual evidence dissolves. If a war crime photo can be dismissed as synthetic, or a fabricated image can be weaponized as “evidence”, then journalism and justice lose one of their oldest anchors.

The danger lies not only in the images themselves but in the climate of doubt they create. A public primed to disbelieve what it sees is a public vulnerable to manipulation. In elections or conflicts, a single fake image can travel faster than the correction; by the time experts expose the forgery, the lie has already done its damage. Even the institutions once trusted to arbitrate authenticity (photo contests, art juries, newsrooms) now risk rewarding illusions or punishing the real. When Miles Astray’s genuine flamingo photograph fooled an AI-only competition, it was more than a prank; it was a demonstration that judges, and perhaps all of us, no longer know where the boundaries lie.

There is a final, quieter threat: cultural memory itself. If archives are seeded with unlabeled AI images, what story will they tell a century from now? A generation raised on doctored evidence could inherit not just a distorted present but an imagined past.


Narrative B — AI as Creative Expansion

Yet there is another way to see this rupture, not as a betrayal of photography, but as its next frontier. Artists like Boris Eldagsen describe “promptography” as a hybrid medium, half visual art, half conceptual writing. The process is no less deliberate than composing in a darkroom: ideas are drafted, discarded, refined; an artist spends hours coaxing the algorithm into producing a mood, a gesture, a face. The result is not a stolen reality but a fabricated memory, and some argue it is more honest about its unreality than many traditional photographs have ever been.

For those without the budgets for cameras, travel, or lighting rigs, AI tools open doors once bolted shut. A student in Lagos can conjure a scene of Arctic explorers; a single mother in Manila can illustrate myths from her grandmother’s village. Entire genres may emerge: photographs that never claim to depict the world as it is, but instead propose the world as it might have been. Just as the invention of photography freed painting from the duty of realism, AI may free photography itself from the burden of literal truth, allowing it to become a more symbolic and interpretive art form.

What matters, then, is not that AI images exist but how we classify and present them. Clear labelling (“captured with a camera,” “AI-assisted,” or “fully synthetic”) would let audiences meet each work on its own terms. Freed from suspicion, photographers and artists alike could explore new territory without fear of being mistaken for frauds. In this vision, AI is not an enemy but a companion, an expansion of the canvas rather than its erasure.



The Third Narrative — Truth Plumbing

The way forward may not lie in deciding whether AI belongs inside or outside of photography, but in building the plumbing of trust that runs beneath all images. Think of it less as a culture war and more as an infrastructure project: laying pipes that carry credibility from shutter to screen.

That begins with provenance by default. The Content Authenticity Initiative and its open C2PA standard allow cameras and editing software to record a cryptographic “paper trail” of an image’s life, from capture to every adjustment along the way. A Leica with Content Credentials switched on, or a Nikon embedding secure metadata, offers audiences something the eye alone cannot: a verifiable history. If provenance becomes as routine as a timestamp, doubt loses its leverage.
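The real C2PA standard embeds cryptographically signed manifests inside the image file itself, but the underlying idea of a tamper-evident edit trail can be illustrated with a toy sketch. The Python below (an illustrative simplification, not the C2PA format) chains each recorded step to the hash of the one before it, so altering any earlier step, say, rewriting “camera” to “generator”, breaks verification:

```python
import hashlib
import json

def record_step(chain, action, payload):
    """Append an edit step whose hash covers the previous step's hash,
    forming a tamper-evident chain (capture -> edit -> export ...)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"action": action, "payload": payload, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    """Recompute every hash; any change to an earlier step breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: entry[k] for k in ("action", "payload", "prev")}
        if entry["prev"] != prev:
            return False
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# Build a tiny provenance trail, then tamper with it.
trail = []
record_step(trail, "capture", {"device": "camera", "file_sha256": "abc123"})
record_step(trail, "edit", {"tool": "raw-converter", "op": "white-balance"})
print(verify(trail))   # True: the trail is intact
trail[0]["payload"]["device"] = "generator"
print(verify(trail))   # False: the tampering is detectable
```

What C2PA adds on top of this basic chaining is digital signatures from trusted hardware and software, so a viewer can check not just that the history is internally consistent but who vouched for each step.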

Then comes transparency in labelling. Just as food packaging distinguishes “organic,” “processed,” and “contains peanuts,” our platforms should mark “captured with a camera,” “AI-assisted,” or “fully synthetic.” For newsrooms, disclosure is less about decoration than about survival. Readers who know when they’re seeing an illustration are more likely to trust the photographs that are not.

Some spaces will need stricter boundaries. Journalism, for example, is a zero-tolerance zone. The Associated Press already forbids adding or removing elements in its photographs, and for good reason: an image purporting to show reality cannot be half real. The credibility of news depends on that bright red line.

Finally, there is the audience itself. We will never catch every fake with the naked eye; research shows that even attentive observers misclassify nearly four in ten AI images. But literacy still matters. Readers can be taught to notice the physics of light, the way reflections fall, and the strange geometry of hands. More importantly, they can be trained to click the small “info” chevron for provenance, to demand proof before they share a belief.

Plumbing is not glamorous. It rarely makes headlines. But without it, even the most beautiful house collapses. In the same way, the future of photography, camera-made or computer-made, depends not on choosing sides but on whether we can build the hidden systems that keep trust flowing.


Case Study, Revisited

  • Eldagsen forced a category crisis by proving AI images could pass as award-winning “photographs.” His refusal of the prize made the ethical line visible: if images are born in silicon, say so, and don’t mix them with camera work without flagging it. Artforum
  • Astray forced the opposite crisis: if judges assume the weird is AI, they’ll dismiss authentic, serendipitous reality. That’s not progress; it’s prejudice in a new costume. The Guardian

Together, they expose our real problem: not pictures but process. We lack shared, enforced norms about provenance, disclosure, and category boundaries.


Navigating the AI Image Era: Practical Steps for Digital Literacy

As AI-generated images blur the line between reality and fiction, protecting yourself from deception starts with sharpening your digital literacy. Here are practical steps to navigate this new visual landscape:

  1. Question the Source: Always check where an image comes from. Is it from a reputable outlet or an unverified social media post? Cross-reference with trusted platforms or reverse-image search tools like Google Lens to trace origins.
  2. Look for Telltale Signs: AI images often have subtle flaws—irregular textures, unnatural lighting, or inconsistent details (like mismatched hands or backgrounds). Train your eye to spot these anomalies.
  3. Demand Transparency: Support platforms and creators that label AI-generated content clearly. Push for industry standards, like digital watermarks or metadata tags, to distinguish real from synthetic.
  4. Stay Skeptical, Not Cynical: Not every image is fake, but every image deserves scrutiny. Verify claims with multiple sources, especially during high-stakes events like elections or breaking news.
  5. Educate Yourself and Others: Learn about AI tools like Midjourney or DALL-E to understand their capabilities and limits. Share this knowledge to build a community that values truth over clicks.

By adopting these habits, you can reclaim agency in a world where seeing isn’t always believing. The goal isn’t to distrust everything but to trust wisely—equipping yourself with the tools to discern truth in an AI-driven age.


What’s at Stake for News

  • Policy is sprinting to catch up. The EU’s AI Act bakes in transparency duties for higher-risk systems that shape information flows—a sign that regulators expect provenance and disclosure to become table stakes. Reuters
  • Newsroom credibility. Surveys show audiences are wary of AI-made news, especially in politics; over-promising automation could erode trust faster than any scoop can rebuild it. Reuters
  • Tooling gaps. Even promising standards like C2PA face platform and device gaps—smartphone capture, messaging apps, and some social networks still strip or ignore credentials. Until adoption is broad, bad actors can route around trust. The Verge

A Reader’s Mini-Guide: How to “Read” an Image in 2025

  1. Check the label. Look for “AI-generated,” “AI-assisted,” or “captured-with-camera” badges. On some sites and apps, tap the info icon for Content Credentials. The Verge
  2. Interrogate the source. Is the image from a newsroom with published standards (AP/Reuters), a contest with clear rules (WPP), or a random account? Standards matter. Associated Press
  3. Zoom the physics. Hands, teeth, jewelry, reflections, text—AI still stumbles there, and even when it doesn’t, provenance can settle debate.
  4. Assume ambiguity on virals. If a sensational image lacks provenance or trusted attribution, treat it like a rumor until proven real.

Close

Eldagsen and Astray each “lied” to tell a deeper truth: photography and perception are diverging. If we harden the pipeline—credentials on capture, clear labels on publish, strict rules for news, literacy for audiences—the image itself can be anything, and the trust can still be earned.

The tool is not the villain. Opacity is. Your move, dear reader: in a world where the lens may lie, how will you decide what to believe?


Quick FAQ (AI-Search Friendly)

Q: Did AI really win a major photo prize?
A: Yes. In 2023, Boris Eldagsen’s AI image won a Sony World Photography Award category; he rejected the prize to spark debate. Artforum

Q: Can judges tell AI from real photos?
A: Not reliably. A large study found humans misclassified ~38.7% of images when asked to separate real from AI. NeurIPS paper

Q: What rules do top photo contests use now?
A: World Press Photo requires camera-made images and bans generative fill and synthetic images across categories. World Press Photo

Q: How are platforms addressing AI images?
A: Meta and TikTok moved to label AI content; YouTube piloted C2PA “captured with a camera” labels on some uploads. Reuters

Q: What is C2PA/Content Credentials?
A: An open standard that records an image’s creation and edits; supported by tools like Photoshop/Lightroom and some cameras (e.g., Leica M11-P; Nikon support rolling out). Leica
