
AI-Generated News: Revolutionizing Information or Undermining Truth?


By Carlos Taylhardat

Editor’s note (updated November 28, 2025): This article was originally published on May 26, 2025. We have updated it after a year of new AI scandals, newsroom experiments, union battles, and our own evolving practices at 3 Narratives News. For a narrative deep dive, see “Truth and Lies About AI Assistance in the Newsroom — Revealed” and our policy page “How 3 Narratives News Uses AI Search Assistance.” Together with stories like “When the Lens Lies: AI Is Redrawing Photography — and Our Sense of Truth”, these pieces explain how we are experimenting with AI while trying to change how news is told—from Ukraine coverage built on Ukrainian sources to multi-layered investigations.

“Two Sides. One Story. You Make the Third.”

I. The Rise of AI in Newsrooms

Before anyone called it “AI slop” or worried about deepfakes, AI arrived in the newsroom in a far more modest role: the quiet assistant on the night shift.³ It started with small, structured tasks—the kind that make reporters’ eyes glaze over at 11:45 p.m., just as a city council meeting runs long or a minor election race tightens.

At The Washington Post, a system called Heliograf was trained not to write novels, but to turn raw numbers into clean, readable updates. In 2016 it generated real-time coverage of more than 500 local election races—firing off short alerts the moment a precinct’s results shifted.⁴ Human editors then stepped in, layering on color, context, and the small human details that turn numbers into a story.⁵
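To make the "raw numbers into clean, readable updates" idea concrete, here is a minimal, purely illustrative sketch of template-based story generation in the spirit of systems like Heliograf. The function, field names, and data are hypothetical, not the Post's actual pipeline.

```python
# Illustrative sketch: turn structured race results into a one-line alert,
# in the spirit of template-based systems like Heliograf.
# All names and numbers here are invented for the example.

def election_alert(race: dict) -> str:
    """Render one short update from structured race results."""
    leader = max(race["candidates"], key=lambda c: c["votes"])
    total = sum(c["votes"] for c in race["candidates"])
    share = 100 * leader["votes"] / total
    return (
        f"{leader['name']} leads the {race['name']} race with "
        f"{share:.1f}% of votes counted "
        f"({race['reporting']}% of precincts reporting)."
    )

race = {
    "name": "District 4 council",
    "reporting": 62,
    "candidates": [
        {"name": "A. Rivera", "votes": 5_412},
        {"name": "B. Chen", "votes": 4_980},
    ],
}
print(election_alert(race))
```

The appeal for a newsroom is obvious: once the template is written and vetted, every precinct update can be rendered instantly, leaving the judgment calls to editors.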

Across the Atlantic, Reuters built Lynx Insight to do something slightly different: not publish stories, but whisper leads. The tool sifted through terabytes of data, flagging sudden spikes in commodity prices or unusual patterns in corporate filings—signals that a human reporter could chase down.⁶
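The "flagging sudden spikes" pattern can be sketched with a generic rolling z-score check. This is a common anomaly-detection idea offered only as an illustration; it is not Reuters' actual Lynx Insight logic, and the numbers are invented.

```python
# Illustrative sketch: flag unusual moves in a price series with a
# rolling z-score. A generic anomaly-detection idea, not Lynx Insight.
from statistics import mean, stdev

def flag_spikes(prices, window=5, threshold=3.0):
    """Return indices where a value sits more than `threshold` standard
    deviations from the rolling mean of the preceding `window` values."""
    flags = []
    for i in range(window, len(prices)):
        recent = prices[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(prices[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

prices = [100, 101, 100, 102, 101, 100, 131, 101]
print(flag_spikes(prices))  # the jump to 131 stands out
```

The point of a tool like this is not to write the story but to tell a reporter where to look: the flagged index is a lead, not a conclusion.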

“We’re not replacing reporters,” says Reuters’ data-science lead. “We’re arming them with insights they’d never unearth on their own.”⁷

In those early years, AI felt like a specialist lens or a new spreadsheet function. Then came the missteps that reminded everyone how quickly that lens can distort.

In 2024, a syndicated “Heat Index” summer book list slipped into news sites around the United States. Buried among the expected thrillers and beach reads were at least five titles that did not exist—books invented by an AI system and waved through by human editors.⁸ The supplement had to be withdrawn. For many newsrooms, it became a cautionary parable: AI can be a powerful research aide, but it’s no substitute for verification.

II. Case Study: The “AI Slop” Epidemic

Once publishers realized AI could write entire articles, another temptation kicked in: scale. Why commission one restaurant review when a machine can spit out a hundred? Why send a reporter to a minor game when a model can remix the box score and some boilerplate?

That’s where the term “AI slop” took hold—shorthand for the kind of low-grade, mass-produced copy that clogs search results and social feeds.¹⁰ In 2023, readers discovered that Sports Illustrated had quietly published service pieces under bylines that were not real people at all, but AI-fabricated personas with stock-photo headshots and robo-written bios.¹¹ The prose felt off. Details rang false. Complaints followed.¹²

“We thought we were experimenting,” recalls one insider. “What we actually did was erode trust overnight.”¹³

The lesson was brutal but simple: readers may forgive the occasional typo, but they will not easily forgive the sense that a publication has quietly swapped human judgment for volume metrics.

III. The Ethical Tightrope

As AI systems got better at sounding human, the questions became less technical and more moral. When an AI-generated line turns out to be wrong, who owns the error?¹⁴ Is it the coder, the editor who hit publish, or the company that demanded “more content” with fewer staff?

Another question cuts straight to the reader’s experience: How should AI-assisted reporting be labeled so people know what they’re consuming?¹⁵ A short disclosure? A dedicated badge? A full methodology note? Each choice sends a signal about how seriously a newsroom takes transparency.

And hovering over all of this is a darker possibility. What happens when foreign actors and domestic propagandists weaponize AI-generated deepfakes? During the 2024 election cycle, manipulated videos depicting fabricated candidate statements rippled through social networks before fact-checkers could react.¹⁶

Poynter’s recent survey captured the public’s split-screen view of this technology. About 70% of readers said they distrust news that is labeled as AI-generated, yet 60% also believe algorithms could make fact-checking faster and more thorough.¹⁷ In one chart, you can see the central dilemma of modern journalism: the tools designed to bolster truth can just as easily erode it.

IV. McClatchy’s Localization Gamble

Not every experiment has been about scale for scale’s sake. In some places, AI showed up as a bandage for shrinking staffs and growing news deserts.

At McClatchy, a chain with dozens of local papers, executives faced an old problem in a new form: they had more school board meetings, youth sports, and community events than their reporters could possibly cover.¹⁸ They turned to an AI system to help fill the gaps, asking it to generate short neighborhood event summaries and high-school sports recaps from structured data.

Senior VP Cynthia DuBose framed it as a way “to free our reporters from routine beats”—a tool to handle the low-stakes write-ups so journalists could focus on deeper stories.¹⁹ At first, though, the system churned out dozens of near-identical restaurant-review blurbs, generic to the point of parody. Readers noticed.

But McClatchy did something important: they treated the backlash as feedback. Through iterative loops—tweaking prompts, tightening rules, and feeding human critique back into the system—the quality improved.²⁰ The case became a small proof of concept for a larger idea: AI works best when it is coached, not worshipped.

V. A Union Pushback: Politico’s AI Contract Clash

While some newsrooms experimented in-house, others dragged AI into the bargaining room.

At Politico, reporters and editors pushed for something unusual in a union contract: guardrails around algorithms. In 2024, they won language that required 60 days’ notice and collective bargaining before management could roll out significant new AI tools.²¹ It was a way of saying: if software is going to reshape our work, we get a say.

The test came quickly. When Politico’s owners introduced AI tools for subscriber newsletters without going back to the table, the PEN Guild filed for arbitration, arguing that “AI is performing work traditionally done by journalists.”²² However that case is resolved, it marks a turning point: for the first time, an American newsroom is treating AI deployment not just as a tech decision, but as a labor issue.

VI. Wyoming’s Cautionary Tale

Sometimes, the clash isn’t between workers and management, but between deadlines and judgment.

In Cheyenne, Wyoming, a small daily paper published a feel-good human-interest story about a local teacher.²³ On the surface, it looked harmless. But the quotes attributed to the teacher were AI inventions: polished, heartwarming, and completely fabricated. Under pressure to file fast, the reporter had leaned on a generative tool and skipped the old-fashioned step of calling the source to confirm the quotes.

The teacher recognized the words immediately—not as hers, but as something that had been put in her mouth. She called the paper. She posted publicly. Readers felt duped.²⁴ The reporter resigned. The paper banned unvetted AI. In a small corner of Wyoming, one mistake rehearsed the core lesson of this entire debate: speed should never trump accuracy.

VII. The Human Element

For all the hype, most working journalists will tell you their job is still stubbornly analog. It is about showing up in rooms, calling people who don’t want to talk, noticing who glances away when a sensitive topic comes up.

AI can help with some of that work—it can transcribe, summarize, and flag patterns—but it cannot yet replace the quiet, relational labor that makes reporting possible. As New York Times tech columnist Kevin Roose has written, “AI can crunch data, but it can’t read the room.”²⁵ It doesn’t know when a pause in an interview means “ask that again” or when a politician’s over-polished answer hides something important.

“Our duty is not to offload responsibility,” argues Washington Post CTO Scot Gillespie. “It’s to ensure every story—human or machine-assisted—upholds our standards.”²⁶

That line—not offloading responsibility—is where the conversation ultimately returns. AI may change the workflow, but it does not change who answers when a story is wrong: the newsroom, not the model.

VIII. Looking Ahead: Guardrails for Trust

If AI is going to stay in the newsroom—and it is—the question becomes less “should we use it?” and more “under what rules?” In practice, three guardrails now matter more than any others.

  1. Transparency: Clearly label AI-assisted content, and publish accessible editorial policies on how these tools are used.²⁷ Readers do not expect perfection, but they do expect honesty.
  2. Human-in-the-Loop: Make human editorial review non-negotiable.²⁸ AI can suggest, draft, and summarize, but a named editor should always decide what is true enough to print.
  3. Media Literacy: Equip audiences—and journalists—to spot deepfakes, “AI slop,” and other automated distortions.²⁹ In an era of synthetic text, synthetic images, and synthetic voices, skepticism is a survival skill.
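The first two guardrails above can even be enforced mechanically. Here is a minimal, purely hypothetical sketch of a publishing gate that blocks a draft until a named editor has signed off and any AI-assisted piece carries a disclosure; the field names are invented, not any real CMS schema.

```python
# Illustrative sketch: a publishing gate enforcing two guardrails,
# a named human reviewer and an AI-assistance disclosure.
# Field names are hypothetical, not any CMS's real schema.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Draft:
    headline: str
    body: str
    ai_assisted: bool
    reviewed_by: Optional[str] = None  # named editor, or None

def ready_to_publish(draft: Draft) -> Tuple[bool, str]:
    """Block publication until a human editor signs off; require a
    visible disclosure on any AI-assisted piece."""
    if draft.reviewed_by is None:
        return False, "needs human editorial review"
    if draft.ai_assisted and "AI-assisted" not in draft.body:
        return False, "needs an AI-assistance disclosure"
    return True, "ok"

draft = Draft("Results roundup", "Short recap text.", ai_assisted=True)
print(ready_to_publish(draft))  # blocked: no reviewer yet
draft.reviewed_by = "J. Editor"
draft.body += "\n(This story was AI-assisted and edited by humans.)"
print(ready_to_publish(draft))  # passes both checks
```

The design choice worth noting: the gate records a named person, not a team, so that accountability for the published story always resolves to a human.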

At 3 Narratives News, we try to apply those guardrails in ways that fit our mission. In some investigations—such as our Ukraine coverage—we deliberately build the reporting stack on local-language and frontline outlets, then use AI only to organize, translate, and stress-test our understanding, not to dictate the story’s angle. The aim is simple: use the machine without letting the machine use us.

“We’re not outsourcing our conscience,” a Vanity Fair editorial reminds us. “We’re augmenting it.”³⁰


§ Citations:

  1. AP’s AI earnings reports launch, AP News, 2018.
  2. AP’s misattributed revenue incident, AP News, 2024.
  3. Overview of AI in media, Wired, 2025.
  4. Washington Post Heliograf election coverage, The Washington Post, 2016.
  5. Heliograf editor quote, The Washington Post release.
  6. Reuters Lynx Insight case study, Reuters, 2016.
  7. Reuters data-science lead quote, Reuters, 2016.
  8. Heat Index fake books scandal, AP News, 2025.
  9. AP syndicator response, The Atlantic, 2025.
  10. Definition of “AI slop,” NewsGuard Report, 2025.
  11. Sports Illustrated AI byline fiasco, AP News, 2023.
  12. SI reputation damage, AP News, 2023.
  13. Insider quote, Wired, 2025.
  14. Accountability question, Semafor, 2025.
  15. Transparency in AI labeling, Nieman Lab, 2024.
  16. Deepfake election videos, TechRadar, 2024.
  17. Poynter/U Minnesota survey, Poynter, 2025.
  18. McClatchy AI sports recaps, Taboola, 2025.
  19. DuBose generic restaurant reviews, Taboola case study.
  20. “Learn from human critique,” United Robots AI case study, 2025.
  21. Politico union AI clause, Wired, 2025.
  22. PEN Guild arbitration claim, Wired, 2025.
  23. Wyoming reporter fake quotes, AP News, 2024.
  24. CBS News follow-up on resignation, CBS News.
  25. Kevin Roose “read the room,” Vanity Fair, 2023.
  26. Scot Gillespie on AI ethics, The Washington Post / Arc Publishing release.
  27. Labeling AI content guidelines, CJR, 2025.
  28. Human-in-the-loop best practice, Nieman Lab, 2024.
  29. Media literacy call, Poynter, 2025.
  30. Vanity Fair editorial on AI trust, Vanity Fair, 2023.

Comparative Analysis: Benefits vs. Risks

To summarize the dual nature of AI-generated news, the table below compares key benefits and risks based on the research:

| Aspect | Benefits | Risks |
| --- | --- | --- |
| Speed and Efficiency | Automates routine reporting, reduces costs, enables real-time updates | May prioritize speed over accuracy, leading to errors or oversights |
| Accessibility | Democratizes news access, especially in underserved regions | Risk of amplifying misinformation to wider audiences |
| Objectivity | Reduces human bias in data-driven stories | Can inherit biases from training data, skewing coverage |
| Journalistic Role | Frees journalists for in-depth analysis and investigative work | May erode human judgment, essential for nuanced reporting |
| Trust and Integrity | Can check for biases, enhance transparency with proper oversight | Spreads misinformation, undermines trust in media, especially on social media |

This comparison underlines the core tension: the same systems that promise speed, reach, and even fairness can also accelerate error and distrust. For a broader view of how AI and misinformation interact in real time, see this Washington Post analysis of AI and fake news.

Carlos Taylhardat
Carlos Taylhardat, publisher of 3 Narratives News, writes about global politics, technology, and culture through a dual-narrative lens. With over twenty years in communications and visual media, he advocates for transparent, balanced journalism that helps readers make informed decisions. Carlos comes from a family with a long tradition in journalism and diplomacy; his father, Carlos Alberto Taylhardat, was a Venezuelan journalist and diplomat recognized for his international work. This heritage, combined with his own professional background, informs the mission of 3 Narratives News: Two Sides. One Story. You Make the Third. For inquiries, he can be reached at [email protected].
