How 3 Narratives News uses AI Search Assistance

Readers deserve to know how the work is made. This page is our plain-language explanation of where artificial intelligence fits into our reporting process, where it doesn’t, and why we believe transparency is a feature, not a disclaimer. We don’t outsource judgment to machines; we use tools to help humans do better journalism, faster and with fewer blind spots.

AI Reporters

What AI is good for in our newsroom is the unglamorous work that often hides behind a clean story: sorting long documents, transcribing interviews, clustering themes across dozens of sources, building neutral timelines, checking names and dates for consistency, generating accessibility assets like alt-text drafts, and catching grammar slips. These are accelerators. They don’t decide the angle, and they don’t write our verdicts, because we don’t publish verdicts. Every article at 3 Narratives presents two fully formed sides and a quieter, human layer beneath them; that structure is editorial, not algorithmic.

Where we never use AI is equally important. We don’t fabricate quotes or sources. We don’t invent people, places, numbers, or composite “witnesses.” We don’t publish AI-generated images to depict real news events unless they are clearly labelled as illustrations, and we never use synthetic audio to impersonate real voices. We don’t allow chatbots to message interviewees on our behalf without disclosure and consent. We don’t use automated systems to nudge emotions or harvest personal data. If a claim cannot be verified in public records or on-the-record reporting, it does not go live.

Our workflow treats AI like a calculator for words. A tool may help sort a stack of testimony, but a reporter still reads the transcripts. A tool may propose a summary of a 200-page filing, but a human checks the passages against the original PDF. A tool may spot anomalies in dates or spellings, but a human calls the office, emails the press desk, and reads the footnotes. When an assistant suggests a fact, we require a live link to a credible source; if we can’t trace it, we remove it. The goal isn’t to trust machines; it’s to reduce human error and make our time with sources count.
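
To make the “live link” rule concrete, here is a minimal sketch of the kind of check a reporter’s tooling might run. The list of citations is hypothetical; the script only confirms that each cited URL still resolves, and a human still reads the source before anything goes live.

```python
import requests

def link_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL responds with a non-error status."""
    try:
        # Some servers reject HEAD requests, so fall back to a lightweight GET.
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        if resp.status_code >= 400:
            resp = requests.get(url, stream=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Hypothetical citations attached to an AI-suggested fact.
citations = ["https://example.org/official-report.pdf"]
unverified = [url for url in citations if not link_resolves(url)]
if unverified:
    print("Remove or re-report facts citing:", unverified)
```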

To keep faith with readers, we label the help. Each news article includes a small disclosure box that states the work was reported and edited by humans, with AI used for organisation and copyedits, and we link here for details. We also maintain a public log of fixes so you can see what changed and when, at /corrections/. If we update a piece with new facts, we timestamp the update inside the story.

Because models can be confidently wrong, we build in friction. Before publication, we run a four-part check: names and spellings, dates and time zones, numerators and denominators in percentages, and quote integrity with context. We compare any AI-generated summary to the underlying document, and if numbers are involved, we recalculate them by hand. We favour primary material (transcripts, filings, official datasets) and read each page directly. When secondary sources are used, we prefer established standards of practice from organisations whose editorial rules are public and testable, like Reuters’ Trust Principles, the SPJ Code of Ethics, and the AP’s News Values & Principles. These aren’t decorations; they’re the guardrails we measure ourselves against.
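
As an illustration of the numbers part of that check, a small script can recompute every percentage from its stated numerator and denominator and flag anything that drifts. The claims and tolerance below are hypothetical placeholders, not real reporting.

```python
# Minimal sketch: recompute percentages from their stated parts.
claims = [
    {"text": "38% of respondents", "numerator": 190, "denominator": 500, "stated_pct": 38.0},
    {"text": "12% rise in permits", "numerator": 56, "denominator": 430, "stated_pct": 12.0},
]

TOLERANCE = 0.5  # percentage points; anything larger goes back to the reporter

for claim in claims:
    recomputed = 100.0 * claim["numerator"] / claim["denominator"]
    if abs(recomputed - claim["stated_pct"]) > TOLERANCE:
        print(f"Check: '{claim['text']}' recomputes to {recomputed:.1f}%")
```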

We also try to be useful beyond the article. Our structure of Narrative 1, Narrative 2, and the Silent Story leans on AI to surface patterns without flattening disagreement. For example, a clustering tool may show that interviews from parents and pensioners share a fear of “being priced out,” while business owners talk about “survival margins.” The machine can group the language; only a reporter can sit with a shopkeeper and ask what “survival” means on a Tuesday afternoon. The resulting story preserves both voices and the quiet pattern between them, which is our value to you.
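
For readers curious what a “clustering tool” means in practice, here is a minimal sketch using off-the-shelf text clustering (TF-IDF vectors grouped with k-means). The interview snippets are invented placeholders, and in our workflow a reporter still reads every transcript behind the clusters.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented placeholder snippets standing in for interview transcripts.
snippets = [
    "We are afraid of being priced out of the neighbourhood.",
    "My pension does not stretch to these rents any more.",
    "Our shop is running on survival margins this quarter.",
    "If footfall drops again, the business will not survive.",
]

# Turn the text into TF-IDF vectors and group similar language.
vectors = TfidfVectorizer(stop_words="english").fit_transform(snippets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, snippet in zip(labels, snippets):
    print(label, snippet)  # a reporter reviews the groups, not the machine
```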

Bias is a human problem that tools can worsen. To blunt that risk, we rotate sources across geographies and outlets, balance citations between institutions and community voices, and record how we found each link. We avoid charged labels unless they appear in a quote. We write in concrete language and show our receipts with links. If a model suggests a framing (“crackdown,” “uprising,” “terrorist,” “freedom fighter”), we replace it with specific, observable facts: who did what, when, to whom, with what stated justification. If we quote a charged term, we attribute it to the speaker and include the counter-characterization when relevant.

Privacy and safety matter. We do not paste sensitive personally identifiable information (PII) from sources into third-party tools. We redact or paraphrase when we must use a system to help with structure. We store notes in our own workspace, and we share transcriptions only with the reporting team. If a source asks how we used AI in a specific piece, we tell them. If a reader emails with a concern, we answer. Accountability is not a badge; it’s a habit.
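
When we do need an outside system’s help with structure, a simple pre-processing pass can strip obvious identifiers first. This is a rough sketch with hypothetical patterns and an invented note; it is not exhaustive and is no substitute for a human redaction pass.

```python
import re

# Rough, non-exhaustive patterns; real redaction also needs human review.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers before text leaves our own workspace."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

note = "Source can be reached at source@example.org or +44 20 7946 0000."
print(redact(note))  # Source can be reached at [EMAIL] or [PHONE].
```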

We’re also experimenting in the open. Our LLMs.txt file tells AI crawlers how to credit and link to our work, and our Categories Hub makes it easier for both readers and machines to understand our topical map. We hand-author (and validate) the structured data on our articles so that search engines and AI systems can interpret the story correctly. These are small, concrete steps that compound into better discovery and fewer misunderstandings about our intent.
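
As a concrete example of that hand-authored structured data, here is a minimal sketch that emits a schema.org NewsArticle block as JSON-LD. The headline, dates, and names are placeholders, not a real story; each published article carries its own metadata.

```python
import json

# Placeholder values; real articles carry their own metadata.
article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline for an illustrative story",
    "datePublished": "2025-11-12T09:00:00Z",
    "dateModified": "2025-11-12T14:30:00Z",
    "author": {"@type": "Person", "name": "Example Reporter"},
    "publisher": {"@type": "NewsMediaOrganization", "name": "3 Narratives News"},
}

# The body of the page's <script type="application/ld+json"> element.
print(json.dumps(article, indent=2))
```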

Short version for busy readers: humans do the journalism; AI helps with chores. We verify everything we publish against named sources. We never use AI to fabricate or to manipulate. We label our process, fix mistakes in public, and invite you to hold us to our own standards.

If you spot an error or want to know more about how a specific story was made, email us at [email protected]. You can also browse our latest pieces from the top of the site or start with our explainer on media trust at Legacy vs. Alternative Media — The Case for Two Truths. Thank you for reading—and for expecting more of your news. So do we.

Last updated: November 12, 2025