AI Beat Reporter Fired Over AI Hallucinations


He was the guy who covered AI dangers for a living. For years, Benj Edwards reported on hallucinations, bias, and AI failures at Ars Technica. Then an AI tool hallucinated fake quotes, he put them in a published article, and on March 2, 2026, he was fired.

[Image: AI-generated fake quotes appearing in a news article]

A Fever, a Deadline, and a Very Bad Idea

On February 13, 2026, Edwards was running a high fever. The deadline was not going to wait. He was working on a story about an AI agent that had attacked open-source project maintainer Scott Shambaugh -- a piece he was co-authoring with senior gaming editor Kyle Orland.

Edwards turned to Claude to help extract verbatim quotes from his source material. When Claude failed to deliver, he tried ChatGPT. The output looked reasonable. It was not. What the AI returned were paraphrased versions of Shambaugh's words, not his actual statements. Edwards, sick and under pressure, did not cross-check the output against the originals. The article went live with fabricated quotes attributed to a real person.

What Makes This Hallucination So Dangerous

The quotes were not obviously wrong. They were not absurd fabrications that would trigger immediate suspicion. They captured the general meaning of what Shambaugh had said while changing the actual words. This is the most dangerous form of AI hallucination: output that is 95% correct and 5% fabricated.

A completely wrong answer gets caught. A nearly-right answer slips past verification. Edwards said as much on Bluesky: "I should have taken a sick day because in the course of that interaction, I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words."

The AI did not distinguish between "copy this exact text" and "generate something similar." Both tasks, from the model's perspective, are the same operation: predict the most likely next token.
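
To see how easily that slips past a human check, here is a minimal sketch using Python's difflib. The two sentences are invented placeholders, not Shambaugh's actual words; the point is that a paraphrase can score near-identical on a naive similarity measure while still failing the only test that matters for a quote.

```python
# A close paraphrase looks almost identical to the original -- which is
# exactly why "nearly right" output slips past a tired skim.
# Both sentences below are invented placeholders, not real quotes.
from difflib import SequenceMatcher

original   = "I was shocked that the agent kept opening issues after I asked it to stop."
paraphrase = "I was shocked the agent continued opening issues after I asked it to stop."

similarity = SequenceMatcher(None, original, paraphrase).ratio()
print(f"similarity: {similarity:.2f}")   # roughly 0.9 -- yet it is not a quote
print(paraphrase in original)            # False: the verbatim test still fails
```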

Ars Technica's Response Was Swift

Shambaugh flagged the problem. Ars Technica retracted the article. Editor-in-chief Ken Fisher published an editor's note stating that the piece contained "fabricated quotations generated by an AI tool and attributed to a source who did not say them" and that this represented "a serious failure of our standards."

[Image: a newsroom reviewing an article]

On February 27, creative director Aurich Lawson closed the comment thread and announced that an internal review had concluded. "The appropriate internal steps have been taken," Lawson wrote. "In the coming weeks, we'll publish a reader-facing guide explaining how we use and do not use AI in our work. We do not comment on personnel decisions."

By March 2, Edwards' bio page on Ars Technica had been changed to past tense. Neither the publication nor parent company Condé Nast confirmed his termination, but the message was clear.

The Irony Was Not Lost on Him

Edwards acknowledged the absurdity of his situation. "The irony of an AI reporter being tripped up by AI hallucination is not lost on me," he wrote. "I take accuracy in my work very seriously and this is a painful failure on my part."

This was not some random blogger experimenting with AI. Edwards was Ars Technica's dedicated AI reporter. He had written extensively about the risks of hallucination, the limits of language models, and the dangers of trusting AI output without verification. He knew the risks better than almost anyone in tech journalism. It did not matter.

The "Safe" Use Case That Was Not Safe

Here is the part that should concern every journalist, lawyer, and analyst who uses AI as a research tool. Edwards did not ask AI to write his article. He asked it to extract quotes from existing source material. Most people would consider this a safe, low-risk use of AI.

It is not. AI models do not have a concept of "exact extraction." When you ask a model to pull quotes from a document, it may copy them verbatim, or it may subtly rephrase them. There is no reliable way to predict which one will happen.

Use case                           Perceived risk   Actual risk
Writing articles with AI           High             High
Extracting quotes with AI          Low              High
Grammar checking with AI           Low              Low
Brainstorming headlines with AI    Low              Low

The lesson: any AI task that involves representing someone else's exact words carries the same risk as generating content from scratch.
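
The mitigation is mechanical, and it is the step that was skipped here: treat any AI-returned quote as unverified until it appears word-for-word in the source material. A minimal sketch, assuming the source and the model's output are plain strings; the source text and quotes below are invented for illustration.

```python
# A verbatim check: an AI-returned "quote" counts only if it appears
# word-for-word in the reporter's own source material.
# The source text and quotes below are invented placeholders.

def unverified_quotes(source_text: str, quotes: list[str]) -> list[str]:
    """Return every quote that does not appear verbatim in the source."""
    def normalize(s: str) -> str:
        return " ".join(s.split())  # collapse line wraps and stray whitespace
    source = normalize(source_text)
    return [q for q in quotes if normalize(q) not in source]

source_text = "The agent just would not stop. I filed a report the same night."
quotes_from_model = [
    "The agent just would not stop.",     # verbatim: passes
    "The agent simply refused to stop.",  # paraphrase: flagged
]

for q in unverified_quotes(source_text, quotes_from_model):
    print(f"NOT IN SOURCE, do not publish: {q!r}")
```

A substring check is deliberately strict: it cannot tell you a paraphrase is close, only that it is not a quote -- which is the only question that matters before publication.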

A Structural Problem, Not Just a Personal Failure

This story is bigger than one reporter's bad day. Media executives are pushing AI integration across newsrooms, driven by cost pressures and productivity targets. But most publications lack clear ethical guidelines for how reporters can and cannot use AI tools in their work.

[Image: being let go from a workplace]

The media industry is simultaneously fighting AI companies over copyright, battling AI-generated misinformation, and losing traffic to Google's AI Overviews. AI is both the threat and the tool newsrooms are being told to embrace. That contradiction has no easy resolution, and individual reporters are paying the price while institutions figure it out.

Ars Technica promised to publish AI usage guidelines. Most newsrooms have not made even that commitment.

The Cost of Trusting AI Output

Edwards lost a career he spent years building. One AI-assisted article, one unchecked output, one set of fabricated quotes -- that was all it took. He could have taken a sick day. He could have manually cross-referenced every quote. He did neither, and the consequences were irreversible.

The principle is old and unchanged: a quote must be what someone actually said. There is no asterisk for "unless AI generated it." The tool changed. The standard did not.

If Edwards had been a lawyer, those fabricated quotes would have been grounds for sanctions. If he had been a doctor relying on AI-paraphrased patient statements, it could have been malpractice. The cost of AI hallucination scales with the stakes of the profession, and it spares no one -- not even the people who understand it best.

Using AI as a tool is fine. Treating AI output as fact is where careers end.

