I Vibe-Coded An AI That Fact-Checks, Challenges, and Debates Any Article
I typed a simple prompt: “I’d like to build an app that fact-checks articles on the web.” And Claude built it.
Back in the fall of 2024, I presented a paper at the National Communication Association convention on AI agents as rhetorical actors. One of the core ideas was straightforward: what if an AI agent could evaluate claims in real time?
Picture this. You’re watching a presidential debate. As each candidate argues, an AI sidebar automatically fact-checks their claims — adding nuance, surfacing sources, or flagging outright falsehoods. Not after the debate, when CNN or Fox News offers their predictably slanted post-mortems, but right then, as the words leave the candidate’s mouth.
I thought this would be a great piece of software. I just didn’t have the engineering skills to build it.
From Theory to Working App
Fast forward to now. The no-code and AI-assisted development landscape has exploded — Replit, AI Studio, Lovable, Bolt, Claude Code. I’ve tried most of them. So I decided to take the concept for a spin, starting with something slightly more modest than live speech analysis: a tool that fact-checks articles as you read them on the web.
Working in Claude Cowork (I have the $100/month subscription, but the $20/month plan should work just as well), I typed a simple prompt: “I’d like to build an app that fact-checks articles on the web.”
And Claude built it.
The Iterative Process
The first version worked, but it only checked claims against Claude’s own training data — not against live web sources. So I asked it to add web search. That required plugging in an API key, which was a simple process.
Then came the real iteration. The web search was taking too long, sometimes timing out entirely. It was also burning through tens of thousands of tokens per query. But here’s what’s remarkable: I didn’t fix any of this in the code. I just told Claude what was happening.
“Hey, this is taking too long and timing out.”
Claude responded: “One of the problems is there’s no cap on how long the web query runs. I’ll add a cap. You’re also burning too many tokens, so let’s reduce the input size and control the output volume.”
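The two fixes Claude described are common patterns: race the slow call against a timer, and trim the input before it ever reaches the model. Since I never looked at the actual code, here is only a plausible sketch of what those fixes look like; the function names and the character budget are hypothetical.

```javascript
// Sketch of the two fixes: a hard cap on how long a web query may run,
// and a cap on input size to control token spend. Names and numbers
// are illustrative, not the extension's real code.

// Reject if a promise (e.g. a web-search call) takes longer than ms.
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms)
    ),
  ]);
}

// Trim article text to a rough character budget before sending it to
// the model, cutting at a sentence boundary where possible.
function capInput(text, maxChars = 12000) {
  if (text.length <= maxChars) return text;
  const clipped = text.slice(0, maxChars);
  const lastStop = clipped.lastIndexOf('. ');
  return lastStop > maxChars / 2 ? clipped.slice(0, lastStop + 1) : clipped;
}
```

Capping input characters is a crude stand-in for counting tokens, but it is enough to keep per-query cost bounded.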
When errors appeared, I’d paste them in and say, “This is the error. Please fix it.” And it would. After a couple of hours of this kind of back-and-forth conversation — without me ever looking at the code — the app was built.
What I ended up with is a Chrome extension. I even asked Claude how to install my own Chrome extension, and it walked me through that too.
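For readers curious what "a Chrome extension" means in practice: every extension starts from a `manifest.json` file. I haven't shown the actual manifest Claude generated, but a minimal Manifest V3 file for a side-panel tool of this kind looks roughly like this (names and paths are illustrative):

```json
{
  "manifest_version": 3,
  "name": "Article Fact-Checker",
  "version": "0.1.0",
  "description": "On-demand fact-checking panel for web articles.",
  "permissions": ["activeTab", "sidePanel", "storage"],
  "side_panel": { "default_path": "panel.html" },
  "action": { "default_title": "Analyze Page" }
}
```

Installing your own extension is the part Claude walked me through: open chrome://extensions, enable Developer mode, and choose "Load unpacked" on the folder containing the manifest.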
How It Works
Let me walk through a real example. I pulled up an article on The Hill — “Colleges Struggle to Keep Up with Growing Mental Health Problems.”
I click the fact-checker extension icon in Chrome, which opens a panel on the right side of the browser. Then I hit “Analyze Page.” (An early version automatically analyzed every page I visited, but that burned through tokens at an unsustainable rate. For now, it’s on-demand.)
The analysis takes roughly 20 to 60 seconds. Here’s what it produces:
Page summary and initial analysis — an overview of the article’s central claims with supporting and opposing evidence, plus links to additional web resources for further reading.
Subclaim evaluation — it doesn’t just assess the article’s headline claim. It drills into specific assertions within the text. For example, the article claims that anxiety and depression prevalence rates among adults in the general population run 6–7% compared to higher rates among college students. The tool flagged this as “mixed and disputed,” then showed me the supporting evidence, the opposing evidence, and its overall assessment.
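I haven't inspected how the tool actually decides between verdicts, but conceptually a label like "mixed and disputed" falls out of weighing supporting against opposing evidence. A hypothetical sketch, with illustrative thresholds:

```javascript
// Hypothetical sketch of how a subclaim verdict could be derived from
// retrieved evidence. The extension's real logic wasn't shown; the
// labels and thresholds here are illustrative only.
function assessSubclaim(supporting, opposing) {
  const total = supporting.length + opposing.length;
  if (total === 0) return 'insufficient evidence';
  const ratio = supporting.length / total;
  if (ratio >= 0.8) return 'well supported';
  if (ratio <= 0.2) return 'largely disputed';
  return 'mixed and disputed';
}
```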
Deep dive — for any subclaim, I can click “Deep Dive” for a more thorough analysis. This takes up to a minute but returns detailed confidence reasoning, key sources, counterarguments, historical context, and caveats.
That last part is what I find most valuable. Media articles tend to present claims as binary — true or false. But reality is full of nuance. Is this a correlation or a causation study? How large was the sample? Is this a recent phenomenon with limited research? The deep dive surfaces exactly the kind of context that gets lost in an 800-word news piece.
The software isn’t perfect yet. Deep dives sometimes time out. But across five or six different articles, the core functionality works reliably.
Who Is This For?
Anyone who reads the news and wants more than a headline-level understanding. Click the extension, get immediate context, follow the source links, form your own view.
Debaters evaluating claims in articles about their topic. Counterarguments appear at a click, along with links for deeper research and supporting or opposing evidence.
Students who want the full picture. We hear constantly that professors are “too liberal” or “too conservative.” Students can run their assigned readings through the tool and see arguments and counterarguments for themselves.
Teachers who don’t want to be accused of one-sided instruction. Make a tool like this available to students and encourage them to use it on class materials.
Anyone questioning the framing of textbooks. A Civil War history written from one perspective? Click and let the AI investigate the claims and present additional context or the other side of the story.
What About AI Bias?
A fair objection. Some people have accused Claude of having a liberal bias — the Department of War has reportedly considered removing it from government systems over concerns about being “woke.” But here’s the thing: the extension works with any API key. You can swap in Grok (which some consider more conservative), OpenAI, Gemini, or any other model.
You could even configure the app to use multiple models simultaneously and compare their outputs. Or have models debate each other about which claims are true or false — something we’ve been exploring extensively in our work on AI pluralism. (Our new book on the topic, edited by Dr. Anand Rao, is available at AI-Pluralism.com.)
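A multi-model setup ultimately reduces to a simple comparison step: run the same claim past each model, then report where they agree and where they split. A sketch of that comparison, assuming each entry comes from a separate API call with that provider's key (model names and verdict labels are illustrative):

```javascript
// Compare verdicts from several models on one claim. Returns the
// consensus verdict if all models agree, otherwise null plus a
// per-verdict breakdown. Inputs here are hypothetical.
function compareVerdicts(results) {
  const counts = {};
  for (const { verdict } of results) {
    counts[verdict] = (counts[verdict] || 0) + 1;
  }
  const verdicts = Object.keys(counts);
  return {
    consensus: verdicts.length === 1 ? verdicts[0] : null,
    breakdown: counts,
  };
}
```

A disagreement (a null consensus) is itself useful information: it flags exactly the claims where a reader should follow the source links rather than trust any single model.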
Yes, using a single preferred model is a bit like choosing CNN over Fox News. But even single-model outputs are less biased than most current news and social media coverage, and Claude presents both sides of each claim. And the multi-model approach offers something no single news outlet can: genuine intellectual pluralism on demand.
What’s Next
This is just the beginning. I’m planning to build desktop apps, mobile apps, and tools that can analyze different types of debate arguments. The concept also applies beyond news — imagine a fact-checker running on social media posts.
And there’s a bigger vision here for debate education specifically. The first opponent every debater should face is the AI. Debate it, find the strengths and weaknesses of your arguments, and then go into the round against humans. You should always be debating humans — but training against AI first is a powerful way to sharpen your arguments before they hit the real world.
The most exciting part of all this? Building the tool required no coding knowledge. Just a clear idea, a willingness to iterate through conversation, and a couple of hours. The barrier between “I wish this existed” and “I built this” has never been lower.





