A Note on "Critique the AI Output" Assignments
We should also use AI to critique our own work.
There’s a growing trend in classrooms: assignments where students
1. Input a prompt (and share it with the professor or class)
2. Share the AI’s output
3. Critique the output
4. Share their critique
It’s a smart exercise, but too often critical details are missing. Without them, no one can replicate or meaningfully compare results.
1. Always Specify the AI and Model
“ChatGPT” isn’t enough. Which model did you use, and on the paid tier or the free tier?
If you used Gemini, Grok, Claude, or another system, say which version (Gemini 2.5 Flash, Gemini 2.5 Pro, Gemini Ultra, Grok 4.0, Grok 4.0 Turbo, Claude 4.0, Claude 4.1, and so on).
If you used a platform like Perplexity, You.com, or Boodlebox.ai, list which AI or combination of AIs you selected. A complete line might read: “Perplexity, paid tier, with Claude 4.0 selected as the underlying model.”
2. Disclose Activated Features or Modes
Many tools now have options that change how they think and search. Examples:
- Deep Research (extended multi-source searching)
- Agent (multi-step task execution)
- Web Search (real-time results)
- Think Longer (extended, more detailed reasoning)
These settings can dramatically impact quality, depth, and accuracy.
Different models and settings can produce wildly different outputs, especially for academic work. If you don’t document them, no one can fairly judge the AI’s performance, or your critique of it.
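If a class wants a consistent format for all of these details, a short disclosure header attached to every submission does the job. Here is a minimal sketch in Python; the field names and layout are my own suggestion, not any standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIDisclosure:
    """Minimal record of how an AI output was produced (illustrative, not a standard)."""
    system: str                 # e.g., "ChatGPT", "Gemini", "Perplexity"
    model: str                  # the exact model/version selected
    tier: str                   # "free" or "paid"
    modes: list[str] = field(default_factory=list)  # e.g., ["Deep Research"]
    date: str = ""              # when the prompt was run

    def header(self) -> str:
        # One-line summary a student could paste above their critique.
        modes = ", ".join(self.modes) if self.modes else "none"
        return (f"AI: {self.system} | Model: {self.model} | Tier: {self.tier} | "
                f"Modes: {modes} | Date: {self.date}")

# Example of a complete disclosure:
print(AIDisclosure("Gemini", "Gemini 2.5 Pro", "paid",
                   ["Deep Research"], "2025-09-01").header())
```

Printed as a single line, the header gives every reader exactly what they need to replicate the run or weigh the critique against it.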
Accuracy is improving fast, and it may someday reach 99.9%, but the skill of questioning and verifying output will always be essential. We shouldn’t take AI at face value any more than we should accept human statements without scrutiny.
Bonus: Let AI Critique Us
This isn’t just about critiquing AI: we can upload our own work to these same systems for feedback. The more we critique both AI and ourselves, the stronger our thinking and work become.