The Crisis: Students Need to Learn Different Stuff, and I Don't Think Most Educators Understand That
I’ve frequently noted, since my first AI presentation to a school in April 2023, that there are two boxes: how to use AI in the current curriculum, and how to change the curriculum so school is still relevant in a world where machines will be able to do all or nearly all knowledge work.
I’ve tried to stress the importance of number two because I think the skills needed to succeed in an AI world, both in work and in entrepreneurship, are changing radically. I also think a complete redefinition of our economic and political systems is likely. But, to be honest, I haven’t done a very good job of convincing people of this.
But now there are a couple of things that may make this clear.
First, future success in work or entrepreneurship will be determined by how well you manage agent teams.
People have been saying this for at least three years, but a new column by Stanford’s Erik Brynjolfsson in Time Magazine makes it clear.
This is the way I see it: anyone who knows how to manage teams of multiple agents will make a lot of money, or at the very least survive, in an AI world. Will they need a college education for this? No. Many high school students and first-year college students are already doing it.
Now compare this individual to one who has a typical four-year degree, writes great papers, never makes grammar mistakes, and produces error-free MLA or APA bibliographies, but cannot orchestrate teams of agents. In all or nearly all instances, this person will be unemployable.
This does not mean that college is not valuable. In college, students can learn more about AI, how to run a business, how to build product agent ecosystems, and so on. My point is that if they do not learn these things, they will be worse off than a high school graduate (and maybe even a dropout) who can do them.
Many students will again start this semester using, or not using, AI to learn the equivalent of how to ride horses in the industrial era.
Second, the world is changing, and K-12 education seems to be OK with students not understanding that.
Last year, one of my 9th-grade debaters wrote that she really liked debate because she was able to learn about what is going on in the world. Of course, this made me very happy, but it also made me quite sad: kids go to school all day and aren’t even exposed to what is going on in the world. It’s like they are largely learning a curriculum that was standardized for yesteryear, all fueled by an educational-industrial complex making billions on a standardized-test infrastructure.
The other point Brynjolfsson makes is that the most important question facing society is who gets to decide what AI does.
Today, we’re witnessing the most dramatic wealth concentration in history, at computational speed, with zero democratic input. Five companies — OpenAI, Anthropic, Google DeepMind, Meta, Microsoft — control AGI infrastructure through insurmountable moats: training costs ranging from $350 million to $100 billion, permanent data advantages, and a 4,000-plus-person talent ecosystem in which individuals command compensation in the hundreds of millions of dollars.
Every sector is being restructured so economic value flows through AI systems they control—legal research, medical diagnosis, software development, creative production, education. It’s simply digital feudalism: Nvidia hits $4 trillion+, OpenAI reaches $750 billion, while the drivers, writers, and artists whose work enables this capture zero value. Meanwhile AI is already weaponized: autonomous targeting in Ukraine, algorithmic occupation in Gaza, AI-enabled warfare driving U.S.-China military competition through systems that execute kill chains faster than human ethical intervention allows.
Nobody voted on any of this. No referendum on replacing radiologists, authorizing autonomous weapons, or mediating children’s education through AI. The governance model is functionally autocratic: self-regulation by labs making non-binding promises while democratic institutions operate on timescales (months for hearings, years for treaties) that make intervention impossible when capabilities double every 6-18 months. A handful of CEOs and defense contractors decide which industries die, which jobs disappear, which weapons deploy, which voices get amplified—economic, military, and epistemic autocracy serving venture capital’s “move fast and break things” and national security’s demand for speed over consent. Your students are being prepared for a world already built without them, where the infrastructure mediating their entire lives is being assembled right now and nobody asked what they wanted.
As Dr. Amarda Shehu, the VP for AI at George Mason University, has noted, the A(G)I world is being built without our input because schools are refusing to engage with it.
Some Hope
This weekend, most of the top policy debate teams in the US, and even one from Taiwan, gathered to debate.
In round 1, two of the best teams debated each other. The affirmative argued the US needs to be economically and militarily strong to deter an attack by China on Taiwan.
The negative argued that such representations of China were driven by racism and that it was hypocritical for the affirmative to argue this because the US had just attacked Venezuela.
Within 90 minutes (the length of the debate), high school students synthesized Trump’s January 3, 2026 Venezuela escalation, Taiwan Strait military buildups, semiconductor supply chain vulnerabilities, Chinese naval exercises, and competing international law frameworks—not as memorized facts but as weapons in intellectual combat.
The negative deployed graduate-level constructivist IR theory, arguing that China isn’t objectively threatening but rather constructed as threatening through racist orientalist discourse to justify US militarism. This forced the affirmative into genuine epistemological crisis: How do we know China is a threat? What evidence counts? Can geopolitical analysis escape the observer’s ideological position?
Then the killshot—the negative weaponized current events the affirmative didn’t prepare for, forcing them to defend ongoing US military action in Venezuela while arguing for more military readiness against China. No textbook gives you that problem. No teacher’s answer key solves it. No ChatGPT prompt generates the solution. Students must take a position on genuinely hard ethical questions—Is racism in foreign policy discourse worse than potential war? Can you oppose both US imperialism and Chinese authoritarianism simultaneously?—and live with the consequences when the judge decides their reasoning failed.
Students are executing metacognition under fire — simultaneously advancing arguments, anticipating responses, monitoring their own biases, evaluating whether evidence actually supports claims, recognizing motivated reasoning, and adjusting strategy based on what’s winning. They’re identifying logical fallacies in real time (false equivalence? tu quoque?), stress-testing causal chains (does military strength deter, or provoke security dilemmas?), evaluating source credibility (is this think tank funded by defense contractors?), and defending against rhetorical attacks that weaponize comparisons. The judge isn’t grading right answers — she’s evaluating the quality of reasoning under adversarial pressure. This is distributed cognitive training that can’t be automated: learning to think with and against other intelligences — opponents, partners, evidence, judges — in ways that build resistance to manipulation.
This is why debate—with no limits on what students can argue, no ideological guardrails, no “acceptable positions”—is preparation for resisting autocratic AI architecture.
While five companies reconstruct the world without democratic input, imposing values through foundation models and acceptable use policies, these students are learning the only skills that matter: how to contest any construction of reality, how to identify whose interests are served by which arguments, how to maintain adversarial reasoning when AI generates infinite superficially-plausible claims and someone must decide which ones are true. They’re not just learning academic skills. They’re learning to be citizens capable of fighting for human control instead of subjects waiting to be absorbed into someone else’s empire.
Debate education isn’t enrichment anymore—it’s training citizens capable of contesting this reconstruction and fighting for human control, rather than preparing subjects for someone else’s empire.
What Will Be Remembered and What Really Matters
What will be remembered is who wins this tournament, not who won a Round 1 match.
But what actually matters is that this debate happened at all, as we face enormous structural and social change, with AI reconstructing every institution, and even individuals, without consent.
We need a million more of the confrontations that took place this morning. We need a million more students and adults learning to argue both sides of whether US military power is deterrence or imperialism, whether constructivist epistemology undermines security analysis or reveals its hidden biases.
We don't need a million more blue book essays. We don't need even one more regurgitation of what the textbook said. We need citizens trained to fight for human control in real time, under pressure, against opponents as smart as they are. That's what happened this weekend. That's what we're losing by treating debate as enrichment instead of infrastructure for the AI world.