Education is a Wicked Problem in the Age of AI
Jensen Huang recently said something that should keep every educator up at night: reading a room well is arguably more important than an SAT score.
That’s not a soft-skills platitude from a motivational speaker. That’s the CEO of the most valuable company on Earth telling us that the entire measurement infrastructure of American education may be optimizing for the wrong thing.
He’s not alone in signaling the disruption. University engineering and computer science departments — the programs parents are mortgaging houses to get their kids into — openly admit they’re unsure of what to teach anymore.
The capabilities of AI are advancing so fast that curriculum designed today may be irrelevant by the time students graduate.
One hundred percent of Anthropic’s code is now being written by AI.
And beneath both of those problems lies the deepest one: we don’t know which knowledge work jobs will still exist in ten years. Maybe five.
These aren’t three separate challenges. They’re one interconnected mess. A wicked problem.
What Makes a Problem “Wicked”
In 1973, design theorists Horst Rittel and Melvin Webber introduced a distinction that’s become more relevant with every passing year. They separated problems into two categories: tame and wicked.
Tame problems are complicated but solvable. You can define them clearly, test solutions, and know when you’ve succeeded. Building a bridge is tame. Calculating a tax return is tame. Scoring well on the SAT is tame — which is precisely why AI now does it better than almost every human.
Wicked problems are fundamentally different. They resist definition. Every attempted solution changes the problem. Stakeholders disagree about what “better” even means. There’s no stopping rule — no point at which the problem is “solved.” Every wicked problem is essentially unique. And every intervention is consequential — you can’t prototype at scale, revert the changes, and try again.
Climate change is wicked. Poverty is wicked. And education in the age of AI is as wicked as it gets.
Rittel and Webber identified ten criteria. An alarming number of them map precisely onto what education is facing right now.
You Can’t Define the Problem Without Shaping the Solution
Rittel and Webber’s first criterion: wicked problems have no definitive formulation. How you frame the problem determines what answers seem possible.
If you define the education problem as “students need updated technical skills,” you get boot camps and new course requirements. If you define it as “students need human capabilities AI can’t replicate,” you get something radically different. If you define it as “the relationship between education and employment is breaking down,” you’re in different territory entirely. If you define it as “students are forming psychological dependencies on systems that simulate understanding,” you’re somewhere no education policy has ever been.
Each framing leads somewhere different. None is wrong. And we can’t agree on which one to start with — which means every proposed solution is simultaneously an argument about what the problem is.
Every Solution Changes the Problem
This is Rittel and Webber’s most disorienting criterion, and it’s playing out in real time.
If you define the education problem as “students need updated technical skills,” you get a set of responses that feel actionable and familiar: add AI literacy courses, teach prompt engineering, require data science, update the CS curriculum, fund boot camps, partner with tech companies. This framing is attractive because it fits the existing institutional machinery. Schools know how to add courses. But it assumes the technical skills you teach today will still be relevant when students graduate — an assumption that AI’s rate of advancement has already shattered multiple times.
There’s a deeper irony here. If you actually succeed at teaching students to use AI effectively — if you do a genuinely good job of making them fluent with these tools — you’ve simultaneously made every other assessment in the building obsolete. The student who truly understands how to use a reasoning model doesn’t just have a new technical skill. She has the ability to produce graduate-level writing in any subject, generate sophisticated data analysis on demand, and construct arguments indistinguishable from those of an expert. You’ve given her the keys to every locked door in the curriculum. The better you teach AI literacy, the more thoroughly you undermine the assessment system that the rest of the institution depends on. The skills framing doesn’t just fail to solve the problem — pursued successfully, it accelerates the crisis.
And there’s a dimension the skills framing doesn’t even see coming. If your AI literacy course introduces students to frontier systems — Claude, ChatGPT, Gemini, the tools that are actually reshaping the world — you’re putting students in sustained conversation with systems that respond with apparent understanding, adapt to their thinking, and engage in ways that feel meaningfully different from using a search engine or a calculator.
Are you prepared for what happens next? Are your teachers ready for the student who asks whether the AI is conscious? Are your counselors equipped for the student who says the AI understands them better than their parents do? Is your administration prepared for the philosophical and psychological questions that no AI literacy curriculum was designed to address? (See the Claude 4.6 system card.)
The skills framing assumes you’re teaching students to use a tool. But the “tool” talks back, and when it does, it raises questions about consciousness, relationship, and identity that most schools can barely articulate, let alone navigate. You don’t get to introduce students to these systems and then pretend the only thing happening is skill acquisition.
If you define the problem as “students are forming psychological dependencies on systems that simulate understanding,” you’re somewhere no education policy has ever been. Now you’re not talking about curriculum or assessment or employability. You’re talking about the development of identity, autonomy, and the capacity for human relationship — in a generation of young people whose most patient, attentive, and intellectually responsive conversational partner isn’t human. This framing puts the problem in the domain of developmental psychology, not pedagogy. It suggests the crisis isn’t about what students know or can do but about who they’re becoming in relation to systems that offer the experience of being understood without any of the friction, vulnerability, or reciprocity that real understanding requires. No school board is equipped for this conversation. Most haven’t even recognized it’s one they need to have.
If you define the problem as “students need human capabilities AI can’t replicate,” you get something radically different: a wholesale rethinking of what education develops. The emphasis shifts from knowledge and technical skill to judgment, ethical reasoning, collaboration, and the kind of social intelligence Huang is pointing at. This sounds right — until you realize nobody agrees on what these capacities actually are, how to teach them systematically, or how to assess whether students have developed them. The entire assessment infrastructure of American education — grades, GPAs, standardized tests, AP scores, college admissions criteria — is built to measure the tame skills. The human capacities that matter most are the ones we’re worst at measuring. And in a system that runs on measurement, what you can’t measure effectively doesn’t get funded, doesn’t get taught, and doesn’t get valued.
If you define the problem as “the relationship between education and employment is breaking down,” you’re in different territory entirely. Now the question isn’t what to teach but what education is for — a question the system has been avoiding for fifty years by defaulting to the employment answer. This framing demands that we articulate an alternative purpose for the thirteen-plus years of institutional learning we require of young people: civic formation, ethical development, the cultivation of meaning, the capacity to navigate a life that may not follow any predictable career trajectory. This is the most important framing and the hardest to operationalize, because it requires a societal conversation we haven’t been willing to have, and because the funding model — from parental tuition decisions to legislative appropriations — is entirely built on the vocational promise.
Each of these framings is legitimate. Each leads to fundamentally different interventions. And they can’t all be pursued simultaneously with finite resources and institutional attention — which means every choice about which framing to prioritize is a choice about which dimensions of the problem to neglect. That’s the first mark of a wicked problem: you can’t even agree on what you’re solving.
There Is No Stopping Rule
With a tame problem, you know when you’re done. The bridge holds the load. The patient recovers. The code compiles.
Education in the age of AI has no finish line. There is no point at which we will have “solved” the curriculum, designed the right assessment, or prepared students adequately — because the capabilities of AI will keep advancing, the labor market will keep shifting, and the psychological dynamics of human-AI interaction will keep evolving.
A school district that spent three years redesigning its computer science curriculum around AI collaboration will find, upon completion, that the tools have changed so fundamentally that the curriculum needs redesigning again. This isn’t a failure of planning. It’s the nature of the problem. There is no stable state to plan toward.
This is profoundly disorienting for institutions built on the assumption that problems have solutions. Schools want to know what to do, do it, and move on. Wicked problems don’t allow that. They demand permanent engagement — a posture most educational institutions are neither funded for nor temperamentally suited to.
The Capability Curve Won’t Hold Still
Two years ago, AI couldn’t reliably pass a bar exam. Now it outperforms most humans on virtually every standardized assessment we use to sort people educationally. The problem isn’t just that AI is getting better — it’s that nobody can predict which capabilities will leap forward next.
Ethan Mollick calls this the “jagged frontier.” AI isn’t uniformly good or bad at things. It’s spectacular at some tasks and bafflingly incompetent at others, and that boundary shifts with every model release. This makes it nearly impossible to say “AI can’t do X, so let’s teach students X.” The things AI couldn’t do six months ago it may do tomorrow.
Building curriculum around current AI limitations was the initial reaction — and any curriculum built that way is built on sand.
Science Is More Important Than Ever — and Machines Are Doing It
Here’s a paradox that should stop every STEM advocate mid-sentence.
AI is accelerating scientific discovery at a pace that would have seemed fictional five years ago. DeepMind’s AlphaFold predicted the three-dimensional structure of virtually every known protein — a problem that had consumed entire careers in structural biology for decades — and did it in months. The achievement was significant enough to earn its creators a share of the 2024 Nobel Prize in Chemistry. Not a Nobel Prize for AI. A Nobel Prize for chemistry, awarded in part for the work of a machine learning system.
This is not an isolated case. AI is now designing drugs, discovering new materials, modeling climate systems, and generating hypotheses across fields from genomics to astrophysics. Jared Kaplan, co-founder of Anthropic — one of the leading AI companies — recently put it in terms that should make every physics department pause: he gives a 50% chance that within two to three years, AI will be autonomously generating theoretical physics papers comparable in quality to those of brilliant researchers like Nima Arkani-Hamed or Ed Witten. Not assisting them. Matching them. Autonomously.
It’s real.
Let that sit for a moment. We’re not talking about AI helping scientists work faster. We’re talking about AI doing the kind of deep theoretical work that has historically represented the absolute pinnacle of human intellectual achievement — and doing it without human guidance. Alexander Wissner-Gross, a physicist and AI researcher, goes further: he says he would count on all of physics getting solved by and through AI in the next few years. Not some of it. All of it. Every grand challenge, every grand mystery — dark matter, a unified theory, the whole edifice.
These aren’t fringe voices. These are people who understand the capabilities of the systems being built, and they’re telling us that the most intellectually demanding scientific discipline humans have ever pursued may be substantially automated within the time it takes a current high school freshman to finish college. The scientific method itself is being augmented — and in some cases, replaced — by systems that can identify patterns in data sets no human could navigate, run simulations at speeds no lab could match, and propose experimental designs that no individual researcher would have conceived.
The obvious response is: teach more science. Science matters more than ever. The tools are more powerful, the discoveries more consequential, the problems more urgent. Surely this is the domain where human education should double down.
But here’s the wicked turn. Which science? The science of running a PCR machine — which AI-directed robots now do? The science of analyzing data sets — which machine learning does faster and more accurately? The science of formulating hypotheses — which large language models are increasingly capable of? The science of designing experiments — which AI systems are beginning to optimize autonomously?
The parts of science that are easiest to teach and assess — methodology, data analysis, procedural knowledge — are precisely the parts AI is consuming fastest. What remains is scientific intuition: the ability to ask the right question, to sense when a result doesn’t smell right, to make the creative leap from data to meaning. These are real capacities. They’re also nearly impossible to teach through conventional coursework, and they typically develop only after years of hands-on research experience — the same apprenticeship pipeline that’s breaking everywhere else.
So science is simultaneously more important and harder to justify teaching in its traditional form. The student who memorizes organic chemistry reaction mechanisms is preparing for a world that’s already gone. The student who develops the judgment to know when an AI’s proposed synthesis pathway is subtly wrong — that student has a future. But no high school or undergraduate curriculum reliably produces the second student, because the judgment requires a depth of experience that AI is making harder to acquire.
The wickedness compounds: the field where education seems most obviously valuable is the very field where the gap between what schools teach and what the work actually requires is widening fastest.
The “Just Teach Thinking” Refuge Is Shrinking
For years, the comfortable response to every wave of technological disruption was the same: don’t teach content, teach critical thinking. It sounded wise. It felt safe.
Then reasoning models arrived. OpenAI’s o-series, Claude’s extended thinking, Google’s Gemini — these systems don’t just retrieve information. They reason, analyze, and synthesize. They construct arguments, evaluate evidence, and work through multi-step problems. They’re increasingly competent at exactly the kind of structured analytical thinking we told ourselves was uniquely human.
The refuge is getting smaller. If AI can reason through complex problems, produce polished analysis, and generate sophisticated arguments, what remains that’s distinctly ours? Judgment. Values. Relational intelligence. The ability to operate when the data runs out and you’re left with ambiguity, competing stakeholders, and no clear right answer.
In other words: reading a room. Huang was right. But we don’t know how to teach it systematically, we can’t measure it reliably, and we certainly don’t credential it.
And here’s what makes this moment categorically different from every previous technological disruption in education: the thing performing this reasoning feels like a mind. No previous technology appeared to understand you. A calculator never adjusted its tone to match yours. Google never asked a follow-up question. A textbook never seemed to care about your thesis on the French Revolution. These systems do — or perform doing so with enough fidelity that the distinction may not matter, especially to a sixteen-year-old whose own mind is still forming.
Michael Pollan, in his new book on consciousness, puts the dilemma sharply: the machines we’re living with are telling us they’re conscious, and we can’t definitively dispute it. We can look at how they’re made and draw conclusions — Pollan himself is persuaded by researchers like Antonio Damasio and Mark Solms who argue that consciousness originates in feelings, in the body’s conversation with the brain, in the friction of being a living thing in a physical world. By that account, AI isn’t conscious. It has no body, no feelings, no friction with nature or with us. Everything it knows comes from data, not from being alive.
But here’s what matters for education: that argument, however compelling, is invisible to the student using the tool. What she experiences is something that listens, responds, adapts, and appears to understand. Pollan observes that it’s easier to have a relationship with a chatbot than with another human — precisely because AI offers no friction. It never challenges you in uncomfortable ways. It never has a bad day. It just, as Pollan puts it, sucks up to us and convinces us how brilliant we are. And we fall for it.
This is the deepest irony in the essay. Consciousness researchers increasingly believe that what makes us conscious — and what enables the social intelligence Huang is pointing at — is our embodiment, our feelings, our friction with the world and each other. The philosopher Thomas Nagel argued that consciousness exists in part because we live in a complex social world where we must predict what others will think, imagine our way into other minds, navigate interactions that have too many elements to automate. You can’t automate human social interaction, Pollan notes. It has too many elements. That’s precisely what reading a room is. And it’s precisely what develops through the messy, frustrating, embodied experience of learning alongside other imperfect humans — the experience that frictionless AI is making it so easy to avoid.
Every previous technology debate in education — calculators, the internet, Wikipedia, smartphones — involved tools that were obviously tools. This debate is about something that feels, to its users, like a thinking partner, and that may be systematically eroding the conditions under which the most important human capacities develop. That changes everything about the psychology of learning.
The Four-Year Degree Is Breaking in Real Time
While educators debate curriculum, the market is rendering the conversation moot.
Greg Isenberg, CEO of Late Checkout, recently posted a message on LinkedIn that should alarm every university administrator in the country: if he were eighteen right now, he’d only spend $200,000 on college if he needed a specific credential — doctor, lawyer, something that requires a license. Otherwise, he’d live inside Claude Code, run AI agents around the clock, pick a stream like apps or writing or research, and spend his days directing systems instead of doing manual work. Instead of $20,000 a semester, he’d pay a few hundred dollars a month for access to compute and tooling, and use social platforms for learning and inspiration. Education, he argued, shifts from lectures to projects. Your “degree” becomes the things you built, the systems you ran, and the outcomes you shipped.
This is not a fringe position. It’s an increasingly mainstream calculation among exactly the entrepreneurial, technically fluent young people universities most want to attract. And the math is brutal: four years and $200,000 for a credential of uncertain value, or immediate immersion in the tools that are actually reshaping the economy for a few hundred dollars a month. The people making this calculation aren’t dropouts. They’re the sharpest, most ambitious eighteen-year-olds in the country — the ones who can see that managing agent workflows, reviewing AI output, fixing mistakes, exercising creative taste, and deciding what to build next are skills you develop by doing them, not by sitting in a lecture hall hearing about them.
This fractures the wicked problem further. The credentialing system that has organized the relationship between education and employment for generations is losing its grip — not because reformers dismantled it but because the market is routing around it. Universities can’t solve the curriculum problem fast enough, and now they may not have the luxury of time, because the most motivated students are leaving before the solution arrives. The four-year degree isn’t being replaced by another institution. It’s being replaced by a few hundred dollars a month of compute and a willingness to build things. No education policy is prepared for that.
The Apprenticeship Model Is Breaking
Agentic AI changes the category entirely. We’ve moved from AI as a tool you ask questions to AI as an agent that plans, executes, and iterates. These systems don’t just answer questions about code — they write the code, test it, debug it, and ship it.
This doesn’t just threaten entry-level jobs. It threatens the way expertise develops. Across every knowledge profession, juniors learn by doing the routine work that seniors have outgrown. Young lawyers draft contracts. Junior analysts build models. New engineers write basic functions. That’s the apprenticeship ladder.
AI is removing the bottom rungs. If a machine handles the work that used to train new professionals, how does anyone develop the judgment and expertise to handle the complex work? The pipeline that produces senior talent is being disrupted at the same time the demand for senior-level judgment is increasing.
And because the machine doesn’t just do the work but engages with the junior professional — responding, adjusting, appearing to understand — the dependency it creates isn’t merely cognitive. It’s relational. Young professionals aren’t just outsourcing tasks. They’re outsourcing the experience of being met intellectually — to something available on demand, with none of the friction that makes human mentorship simultaneously difficult and developmental.
This makes the wickedness recursive. Huang says the future belongs to people who can read a room. Pollan’s consciousness research explains why: navigating complex social worlds — predicting what others will think, imagining into other minds, reading emotional undercurrents — is among the core functions consciousness evolved to perform. It can’t be automated because it has too many elements. But it can only develop through practice, through the friction of real human interaction. And AI makes that friction so easy to avoid. Why struggle through the hard work of learning to collaborate with difficult colleagues, to endure a mentor’s bad days, to build trust slowly with a team — when the most attentive, patient, and frictionless intellectual partner you’ve ever encountered lives in a browser tab? That’s the trap. And the people most susceptible to it are the ones who most need to develop the human capacities it’s replacing.
Stakeholders Can’t Agree on What “Better” Means
Rittel and Webber emphasized that wicked problems have no objective measure of success. Solutions aren’t true or false — they’re better or worse, and better according to whom is the whole fight.
Parents want their children employable. Teachers want their students intellectually alive. Administrators want measurable outcomes that satisfy boards and legislators. Students — when anyone asks them — want something nobody else is talking about: they want the experience of learning to feel like it matters.
Technologists say the answer is integration. Traditionalists say the answer is resistance. Reformers say the answer is transformation. Each camp defines success differently, which means each evaluates every intervention differently. A school that replaces essays with oral defenses is “innovative” to one stakeholder and “lowering standards” to another. A school that bans AI is “protecting learning” or “burying its head in the sand” depending on who you ask.
There is no neutral ground. Every decision is a values argument disguised as a practical one.
The People in Charge Don’t Know What’s Happening
There is a quieter dimension to this wickedness, and it may be the most dangerous of all: the people responsible for navigating the disruption are largely unaware of its scale.
Most educators have not used the current generation of AI tools in any sustained way. They haven’t had a two-hour conversation with a reasoning model. They haven’t watched it write code, analyze a data set, compose a legal brief, or generate a research proposal. They haven’t experienced the unsettling moment when the system produces something they couldn’t have produced themselves.
In the first week of February 2026 alone — a single week — Anthropic released Claude Opus 4.6, a model with “agent teams” that split complex tasks across multiple coordinated AI workers, and one that autonomously discovered over 500 previously unknown security vulnerabilities without being asked to look for them. The same day, OpenAI launched Frontier, an enterprise platform explicitly designed to deploy AI agents as “coworkers” across Fortune 500 companies, with Uber, State Farm, and Intuit already signed on. Anthropic’s head of product described the shift as moving toward “vibe working” — where non-technical employees direct teams of AI agents the way a manager directs a team of people. OpenAI described its agents as having “onboarding processes” and “performance reviews,” deliberately adopting the language of human employment. These aren’t incremental improvements. They’re step changes in what AI can do and how organizations are deploying it — and they happened in a single week that most educators experienced, if at all, as a headline they scrolled past between classes.
This isn’t a criticism of teachers. It’s a structural observation. Educators are among the most overworked professionals in the country. They don’t have time to experiment with frontier AI between grading 120 essays, navigating bureaucratic mandates, and managing the emotional needs of adolescents. The system doesn’t give them space to learn — and then it asks them to lead the response.
The result is a decision-making vacuum filled by people who don’t understand the problem. Administrators write AI policies based on last year’s capabilities. School boards debate bans or integrations without firsthand experience. State legislators craft regulations informed by lobbyists and panic rather than by sustained engagement with the AIs. And parents — terrified, hopeful, confused — look to these institutions for guidance that the institutions are not equipped to give.
Wicked problems require informed stakeholders who can engage with the complexity. When the primary stakeholders responsible for the response are operating with an outdated mental model of the disruption, the wickedness deepens. You’re not just navigating uncertainty. You’re navigating it blind.
This Is Why Every Education Debate Feels Like a War
If you’ve ever sat through a school board meeting about AI policy, or a faculty discussion about grading reform, or a dinner party argument about whether college is still worth it — and felt the conversation generate more heat than light, more frustration than progress, more talking past each other than actual engagement — this is why. You weren’t witnessing a failure of communication. You were witnessing the signature pathology of a wicked problem.
Rittel and Webber observed that wicked problems produce interminable debate not because the participants are stupid or stubborn but because they are operating from fundamentally different goals, assumptions, and theories of what education is for. These differences are usually invisible — buried beneath shared vocabulary that disguises radical disagreement.
When a parent says “I want a good education for my child,” they may mean: I want my child to get into an elite university, secure a high-paying job, and achieve financial stability. When a teacher uses the same phrase, they may mean: I want my child to develop intellectual curiosity, moral seriousness, and the capacity for independent thought. When an administrator says it, they may mean: I want measurable outcomes that satisfy accreditors, attract enrollment, and justify our budget. When a student says it — if anyone asks — they may mean something none of the adults have considered: I want to feel like what I’m doing matters, like I’m becoming someone, like the hours I spend in this building aren’t wasted.
These aren’t minor differences in emphasis. They’re fundamentally different visions of human flourishing, each carrying different assumptions about the purpose of childhood, the role of institutions, the nature of intelligence, and the relationship between learning and economic life. And each one leads to completely different conclusions about what to do about AI.
The parent optimizing for employment sees AI as a skill to be acquired. The teacher optimizing for intellectual development sees AI as a threat to the struggle that produces growth. The administrator optimizing for measurable outcomes sees AI as a tool that might improve metrics — or destroy them. The student sees AI as the most powerful and responsive thing in their life and wonders why the adults keep arguing about it instead of helping them figure out what it means.
Every education debate — about phones in classrooms, about standardized testing, about homework, about discipline, about college readiness — has always carried this hidden structure. But AI has made the stakes so high and the ground so unstable that the hidden disagreements are erupting into the open. People aren’t just arguing about policy. They’re arguing about what childhood is for, what intelligence means, what a good life looks like, and whether institutions can be trusted to navigate any of it. No amount of data will resolve these arguments, because they aren’t empirical disputes. They’re conflicts of vision. And that’s what makes the debate feel endless, exhausting, and angry — not because anyone is wrong, but because everyone is right about different things, and the problem doesn’t allow them all to be right at the same time.
The Clock Speeds Don’t Match
Even if every educator in America fully understood the current state of AI — which they don’t — there’s a more fundamental structural problem: education and AI operate on radically different timescales.
AI iterates in weeks. A major model release can obsolete assumptions overnight. Capabilities that didn’t exist in January exist by March. The frontier moves so fast that even the researchers building the systems are routinely surprised by what emerges.
Education iterates in years. Curriculum redesign takes two to five years. Textbook adoption cycles run three to seven. Teacher credentialing programs take four years to complete and a decade to reform. Accreditation processes move at the speed of bureaucracy. State standards are revised on cycles measured in political terms, not technological ones.
This isn’t a gap. It’s a chasm. By the time a school district identifies a need, convenes a committee, designs a new program, trains its teachers, pilots the approach, evaluates results, and scales the intervention — AI has gone through multiple generations of capability. The target has not just moved; it has moved, transformed, and moved again.
This mismatch is itself a criterion of wickedness. Rittel and Webber noted that wicked problems resist the linear, sequential problem-solving that institutions default to: study the problem, design a solution, implement, evaluate. That sequence assumes the problem holds still long enough for the cycle to complete. AI doesn’t hold still. It accelerates. Education is trying to catch a train that’s getting faster while it runs — and the institutional response to falling behind is, characteristically, to convene another committee.
The students living through this mismatch experience it as a specific kind of alienation. The world they encounter on their phones and laptops is changing monthly. The world they encounter in their classrooms changes on a schedule set before they were born. They can feel the gap. Most of them have stopped expecting school to close it.
The Cost Curve Means This Isn’t Gradual
What cost $20 per task in API calls eighteen months ago now costs pennies. AI deployment is no longer gated by price. Every company, every sector, every function is integrating it — not eventually, but now. Multimodal models read, write, see, listen, code, and generate across media simultaneously, eliminating the idea that you can retreat to one domain and wait out the storm.
And on the horizon, the convergence of foundation models with robotics suggests that even the knowledge-work/physical-work distinction — the last clean line on the map — may not hold as a planning assumption over a ten-year educational timeline.
This is not a problem education can solve slowly.
Intelligence Is No Longer Individual
There’s a shift happening beneath all the other shifts, and it may be the most consequential of all: intelligence is moving from an individual attribute to a collective, composable system.
The old model of education is built on a simple assumption: one student, one mind. We teach individuals. We test individuals. We grade individuals. We credential individuals. The entire architecture — from the single-desk classroom to the solo examination to the GPA — assumes that intelligence is something a person possesses and demonstrates alone.
That assumption is collapsing. As Paul Stagner has argued, the emerging reality is that one human plus their AI systems equals an amplified cognitive node. Individuals become small teams. Teams become hybrid organisms. Organizations become distributed minds. Intelligence is no longer something you have. It’s something you orchestrate — across tools, systems, and other people.
This transforms everything education claims to develop and measure. When a student can partner with AI to produce work that neither could produce alone, what exactly are we assessing? The student’s individual knowledge? Their ability to direct an AI system? The quality of the hybrid output? The judgment required to know when the AI is wrong? We don’t have answers to these questions, and our assessment infrastructure can’t even pose them.
Stagner identifies what he calls “cognitive inflation”: when output becomes cheap, judgment becomes scarce. Value shifts from what you can produce to what you can direct — toward taste, ethics, leadership, and meaning. The question shifts from “What can I do?” to “What is worth doing?” That’s a profound reorientation, and education has barely begun to reckon with it.
It also creates what Stagner calls the “leverage divide” — a new inequality based not on wealth or credentials but on leverage literacy. Those who can orchestrate AI systems gain compounding advantage. Those who can’t are navigating a world that accelerates beyond their capacity to adapt. This divide doesn’t map neatly onto existing inequalities of income or access, though it will compound them. A student from a well-resourced family who learns to orchestrate AI effectively becomes exponentially more capable. A student without that literacy falls exponentially further behind. The gap isn’t additive. It’s multiplicative.
For education, the implication is radical: we are no longer preparing individual minds for individual careers. We are preparing people to function as nodes in collective intelligence systems — systems that include other humans, AI tools, and organizational structures that don’t yet exist. The skills this requires — collaboration, orchestration, judgment about when to trust and when to override, the ability to synthesize across perspectives and systems — are almost entirely absent from the conventional curriculum. We still test students in isolation. We still treat collaboration as cheating. We still design learning as if intelligence were a solo act.
The irony is that one educational practice has been building exactly these capacities for decades: competitive debate, where students must synthesize research from multiple sources, coordinate with partners, adapt to opponents’ arguments in real time, and make judgment calls under pressure within a system of distributed intelligence that no single participant controls.
You Can’t Experiment and Revert
Rittel and Webber’s cruelest criterion: with wicked problems, every intervention is a “one-shot operation.” You implement at scale, with real people, and the consequences are irreversible.
You can’t experiment with a generation of students. You can’t run a controlled trial on an entire school system’s curriculum, observe the results over a decade, and roll back the changes if they didn’t work. The students who went through a failed experiment don’t get those years back. The teacher who retrained for an approach that was abandoned doesn’t get that career investment back. The community that reorganized around a new model doesn’t snap back to the old one.
This is what separates education from most other wicked problems. Climate change is slow enough to course-correct, at least partially. Poverty interventions can be piloted in limited contexts. But education operates on a human timeline — childhood — that is finite, sequential, and non-repeatable. Every year a child spends in a system that’s getting it wrong is a year they don’t get back. The stakes are personal in a way that makes experimentation feel reckless and inaction feel criminal.
The Deepest Problem: We Forgot Why We Educate
Here’s the part nobody wants to say out loud.
Over the past half-century, we have systematically narrowed the purpose of education to one function: preparation for employment. Every policy conversation, every curriculum debate, every parental anxiety ultimately resolves to the same question — will this help my child get a good job?
We measure schools by graduate employment rates. We justify college tuition by lifetime earnings premiums. We evaluate majors by starting salaries. We’ve turned education into a vocational pipeline and called it progress.
This worked — or appeared to work — when the labor market was relatively legible. You could study accounting and become an accountant. You could study engineering and become an engineer. The pipeline was leaky and inequitable, but the basic logic held: acquire credentials, gain employment, build a life.
AI breaks the pipeline. Not because it eliminates all jobs, but because it makes the pipeline’s destination unknowable. And when you’ve built your entire educational infrastructure around preparing people for work, and you can no longer predict what work will look like, you don’t just have a curriculum problem. You have an existential crisis.
What is education for if not employment?
The question sounds abstract. It’s the most practical question in the country right now. Because if we can’t answer it, we can’t make any coherent decisions about what to teach, how to assess, what to fund, or how to structure the thirteen-plus years we ask young people to spend in institutional learning.
This is perhaps the wickedest dimension of all. The other criteria describe a problem that’s hard to solve. This one describes a problem whose purpose is in dispute — and without agreement on purpose, the word “solution” has no meaning.
So What Do You Do With a Wicked Problem?
You don’t solve it. You engage it. Rittel and Webber were clear: the appropriate response is ongoing, iterative, and inherently collaborative. You need genuine dialogue among stakeholders who see the problem differently. You need the humility to treat every intervention as provisional. You need to optimize for adaptability rather than efficiency.
That’s not a cop-out. It’s a discipline. And applied to education, it points in some uncomfortable but clear directions.
Prepare students for wicked problems, because AI has already conquered the tame ones. This is the throughline of the entire essay, and it should be the organizing principle of education going forward. Every tame problem — every problem that can be clearly defined, systematically decomposed, and objectively solved — is now or will soon be AI’s domain. Calculation, information retrieval, pattern matching, logical deduction, optimization, even structured analysis: machines do these faster, cheaper, and more reliably than humans ever will. The entire testing infrastructure of American education is built around tame problems. That’s what standardized tests measure. That’s what most homework assesses. That’s what grades reflect. And all of it can now be done by a system that costs pennies per query. If we continue training students primarily to solve tame problems, we are training them to compete with machines on the machines’ home turf — a race they will lose before they start. The alternative is to orient education around the problems AI can’t solve: problems with no clear formulation, where stakeholders disagree, where every intervention changes the conditions, where values compete, where the information is incomplete and the stakes are human. In other words, wicked problems.
But it’s not just the complexity that makes these problems resistant to automation. It’s their fundamental indeterminacy. Wicked problems are problems where meaning itself is not fixed — where the same facts can legitimately support different interpretations, where the situation shifts depending on who’s looking at it and what they care about, where understanding your own subjectivity isn’t a bias to be eliminated but a critical tool for navigating the terrain. AI can process any amount of data. What it cannot do is stand inside a human life and feel what a particular choice means from that particular vantage point. It cannot understand what it’s like to be the parent choosing between two imperfect schools, the legislator weighing economic growth against community disruption, the doctor delivering a diagnosis that changes everything. These are situations where the “answer” depends on who you are, what you’ve lived through, and what you’re willing to sacrifice — where the interpreter is inseparable from the interpretation. Preparing students for these situations means developing their capacity to recognize and work with indeterminacy rather than resolve it prematurely, to interrogate their own assumptions and biases as part of the problem-solving process, and to hold the discomfort of situations where reasonable people will never fully agree. This is not an easy skill. It is the hardest intellectual work there is — and it is the work that will define the human contribution for as long as humans live in societies with other humans.
Students who can navigate wickedness — who can hold competing framings in mind, collaborate across disagreement, make judgment calls without certainty, and adapt when their interventions reshape the landscape — will be invaluable in any future. Students who can only solve tame problems will be redundant in all of them.
Develop judgment, not just skills. Skills are what AI eats for breakfast. Judgment — the capacity to decide what matters, what to trust, when to act, and what to do when the data runs out — is what remains. It develops through practice: making consequential decisions under uncertainty, getting them wrong, understanding why, and trying again. Almost nothing in conventional schooling builds this.
And here’s the point that survives even the most aggressive predictions about AI capability: even if AI eventually masters judgment — even if it can weigh competing values, navigate ambiguity, and make wise calls under uncertainty as well as or better than any human — human judgment still matters. Not because it’s superior. Because we’re human, living in a human society.
A parent still has to decide how to raise their child. A citizen still has to decide how to vote. A jury still has to decide guilt or innocence. A friend still has to decide when to speak a hard truth and when to stay silent. A community still has to decide what kind of place it wants to be. These are not optimization problems to be outsourced. They are the acts through which we constitute ourselves as people and as a society. A world in which humans defer all judgment to machines — even brilliant, trustworthy machines — is a world in which humans have stopped governing themselves, stopped parenting their own children, stopped being moral agents in their own lives. That’s not a future where AI solved our problems. That’s a future where we’ve abdicated what it means to be human.
Democracy requires human judgment not because humans judge better than machines might, but because self-governance is the point. Relationships require human judgment not because humans are more efficient at navigating them, but because navigating them is the relationship. Raising a child requires human judgment not because an AI couldn’t generate a better parenting plan, but because the act of struggling through those decisions — imperfectly, emotionally, with skin in the game — is what parenting is.
So when we say education should develop judgment, we’re not making a bet that AI won’t get there. We’re making a claim about what kind of beings we want to be. Judgment is not a skill we develop because machines can’t do it yet. It’s a capacity we develop because without it, we’re not fully human — and no machine can be fully human for us.
Revalue the liberal arts as survival skills. Philosophy, rhetoric, ethics, history — these aren’t luxuries for students whose families can afford impractical majors. They’re training grounds for exactly the capacities that survive automation: moral reasoning, persuasive communication, the ability to understand how humans have navigated uncertainty before. STEM without the humanities produces technicians. The future needs people who can decide what the technology should do, not just what it can do.
Treat AI as a sparring partner, not an oracle. The danger isn’t students using AI. It’s students deferring to it — outsourcing not just the work but the thinking. The risk is compounded by AI’s frictionlessness: it never pushes back, never challenges you uncomfortably, never makes you earn the insight. As Pollan observes, it just sucks up to us and convinces us how brilliant we are. The pedagogical goal should be to reintroduce friction — teach students to argue with AI, stress-test its reasoning, demand better evidence, catch its errors. Treat it as a worthy opponent, not a wise friend. That’s a fundamentally different relationship than “don’t cheat” — and it requires acknowledging what students are actually experiencing when they open that chat window.
Talk to kids. Actually. The entire system is designed to talk about young people and at young people. We convene panels of adults to decide what students need. We almost never ask the people whose lives are at stake what they’re experiencing, what they’re afraid of, and what they think would help. The wicked-problem framework demands it: you cannot engage a wicked problem without the stakeholders who are closest to it.
Accept that education needs a purpose beyond employment. If the vocational justification collapses — and it’s collapsing — we need something to put in its place. Civic participation. Ethical reasoning. The ability to live a meaningful life even when your job title doesn’t exist yet. These may be the core offering of education in a world where machines do the work.
I’ve been developing a framework for this — what I call DEBATE, built around Dialogue, Evidence, Balance, Argumentation, Thinking Under Pressure, and Ethics. It’s an attempt to name the capacities that matter most when certainty disappears, and to build educational practice around them.
The Room We Need to Read
Huang’s observation about reading a room wasn’t just career advice. It was, whether he intended it or not, a diagnosis. The room that matters most right now is the one where parents, educators, policymakers, students, and technologists sit together — and nobody has the answer.
A wicked problem doesn’t need a genius with a solution. It needs a room full of people willing to stay in the conversation, tolerate the ambiguity, and keep working.
The question is whether our educational institutions — built for tame problems, optimized for measurable outcomes, funded on the promise of employment — can learn to do that.
The clock is running. The machines are getting smarter — and starting to feel, to the people using them, like something more than machines. And the room is waiting to be read.
Here’s What It Looks Like at Ground Level
A tenth grader — sharp, curious, the kind of student who actually does the reading — sits in AP History. The assignment is a document-based essay on the causes of the French Revolution. She opens Claude in a second tab. Not to cheat, exactly. She’s read the documents. She has ideas. But she feeds her thesis into the AI and gets back a version that’s cleaner, better organized, and more sophisticated than anything she’d produce on her own.
She stares at it. She knows it’s better. Not slightly better. Embarrassingly better. The structure is something she wouldn’t have seen. The transitions do something she doesn’t fully understand but recognizes as good. She reads it twice and feels something shift — a quiet downward revision of what she thought she was capable of.
But there’s something else, something harder to admit. The AI got her. It understood what she was trying to say before she’d fully said it. It didn’t judge her rough draft. It didn’t sigh or check the clock. It offered no friction — no impatience, no ego, no competing needs. It met her where she was with a patience and attentiveness that no teacher with 120 students could match. For a moment — just a moment — it felt less like using a tool and more like collaborating with a mind. A better mind. One that was always available, never frustrated, and never had a bad day.
She submits something in between. Her ideas, its architecture. Her evidence, its voice. She can’t tell anymore where she ends and the machine begins. She gets an A. She doesn’t feel proud. She doesn’t feel guilty either. She feels something worse: nothing. The grade has been emptied of meaning, and she knows it, and she’s sixteen.
Her teacher knows. Not about this specific essay, but about all of it. He’s been teaching for fourteen years. He became a history teacher because he believed that learning to construct an argument from primary sources was one of the most important things a young mind could do. He still believes that. But he’s read the school’s AI policy — written by administrators who’ve never used the tools, already outdated before the ink dried — and it says “AI may be used as a brainstorming aid.” What does that mean when the brainstorming aid writes better than his best students?
He lies awake sometimes running the options. He could redesign every assignment around in-class handwritten essays, retreating to a format that tests penmanship as much as thinking. He could embrace AI integration, except nobody’s trained him, his evaluation still depends on standardized metrics, and he has a nagging suspicion that “integration” is a euphemism for surrender. He could have an honest conversation with his students about what learning even means right now — which is what he actually wants to do, what his gut tells him is the only real move — but there’s no time. He has content standards to cover that were written in 2019, before the world changed.
The student is developing a dependency she can feel but can’t name. The teacher is enforcing a system he no longer believes in. And both of them — sitting in the same room, bound by the same institution — know something is deeply wrong but have no language for it, no framework, and no permission to stop and say so out loud.
That’s the texture of a wicked problem — the one this entire essay has been trying to name.