The traditional markers of academic excellence—memorizing facts, achieving high test scores, and processing information quickly—are rapidly becoming obsolete as AI systems surpass human capabilities in these areas. Our competitive advantage now lies in distinctly human capacities: the breadth of perspectives we bring to problems, our capacity for nuanced judgment, our resilience in learning from setbacks, our moral reasoning, and our ability to continuously evolve our thinking. These qualities represent the new definition of merit, and they remain relevant no matter how far AI advances, since humans will always prioritize human relationships and needs.
Educational institutions must fundamentally shift their focus toward nurturing these human-centered skills. Universities should redesign their admissions criteria to recognize and value these capabilities, then structure their programs to strengthen them throughout students' academic journeys. The future belongs not to those who can outcompute machines, but to those who can think, adapt, and contribute in ways that remain essential to human society.
Introduction
Picture this: It's 1960, and Harvard decides to admit students based on a single criterion—who can save the most crops from rotting before harvest. The top applicants boast about their superior root cellars, their keen eye for spotting decay, their lightning-fast sorting techniques. By every measure that mattered in 1860, these students demonstrate pure "merit."
Of course, this sounds absurd. Refrigeration had already revolutionized food storage. What seemed like essential skills a century earlier had become charmingly obsolete.
Yet we're making the exact same mistake today.
In an age where AI masters traditional academic skills faster than any human ever could, we're still measuring merit by standards that may already be as outdated as crop preservation techniques.
If university admission is truly to be based on competitive merit, we need to redefine which skills are actually meritorious in an AI-dominated world. Universities should admit students based on these criteria and then deliberately develop these capabilities in their students.
What Do Current Criteria Measure?
SATs reward speed-based pattern recognition and favor students who can afford test prep. They don’t assess creativity, ethics, or depth of thought—just mastery of testable formulas.
AP Exams mostly assess memorization and conformity. They teach students to anticipate rubrics rather than question them.
Grades prioritize compliance, rule-following, and predictability. Students quickly learn to perform rather than think, tailoring their work to teacher preferences.
Essays have become performative. Coached narratives dominate, making it easier for the wealthy to package themselves as ideal candidates.
Each of these measures shares a common flaw: they assess performance within predetermined frameworks rather than the ability to create new frameworks. They reward optimization of existing systems rather than the capacity to envision better ones. In an AI age where machines excel at pattern recognition, rule-following, and rapid information processing, universities continue selecting for students who think like (though nowhere near as well as) very sophisticated computers.
But Is It Broken? What If Universities Keep Focusing on Pattern Matching?
The Internal Logic Still Works
A student who excels at SAT pattern recognition will likely perform well on university exams that test similar skills. Someone who masters AP-style essay writing will succeed in courses that reward the same formulaic approaches. High school grades do predict college grades when both systems reward the same behaviors: compliance, memorization, pattern recognition, and working within established frameworks.
From this perspective, universities are being entirely rational. They're selecting students who will thrive in their existing educational model. The admissions process isn't broken—it's precisely calibrated to identify students who can navigate traditional academic structures successfully.
The Deeper Institutional Inertia
This creates a self-reinforcing cycle that's actually more problematic than a simple measurement error. Universities use these metrics because they work within their current system, which gives them little incentive to change that system. Students optimize for these metrics because that's how they gain admission, which reinforces the university's belief that these are the right students. Faculty continue teaching in ways that reward these same skills because that's what their students are prepared for.
The result is institutional inertia at a massive scale. Universities have created a closed loop where their selection criteria justify their teaching methods, and their teaching methods justify their selection criteria. Everyone succeeds within the system, so the system appears to be working perfectly.
The Education-Real Life Disconnect
The Hidden Cost of Consistency
But this internal consistency comes at an enormous opportunity cost. Universities could be developing entirely different capabilities in their students—teaching them to grapple with ambiguous problems, to collaborate across disciplines, to question fundamental assumptions, or to create rather than consume knowledge. They could be fostering the kinds of minds that complement rather than compete with AI.
Instead, they're optimizing for a kind of education that produces graduates who excel at tasks that are rapidly becoming automated. The university experience becomes increasingly disconnected from the world students will actually inhabit after graduation.
The Employer Disconnect
This explains why employers increasingly complain about new graduates who can ace tests but struggle with open-ended problems, who can write perfect essays but can't communicate effectively in teams, who can memorize vast amounts of information but can't synthesize insights across domains. The university system, and the K-12 pipeline that supports it, is working exactly as designed—it's just designed for a world that no longer exists.
The Institutional Trap
Perhaps most troubling, this creates a trap for universities themselves. They become skilled at producing a very specific type of student success while losing the ability to cultivate other forms of intellectual development. Faculty who might want to teach differently face students who are optimized for traditional approaches. Administrators who might want to innovate face pressure to maintain metrics that demonstrate "effectiveness" within the existing paradigm.
The system becomes its own justification, making it incredibly difficult to change even when external conditions shift dramatically. Universities aren't failing by their own standards—they're succeeding so completely at their traditional mission that they can't see why they might need a different mission altogether.
This institutional consistency actually makes the problem more urgent, not less so. It means the disconnect between education and the external world will continue to widen until some external force—employer demands, student debt crises, or competition from alternative educational models—forces a reckoning that the institutions themselves seem unable to initiate.
The Faculty Panic Response
Faculty are experiencing something close to professional vertigo. They've built careers around teaching and evaluating skills that ChatGPT can now demonstrate at superhuman levels. A literature professor who spent years teaching students to write five-paragraph essays suddenly faces the reality that AI can produce dozens of them in minutes. A math instructor who prided themselves on helping students master calculus problems watches AI solve complex equations instantly while showing its work.
This isn't just about cheating—it's about the terrifying realization that the core value proposition they've offered students has evaporated overnight. If AI can write better essays, solve more complex problems, and analyze literature more thoroughly than most students, what exactly are faculty members teaching that has lasting value?
The Impossible Defense
Universities are now trapped in an increasingly absurd position: they must keep AI out precisely because letting it in would reveal how little their current educational model actually develops students. Consider the logical impossibility of their situation:
They can't allow AI in assignments because students would immediately outperform traditional expectations.
They can't ban AI from society, where students will use it constantly after graduation.
They can't acknowledge AI's capabilities without admitting their curriculum is largely obsolete.
They (supposedly) can't redesign their programs quickly enough to stay ahead of AI's rapidly expanding abilities.
The Whack-a-Mole Desperation
Watch how frantically universities are trying to plug holes in a rapidly sinking ship. They're implementing AI detection software, creating elaborate honor codes, designing "AI-proof" assignments, and requiring handwritten exams. But each defensive measure only highlights the absurdity of their position.
A university that requires students to write essays by hand in 2024 is essentially admitting that the skills they're teaching have no relevance to the world students will inhabit. They're creating artificial scarcity around abilities that have become abundant, like insisting students travel by horse-and-buggy to prepare them for a world of automobiles.
The Acceleration Dilemma
But here's the cruelest irony: AI isn't standing still while universities figure out their response. Every semester they spend trying to maintain the status quo, AI becomes more capable. The gap between what students could accomplish with AI assistance and what they're allowed to accomplish in university settings grows wider.
Students graduate having spent four years deliberately avoiding the tools they'll immediately need to use professionally. They've been trained in a form of intellectual anachronism, like learning to use slide rules in preparation for careers that require advanced calculators.
The Institutional Admission of Obsolescence
The very intensity of universities' efforts to exclude AI inadvertently admits that their current model cannot coexist with intelligent technology. If their educational approach truly developed irreplaceable human capabilities, AI wouldn't pose such a threat. The fact that they must create AI-free zones to maintain relevance reveals that they've been developing replaceable capabilities all along.
This is why faculty are being pushed over the edge. The current model simply cannot survive full contact with AI. Universities can either evolve rapidly toward developing human-enhancing capabilities, or continue this doomed effort to maintain relevance through technological prohibition—a strategy that grows more absurd with each passing day.
A New K-12 to University Pipeline: Redefining Merit in the Age of AI
The solution isn't to abandon merit-based admissions—it's to radically redefine what merit means in an AI-integrated world. Instead of measuring students' ability to replicate what machines now do better, universities should identify those who excel at human-augmented intelligence: designing experiments to test hypotheses no one else thought to pose, facilitating genuine dialogue between opposing groups, creating original works that reveal new perspectives, and synthesizing insights across disciplines. This requires reimagining the entire K-12 to university pipeline to assess students' capacity for ethical reasoning under ambiguity, their skill at asking profound questions rather than providing expected answers, and their potential to collaborate with AI systems to solve complex problems. The most meritorious students of the AI age will be those who can leverage artificial intelligence to amplify human wisdom, creativity, and insight.
Content Knowledge
Content Knowledge Remains Essential
Emphasizing AI-augmented learning doesn't mean abandoning content knowledge—it makes deep knowledge more valuable. Content knowledge serves as cognitive infrastructure: you can't analyze history without knowing it, engage in scientific reasoning without core principles, or make ethical judgments without philosophical frameworks. The question isn't whether students need this foundation, but how AI can help them build it more effectively than traditional methods.
AI as Personalized Learning Accelerator
AI means students may be able to learn more in the same amount of time (or less).
AI assistants provide truly individualized instruction, adapting to each student's pace, style, and knowledge level. Instead of one teacher delivering identical lessons to 30 diverse students, AI can explain photosynthesis using sports analogies for athletes, musical metaphors for musicians, and visual diagrams for spatial learners—simultaneously.
AI's 24/7 availability removes artificial learning constraints. Students can explore topics when curiosity strikes, get immediate feedback, and receive patient re-explanations without fatigue or frustration. AI continuously assesses what students know, skipping mastered material and precisely targeting knowledge gaps, allowing students to cover far more ground efficiently.
AI can present information through multiple perspectives, helping students develop nuanced understanding. It can explain the Civil War from economic, social, political, and cultural angles, or demonstrate how mathematical concepts apply across physics, engineering, and art.
Reexamining Curriculum Content
Educational curricula must systematically reexamine what we teach rather than layering new subjects onto an overloaded system. This requires courage to remove outdated requirements, not just add trendy subjects.
For example, the shift from calculus to statistics reflects our data-driven reality. While calculus serves specialized fields, statistics and data literacy have become essential life skills for understanding medical research, evaluating news claims, making financial decisions, and recognizing algorithmic bias (Dasey).
Similarly, basic computer literacy has evolved from helpful skill to fundamental requirement. Understanding how computers process information isn't just career preparation—it's about comprehending the infrastructure governing modern life. When banking, voting, medical devices, and communication all depend on computational processes, citizens lacking this knowledge become vulnerable to manipulation.
The goal isn't diminishing academic rigor but aligning educational priorities with intellectual tools students actually need to thrive as informed citizens and capable professionals.
The Fundamental Importance of Skills Beyond Content Knowledge
Diversity of Thought
As AI systems excel at optimizing for known patterns and generating statistically probable responses, diversity of thought—encompassing different cultural perspectives, problem-solving methodologies, creative approaches, and unconventional reasoning—represents a less replaceable human contribution to innovation and progress.
Students must understand that their individual cognitive styles, cultural backgrounds, and unique ways of connecting disparate ideas are not obstacles to overcome but assets to cultivate. Where AI might identify the most common solution to a problem, human diversity of thought reveals alternative pathways some algorithms might never consider—the artist's approach to engineering challenges, the historian's perspective on technological development, or the philosopher's questions about ethical implications. This cognitive diversity becomes especially crucial when tackling complex societal problems that require not just computational power but wisdom, empathy, and the ability to understand human nuance and contradiction.
The future belongs not to those who can compete with AI's processing speed or memory capacity, but to those who can think in ways that complement and direct artificial intelligence. Students who develop their unique perspectives, embrace unconventional approaches, and learn to value different ways of understanding problems will find themselves positioned to guide AI systems rather than be replaced by them. In a world where AI can generate endless variations on existing themes, the human capacity for genuine breakthrough thinking—informed by lived experience, cultural insight, and creative leaps that transcend pure logic—becomes the ultimate competitive advantage.
Learning to Learn
More important than any particular body of content knowledge, students must learn how to learn.
The ability to learn continuously has never been more critical than in our current age of exponential change. New knowledge is being generated at an unprecedented pace—from breakthroughs in quantum computing and CRISPR gene editing to discoveries about dark matter and the microbiome's role in mental health. Simultaneously, long-held assumptions are being overturned: we've learned that ulcers are caused by bacteria, not stress; that fat consumption doesn't directly correlate with heart disease as previously believed; and that neuroplasticity continues throughout life, contradicting decades of neuroscience doctrine. Entire industries are emerging seemingly overnight—think of how quickly cryptocurrency, social media management, and AI prompt engineering became legitimate career paths (and how prompt engineering may already be fading as one), or how electric vehicle manufacturing has transformed from niche to mainstream in just a few years. In this environment, students who can only apply pre-learned patterns will quickly become obsolete, while those who can rapidly acquire new frameworks, question existing assumptions, and synthesize information across disciplines will thrive. AI can execute known procedures, but it cannot replace the human capacity to identify what needs to be learned next, to recognize when old models no longer fit new realities, or to make creative leaps that combine disparate fields in novel ways (Bauschard).
Learning Through Iteration
True learning emerges not from perfect first attempts, but from the messy, iterative process of trying, failing, and refining. When we embrace failure as a teacher rather than an enemy, we unlock the most powerful learning mechanism humans possess: the ability to adapt and improve through experience. This iterative approach transforms mistakes from roadblocks into stepping stones, where each error becomes valuable data for the next attempt. The magic happens in the space between failure and success—in the reflection, the adjustment, the willingness to try again with new insight. Modern learning environments, enhanced by AI feedback systems, can create safe spaces for this experimental learning, where students can test ideas, receive immediate guidance, and iterate rapidly without the fear of permanent consequences. Rather than punishing wrong answers, we should celebrate the learning that emerges from the cycle of hypothesis, experiment, failure, and refinement. This is how real expertise develops: not through memorization of correct answers, but through the gradual accumulation of wisdom that comes from learning what doesn't work and why, building toward deeper understanding through persistent, reflective practice.
Learning to Fail and Succeed
Equally important is developing resilience through meaningful failure and iterative success—skills that standardized testing actively discourages. In the real world, breakthrough innovations emerge from cycles of experimentation, failure, and refinement. The entrepreneurs who built today's most valuable companies failed repeatedly before succeeding: Airbnb's founders sold cereal boxes to stay afloat, and Pandora's team was rejected by VCs over 300 times before securing funding. Scientists make countless "failed" hypotheses before major discoveries, and artists create hundreds of pieces they'll never show before producing their masterworks. Yet our educational system rewards students for avoiding failure entirely—getting the "right" answer quickly on standardized tests rather than wrestling with ambiguous problems that require multiple attempts. Students learn to play it safe, to stick with familiar patterns, and to view mistakes as defeat rather than data. This creates graduates who crumble when faced with the inevitable setbacks of real innovation, unable to pivot when initial approaches don't work, and paralyzed by the possibility of being wrong. In contrast, students who learn to fail forward—to extract insights from unsuccessful attempts, to iterate rapidly based on feedback, and to maintain motivation despite temporary defeats—will possess the emotional and intellectual toolkit necessary for navigating an uncertain future where AI handles the routine work and humans tackle the undefined problems (Seiji Isotani).
Judgment
Perhaps most critically, the AI era demands sophisticated human judgment—the ability to navigate ethical dilemmas, weigh competing values, and make decisions in contexts where there is no clear "right" answer. AI systems can process vast amounts of data and identify patterns, but they cannot grapple with questions that require moral reasoning, cultural sensitivity, or long-term consequence evaluation. These are inherently human questions.
Should a self-driving car prioritize the safety of its passengers over pedestrians? How do we balance privacy rights with public health benefits in contact tracing? When does AI-generated content cross the line from helpful tool to academic dishonesty? These questions require humans who can synthesize technical knowledge with ethical frameworks, consider multiple stakeholder perspectives, and make principled decisions under uncertainty. Yet our current merit system rewards students who can quickly select from predetermined multiple-choice options, not those who can thoughtfully deliberate when faced with genuinely complex trade-offs. Students trained only in pattern recognition lack the practice in moral reasoning, stakeholder analysis, and values-based decision-making that will define leadership in an AI-augmented world. The future belongs to those who can exercise sound judgment when the stakes are high, the information is incomplete, and reasonable people might disagree—precisely the scenarios where AI falls short and human wisdom becomes irreplaceable (Dasey).
The 5Cs
The skills that will define success in an AI-augmented world are fundamentally about human-to-human interaction and the application of technological capabilities to complex human affairs. Regardless of how sophisticated AI becomes, humans will always need to negotiate with other humans, build relationships with colleagues and customers, resolve conflicts within families and communities, and make collective decisions about how we want to live together.
The "5Cs"—critical thinking, creativity, collaboration, communication, and character—represent the essential bridge between AI's computational power and these enduring realities of human society. Critical thinking helps us evaluate AI-generated insights and determine how to apply them responsibly to real-world problems involving real people. Creativity enables us to frame problems in ways that leverage AI's strengths while addressing distinctly human needs, emotions, and values. Collaboration becomes crucial as we work in teams where some members are human and others are AI systems, but ultimately serve human purposes and answer to human stakeholders. Communication remains fundamentally human-to-human, whether we're explaining AI-driven recommendations to skeptical board members, building consensus around AI implementation strategies, or translating between technical possibilities and the concerns of worried parents or employees. Character provides the ethical framework for deciding when and how to deploy AI tools, especially in sensitive domains like healthcare, education, and criminal justice where human lives and dignity are at stake. These skills cannot be measured by standardized tests because they emerge through sustained interaction with other people, require contextual judgment about human motivations and fears, and depend on the kind of trust and rapport that develops only through shared human experience.
Discernment
In an era where information flows from countless sources—both human and artificial—discernment becomes perhaps the most crucial skill for navigating our complex information landscape. Students must learn to evaluate not just the accuracy of information, but its source, motivation, and reliability across the full spectrum of human and AI communication. This means recognizing when human sources are biased, emotionally compromised, or simply mistaken, while simultaneously identifying AI-generated content that may be statistically plausible but contextually hollow.
Human-generated misinformation can be just as dangerous as AI hallucinations—consider how human-spread conspiracy theories, politically motivated distortions, or well-intentioned but incorrect medical advice can cause real harm. Students need to develop sophisticated evaluation skills that account for human cognitive biases, emotional reasoning, and self-interest, while also understanding AI's tendency toward confident-sounding but potentially fabricated details. They must learn to triangulate between multiple human perspectives, cross-reference AI analysis with human expertise, and recognize when human intuition reveals something that data analysis misses—or when algorithmic processing catches patterns that human observers overlook due to their own limitations. This kind of discernment requires practice distinguishing reliable human expertise from opinion masquerading as fact, authentic human emotion from performative outrage, and genuine human insight from both deliberate deception and unconscious bias.
As both human and AI communication become more sophisticated and pervasive, the ability to evaluate information quality regardless of its origin will determine who can make sound decisions in an increasingly complex world.
AI Augmentation
Students must also learn to seamlessly integrate AI into their workflows and thinking processes, recognizing that AI will soon be as ubiquitous as electricity—embedded in their laptops, digital applications, robots, smartphones, smart glasses, contact lenses, wearable devices, and invasive and non-invasive brain-computer interfaces.
AI assistance will be available at every moment of their personal and professional lives. This doesn't mean becoming dependent on AI, but rather learning how to leverage it strategically while maintaining and developing their own cognitive capabilities.
Students need to understand when to rely on AI for rapid information processing, pattern recognition, or routine calculations, and when to engage their own critical thinking, creativity, and judgment. They must learn to communicate with and among AIs, interpret AI outputs critically, and combine AI-generated insights with human knowledge, intuition, and judgment.
The future belongs to those who can fluidly collaborate with AI systems, using AI to augment their human capabilities rather than replace them, and who understand how to direct AI's computational power toward solving complex human problems that require both machine precision and human wisdom (Pratschke).
Redefining Admissions Criteria
Redefining merit for college admissions requires a fundamental shift: from measuring students' ability to replicate what AI can now do better, to identifying those who excel in the distinctly human capacities described above.
Universities should admit students who have demonstrated command of the basic knowledge needed to thrive in an AI world.
Universities should prioritize applicants who demonstrate mastery of the "5Cs"—critical thinking, creativity, collaboration, communication, and character—alongside sophisticated discernment and the ability to learn continuously.
This means developing assessment methods that reveal a student's capacity for ethical reasoning under ambiguity, their skill at asking profound questions rather than providing expected answers, and their potential to synthesize insights across disciplines while leveraging AI as a collaborative tool. Rather than relying heavily on standardized test scores that measure pattern recognition and information recall, admissions processes should evaluate portfolios of original work, evidence of meaningful failure and iterative improvement, and demonstrations of how students have navigated complex human problems that require both technical knowledge and moral judgment.
Universities must then systematically redesign their academic programs to reinforce these redefined merit criteria throughout the collegiate experience. This involves restructuring curricula to emphasize human-augmented intelligence through interdisciplinary problem-solving courses, debates, ethics seminars that grapple with AI integration dilemmas, and collaborative projects that require students to work with both AI systems and diverse human teams.
Assessment methods should move beyond traditional exams toward portfolio-based evaluations, peer collaboration metrics, and real-world problem-solving challenges that cannot be easily gamed through AI assistance alone. Universities should create learning environments where students practice discerning between reliable human expertise and AI-generated content, develop resilience through meaningful academic risks and failures, and build the judgment skills necessary for making principled decisions in ambiguous situations.
By aligning both admissions criteria and academic programming around these human-centered capabilities, universities can ensure they're identifying and developing the leaders who will thrive in an AI-augmented world.
A Break in the Pipeline
If universities redesign their curricula to emphasize future-ready capabilities—like ethical reasoning, creativity, collaboration, and interdisciplinary problem-solving—but continue to admit students based on outdated metrics like SAT scores, GPAs, and AP exam results, they will create a fundamental mismatch that undermines their goals. These traditional metrics primarily reward speed, compliance, and pattern recognition, favoring students who have been trained to optimize for standardized frameworks rather than to think critically or adaptively. As a result, universities will fill their classrooms with students who excel at test-taking but may lack the very qualities the new curriculum is designed to cultivate: intellectual humility, a willingness to learn from failure, the capacity to iterate ideas through feedback, the ability to embrace diversity of thought, and the drive for continual learning in a world that changes faster than any syllabus can keep up with.
This disconnect is not just a matter of pedagogy—it threatens the integrity of the entire educational mission. Students selected under the old system may struggle with open-ended tasks, resist ambiguity, and feel lost when there isn’t a clear rubric to follow. At the same time, genuinely creative, resilient, and collaborative students—often from more diverse backgrounds and less privileged educational environments—may be excluded from admission entirely, not because they lack potential, but because they weren't optimized for a test-centric admissions game. This creates a cynical two-tier system: universities preach transformation but continue to reward conformity. Worse, it sends a mixed message to students that success still depends on gaming the system, not on developing meaningful capabilities. In the long run, such a misalignment will erode student trust, deepen inequities, and cause even the most ambitious reforms to collapse under the weight of outdated selection logic.
If we truly want to educate students who can thrive in an AI-augmented world—students who can learn continuously, collaborate across difference, recover from failure, and adapt to the unknown—then we must design an admissions system that finds and values those traits. K-12 institutions will then respond in a way that prepares students based on the new criteria. Otherwise, we are simply preparing yesterday's high achievers for a future they are not equipped to navigate.
Conclusion
Do you want to battle student AI use for the rest of your career?
The traditional markers of merit—memorization, standardized test scores, and formulaic essay writing—are rapidly becoming obsolete in an age where AI excels at precisely these tasks.
Universities, in their pursuit of internal consistency, have inadvertently created a self-reinforcing system that rewards skills easily replicated by machines, leading to a growing disconnect between education and the demands of the real world. This institutional inertia, coupled with a panicked reaction to AI, highlights a profound truth: the current educational model struggles to coexist with intelligent technology because it has historically cultivated capabilities that machines can now replicate.
We are at a critical juncture where continuing to measure students by these outdated standards risks preparing them for a world that no longer exists. Resisting AI is not only futile but also deeply damaging. By creating "AI-free" zones and engaging in a daily battle to prevent students from using these tools, universities are trapping themselves and their faculty in an unsustainable fight against a pervasive reality. Faculty are forced to become technological enforcers rather than educators, expending immense energy on policing behavior instead of inspiring genuine learning. This creates a deeply artificial environment, a bubble in which students are taught to operate without the very tools they will immediately use in their personal and professional lives.
To thrive in an AI-augmented future, we must fundamentally redefine merit. The focus must shift from rote learning to cultivating essential human capacities. This new definition of merit prioritizes human-augmented intelligence, emphasizing skills like judgment, ethical reasoning, continuous learning, adaptability, and the 5Cs: critical thinking, creativity, collaboration, communication, and character. Furthermore, discernment—the ability to critically evaluate information from both human and AI sources—will be paramount.
The future of education lies in preparing students not just with knowledge, but with the wisdom to apply it, the creativity to innovate, the ethics to guide their actions, and the resilience to navigate an ever-changing landscape alongside intelligent machines. The most meritorious students of tomorrow will be those who can seamlessly integrate AI into their workflows, amplifying human wisdom and insight.
The challenge now is for our educational institutions to embrace this redefinition of merit, adapting their curricula and admissions criteria to cultivate the truly indispensable human qualities that AI cannot replicate. Only then can we truly prepare the next generation to lead, innovate, and thrive in a world transformed by artificial intelligence.
This transformation represents a lot of work, but the alternative is maintaining a largely closed system that flirts with relevance rather than offering a direct stepping stone to success. It may not mean the end of universities, but it will likely mean their replacement by non-traditional learning pathways that prepare us for a non-traditional world. The pressure is already here; the only question is how universities respond. Will they transform themselves, or be replaced by non-traditional but more relevant learning “institutions”?
Stefan refers to students’ ability to learn more now than ever before. I touch on this in a proposal for a new set of AI-based Competencies that build capacity as an AI-augmented learner: https://stevecovello.substack.com/p/what-is-scholarship-in-the-age-of
If content knowledge and processing have been streamlined, then scale up scholarship.