AI x Higher Education: February 24 Update
Anand is heading off to a conference this week, so we are releasing a short update early.
Framing
I want to start this week’s Substack update by highlighting Jerry Crisci’s Framework.
I highlight the framework because it guides so much thinking on this blog and in my “AI and Education” work. Before we start using AI in the classroom, we first have to think about what the world we are educating students for will look like. Then we need to build a sense of urgency, seek permission for change, and engage in instructional redesign. As Anand notes in the podcast, computers are now thought partners, and that idea alone requires a change in how we approach the “classroom.”
The Hottest AI Article on the Internet
This week, the hottest article on the internet is the Citrini Research piece, The 2028 Global Intelligence Crisis. Co-authored with Alap Shah, it is a speculative thought experiment written as if from June 2028, imagining what happens if AI capabilities continue accelerating but the economic consequences turn deeply negative. In the fictional scenario (which, despite being fictional, is moving markets), AI-driven layoffs initially boost corporate margins and send the S&P to 8,000 by late 2026, but then a vicious feedback loop takes hold: companies replace white-collar workers with AI, the displaced workers stop spending, weakened consumer demand pushes more companies to cut costs with AI, and the cycle accelerates. The authors describe “Ghost GDP” — output that shows up in national accounts but never circulates through the real economy — and an “Intelligence Displacement Spiral” in which productivity gains flow entirely to capital owners while the consumer economy withers. Agentic AI tools also dismantle businesses built on human friction, from SaaS platforms to travel booking to real estate commissions, as machines optimize away the inefficiencies that entire industries monetized.
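Mechanically, the spiral the authors describe is a positive feedback loop, and the loop structure explains why it accelerates rather than stabilizes. As a toy sketch only (every number below is invented for illustration and is mine, not the article's):

```python
# Toy model of the "Intelligence Displacement Spiral": AI layoffs cut
# spending, and weak demand triggers more AI-driven layoffs. Every
# parameter is invented for illustration, not taken from the piece.
employment = 1.00          # fraction of white-collar workers still employed
base_layoff_rate = 0.02    # quarterly AI-driven displacement, demand aside
demand_sensitivity = 0.5   # extra layoffs per unit of spending shortfall

for quarter in range(1, 9):
    spending = employment                # displaced workers stop spending
    demand_gap = 1.0 - spending          # shortfall vs. full employment
    layoff_rate = base_layoff_rate + demand_sensitivity * demand_gap
    employment *= 1.0 - layoff_rate
    print(f"Q{quarter}: employment {employment:.1%}, layoffs {layoff_rate:.1%}")
```

Run it and the layoff rate climbs every quarter, because each round of displacement widens the demand gap that drives the next round.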
The fallout in the scenario cascades from sector-specific pain into systemic risk. Private credit markets, bloated with PE-backed software deals underwritten on assumptions of perpetual recurring revenue, begin defaulting. Life insurers that funded these deals using household annuity savings face regulatory pressure, exposing a web of opaque offshore structures. Most ominously, the $13 trillion mortgage market starts cracking — not because borrowers were subprime, but because prime borrowers with 780 credit scores see their incomes structurally impaired by AI displacement. The government struggles to respond, caught between falling tax receipts (since fewer people earn high wages) and rising demands for transfers, all while political factions argue over how to fund a rescue. The piece ends by pulling back the curtain: you’re actually reading this in February 2026, the S&P is near highs, and the feedback loops haven’t started yet.
It’s important to stress that this is entirely fictional — the authors call it “a scenario, not a prediction” and acknowledge that some of what they describe won’t happen. But it isn’t implausible either. The scenario is built from dynamics already visible in early form: AI tools are genuinely improving fast, companies are using them to reduce headcount, SaaS pricing is under pressure, and the question of what happens to consumer spending when white-collar jobs shrink is real. The value of the piece lies not in treating it as a forecast but in using it as a stress test — asking how much of the financial system, from mortgage underwriting to government revenue to private credit, depends on the assumption that human intelligence stays scarce and well-compensated. Even if the full doomsday chain never materializes, individual links in it are already emerging, and the exercise is meant to sharpen thinking about those risks while there’s still time to prepare.
AGI
AGI is not a bright line—it is not a moment that arrives on a Tuesday and changes everything overnight. But as a benchmark, it remains useful precisely because it captures where the trajectory is heading and, just as importantly, where the consensus about that trajectory is shifting.
The most telling indicator is not whether any single system has crossed the threshold of human-level intelligence but how rapidly the number of serious researchers and institutions willing to say "we are close" has grown. That shift in expert judgment—from dismissal to debate to expectation—matters more than any one definition.
As mentioned last week, there is a new article in Nature that claims we have reached AGI. One of the authors, Dr. Mikhail Belkin, was on the Signal Front podcast yesterday and made a strong case that the models can generalize and have at least a basic understanding of what they are doing.
He said the latter does raise questions about how we treat AIs, but that answering those questions is above his pay grade. For more on the case for AI consciousness, see SignalFront.org.
Similarly, on LessWrong, Gordon Seidoh Worley claimed “AGI is here.”
He does concede that we may not be there yet on #4, but that we are at least close.
Sam Altman says AGI is “pretty close” and superintelligence “not that far off.”
In schools, we still talk about AI literacy and basic chatbot use, but are we prepared to start talking to students about how AIs may possess one of the key aspects of human intelligence and may be, at least to a degree, self-aware? It just seems that we (education and the world at large) are not prepared for what we already have, let alone what is being used internally and has not yet been released.
It does seem smart enough to do your coursework.
Productivity
As the models get smarter, it stands to reason that productivity increases.
In the absence of AI access, higher-education participants outperform lower-education participants by 0.548 standard deviations; with AI access, this gap falls to 0.139 standard deviations, implying that generative AI closes about three quarters of the initial productivity gap.
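The “three quarters” figure is just the relative shrinkage of the gap, which is easy to verify:

```python
# Back-of-the-envelope check of the "three quarters" claim:
# how much of the education gap disappears once AI access is added.
gap_without_ai = 0.548  # standard deviations
gap_with_ai = 0.139

share_closed = (gap_without_ai - gap_with_ai) / gap_without_ai
print(f"Share of the gap closed by AI access: {share_closed:.1%}")  # ~74.6%
```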
Coding
Everyone is impressed by the current coding models. Claude Code can, in some cases, code for approximately 14 hours without human intervention.
But current coding models are supposedly nothing compared to what is coming (Tibo is part of the OpenAI Codex team).
Software
Claude Code has sent software stocks plummeting, as companies can now build their own software (some consider this an overreaction). Now Anthropic’s cybersecurity tool is doing the same to the stocks of cybersecurity companies.
Math
Math advances continue, with Gemini 3.1 Pro solving a FrontierMath Tier 4 problem.
For more on math, see Mathematics in the Library of Babel.
How “AI Fluent” Are We?
Anthropic recently introduced what it calls an “AI Fluency Index,” built from an analysis of nearly 10,000 anonymized Claude conversations from January. The central finding is counterintuitive: polished AI outputs actually seem to make people less careful, not more.
When Claude produced tangible deliverables — working apps, formatted documents, interactive tools — users gave clearer instructions upfront but then largely stopped scrutinizing what came back. Fact-checking dropped by 3.7 percentage points, questioning of reasoning fell by 3.1 points, and flagging of missing context declined by 5.2 points compared to ordinary conversations. In short, the better the output looked, the more users seemed to trust it at face value.
There’s a flip side, though. Among the 85.7% of conversations that involved iterative back-and-forth, engagement was dramatically higher. Users who refined their prompts over multiple turns questioned Claude’s reasoning more than five times as often and caught gaps in context four times more frequently than those who accepted early results.
Anthropic suggests a few explanations for the polish-trust tradeoff. A finished-looking result may simply signal “done” to users, short-circuiting the impulse to evaluate. It’s also possible that for tasks like UI design or app prototyping, visual quality matters more than factual precision — or that users are doing their verification elsewhere, testing code in a separate environment rather than interrogating Claude in the chat.
Integration
The technology is obviously advancing faster than adoption and integration.
Legacy companies are still struggling to integrate generative AI, a point Gary Marcus celebrated.
To speed adoption, OpenAI has launched a new partner program called “Frontier Alliances,” which brings the company’s recently introduced Frontier platform to large enterprise customers.
If legacy businesses fail to properly integrate AI, the more likely outcome is that they lose out to AI-native companies, not that AI collapses.
Privacy
From just a handful of comments, LLMs can infer where you live, what you do, and what your interests are, and then search for you on the web.
Robotics
Robotics continues to accelerate.
Nvidia’s AI research team has released DreamDojo, an open-source, interactive world model for robotics.
It takes robot motor controls and generates a simulated future in pixels; no engine, no meshes, no hand-authored dynamics are required. Jim Fan, Director of AI and Distinguished Scientist at Nvidia, calls it “Simulation 2.0.”
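To make “motor controls in, pixels out” concrete, here is a minimal sketch of the general shape of an action-conditioned world model rollout. To be clear, this is not DreamDojo’s actual API; the class and method names below are hypothetical placeholders, and the model stub does no learning:

```python
import numpy as np

# Hypothetical interface for an action-conditioned world model.
# NOT DreamDojo's real API; names and shapes are illustrative only.
class WorldModel:
    def predict_next_frame(self, frame: np.ndarray, action: np.ndarray) -> np.ndarray:
        """Given the current camera frame and a motor command, return the
        predicted next frame. A real system would run a learned video model
        here; this stub just echoes the input frame."""
        return frame

def rollout(model: WorldModel, frame: np.ndarray, actions: list[np.ndarray]) -> list[np.ndarray]:
    """Autoregressively 'dream' a future: feed each motor command into the
    model and chain the predicted frames. No physics engine is involved."""
    frames = [frame]
    for action in actions:
        frame = model.predict_next_frame(frame, action)
        frames.append(frame)
    return frames

# Example: ten steps of a 7-DoF arm command, starting from a 256x256 RGB frame.
start = np.zeros((256, 256, 3), dtype=np.uint8)
plan = [np.random.uniform(-1, 1, size=7) for _ in range(10)]
dreamed = rollout(WorldModel(), start, plan)
print(len(dreamed), "frames, each with shape", dreamed[0].shape)
```

The point of the sketch is what is absent: the “simulator” is just a learned model conditioned on actions, with no engine, meshes, or hand-authored dynamics anywhere in the loop.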
Figure’s robots are running 24/7.
The future of drone warfare is not limited to the skies.
What to Study?
It’s a confusing mess out there, and leaders don’t have a lot of advice.
Uber CEO Dara Khosrowshahi says we have time to adapt but thinks unemployment will spike in 5-10 years. His simple advice: “work hard.”
Daniela Amodei, Co-founder and President of Anthropic, suggests studying the Humanities.
Her brother, Dario Amodei, co-founder and CEO, suggests focusing on character education.
Personally, I think there will be plenty of opportunities for those who work hard, learn how to use the tools, and apply AI.
And then there is the idea I keep coming back to: judgement (I stole its importance from Tim Dasey). How do we cultivate that?