Our Students Will Soon Use Powerful AGI-Like Systems
Some teenagers are already using AI to change the world
Powerful AGI-Like Systems
Ask five AI leaders what artificial general intelligence means and you’ll get five different timelines and yardsticks. Anthropic’s Dario Amodei treats AGI as a pragmatic threshold: systems that can match top‑quartile human performance across most cognitive tasks, a bar he thinks we might clear as soon as 2026. He prefers the term “powerful AI.”
Meta’s Yann LeCun, by contrast, argues we’re still “below cat‑level intelligence” on reasoning and world‑modeling, so he prefers to drop the buzzword altogether and talk instead about building new architectures that can learn and plan in the physical world. Even so, he puts an AGI‑like system (what he calls “Advanced Machine Intelligence”) 3-10 years out. DeepMind’s Demis Hassabis says 3-10 years. Ben Goertzel says around 2028. Ray Kurzweil says 2029.
Whether you side with Amodei’s optimism or LeCun’s caution, one thing is clear: every quarter brings systems that blow past last year’s benchmarks. “AGI” may be fuzzy, but “increasingly powerful” is an empirical fact.
And you don’t need a full‑blown AGI to upend institutions. Elon Musk’s Department of Government Efficiency (DOGE) has already shown how far today’s models can reach when paired with privileged data:
Surveillance at scale. Reports allege DOGE is deploying generative‑AI tools to monitor federal workers’ emails for political “disloyalty.”
Core‑system takeovers. Whistle‑blowers say the team is rewriting Social Security and IRS tech stacks, siphoning citizen data, and even degrading benefit delivery during the transition.
Young talent with root access. One key engineer—19‑year‑old Edward Coristine, a former hacker known online as “Big Balls”—now holds credentials inside multiple agencies.
If a handful of coders barely out of high school can weaponize today’s models to “scoop data on all Americans,” imagine what savvy students can do once those same models run locally.
No Legal or Technical Limits
Unlike the EU’s AI Act or China’s fast‑gelling licensing regime, Washington still leans light‑touch. Last summer the NTIA told the White House there was “no need—for now—to restrict open‑source AI,” embracing freely downloadable model weights. With President Trump promising to scrap most Biden‑era safeguards, analysts expect even lighter federal oversight ahead.
Combine permissive policy with new hardware (Qualcomm’s Snapdragon 8 Elite smartphone chip is purpose‑built for running multimodal models on‑device), and by the time today’s freshmen are seniors, powerful, modifiable AI will fit in their pockets.
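To make “running locally” concrete, here is a minimal sketch of what that looks like in practice, assuming a student has installed the llama-cpp-python bindings and downloaded an open‑weight model file (the filename below is illustrative, not a specific recommendation): a few lines of Python, no cloud account, no usage logs, and no oversight beyond whatever the student chooses to add.

```python
# Minimal sketch: running an openly downloadable model entirely on a laptop or
# phone-class device via llama-cpp-python. The model filename is illustrative;
# any GGUF-format open-weight model is loaded the same way. Nothing here
# contacts a server.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-1b-instruct-q4_k_m.gguf",  # quantized open-weight model, downloaded once
    n_ctx=2048,                                      # modest context window for low-memory devices
)

response = llm(
    "Summarize the key arguments for and against open-weight AI models.",
    max_tokens=200,
)
print(response["choices"][0]["text"])
```

The point is not this particular library or model; it is that the entire workflow runs offline on consumer hardware, which is exactly what makes it both empowering and hard to supervise.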
Specifically, what can our students do with these models?
Create deepfake personas and synthetic identities that can bypass digital verification systems, enabling fraud across financial services, government benefits, and identity-dependent platforms.
Run small-scale influence campaigns by deploying networks of AI-powered social media accounts that generate authentic-seeming content to amplify specific viewpoints within targeted communities.
Automate harassment campaigns by generating personalized threatening content at scale against individuals, overwhelming traditional content moderation systems.
Write plagiarism-proof academic submissions that evade detection tools, undermining educational assessment integrity while appearing to be original work.
Generate counterfeit documents including fake medical records, credentials, or official communications that appear legitimate even under moderate scrutiny.
Quickly Entering the World of Work
Beyond using these systems at “home,” our skilled graduates can step directly into the workforce. Palantir just launched a Meritocracy Fellowship that skips college entirely. Score 1460+ on the SAT (or a 33 on the ACT), spend four paid months inside Palantir’s Gotham/Foundry engineering org, and you could walk out with a defense‑tech job before freshman orientation would have started.
Students can join Palantir right out of high school and position themselves to dominate.
And it’s not just about working directly for Palantir.
What Schools (and We) Must Do Now
Here are ten suggestions:
Ethics first, tech second. If a 17‑year‑old can mount an influence campaign or deploy a robotics fleet, moral reasoning must be taught alongside Python.
Simulation labs for tough calls. Give students role‑play scenarios—deploy the botnet or refuse the contract?—mirroring the dilemmas above.
Credential pathways. Partner with industry (yes, even Palantir) so that students see both the allure and the guardrails of high‑impact AI work.
Cross-disciplinary integration. Connect computer science with humanities, forcing students to consider social implications of technical decisions through joint projects.
Real-world impact assessment. Require students to evaluate potential consequences of their technical creations across diverse communities and contexts.
Digital citizenship curriculum. Develop comprehensive programs teaching responsible technology use, digital literacy, and online civic engagement.
Community tech councils. Create student-led committees to evaluate proposed tech implementations within the school environment.
Technical limitation awareness. Teach the boundaries and biases of AI systems alongside their capabilities to prevent over-reliance.
Global perspective workshops. Expose students to how emerging technologies affect different regions and socioeconomic groups worldwide.
Alumni mentorship networks. Connect students with graduates now working in tech ethics to provide real-world guidance and inspiration.
Conclusion
The race to AGI may take a decade—or arrive before the next graduation speech. Either way, pre‑AGI AI already grants teenagers an unprecedented lever over society. Our job as educators, parents, and citizens is to make sure they wield that lever with wisdom, not just skill.
Because when a 19‑year‑old can sit inside the nation’s most sensitive databases, the line between homework and history‑shaping action is thinner than ever.
As I’ve been saying for two years, this isn’t about writing papers with AI.