Are You Preparing Your Students for an AGI World?
It may be here faster than you think. Are you ready for many of your skills to become irrelevant?
I haven’t had time to blog about everything I want to, so today I’ll share portions of a new Kevin Roose article over at the New York Times that I entirely agree with.
I encourage you to read the whole article.
Powerful A.I. Is Coming. We’re Not Ready.
Three arguments for taking progress toward artificial general intelligence, or A.G.I., more seriously — whether you’re an optimist or a pessimist.
Here are some things I believe about artificial intelligence:
I believe that over the past several years, A.I. systems have started surpassing humans in a number of domains — math, coding and medical diagnosis, just to name a few — and that they’re getting better every day.
I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will claim they’ve created an artificial general intelligence, or A.G.I., which is usually defined as something like “a general-purpose A.I. system that can do almost all cognitive tasks a human can do.”
I believe that when A.G.I. is announced, there will be debates over definitions and arguments about whether or not it counts as “real” A.G.I., but that these mostly won’t matter, because the broader point — that we are losing our monopoly on human-level intelligence, and transitioning to a world with very powerful A.I. systems in it — will be true.
I believe that over the next decade, powerful A.I. will generate trillions of dollars in economic value and tilt the balance of political and military power toward the nations that control it — and that most governments and big corporations already view this as obvious, as evidenced by the huge sums of money they’re spending to get there first.
I believe that most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones, and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.
I believe that the right time to start preparing for A.G.I. is now.
…
In San Francisco, where I’m based, the idea of A.G.I. isn’t fringe or exotic. People here talk about “feeling the A.G.I.,” and building smarter-than-human A.I. systems has become the explicit goal of some of Silicon Valley’s biggest companies. Every week, I meet engineers and entrepreneurs working on A.I. who tell me that change — big change, world-shaking change, the kind of transformation we’ve never seen before — is just around the corner.
“Over the past year or two, what used to be called ‘short timelines’ (thinking that A.G.I. would probably be built this decade) has become a near-consensus,” Miles Brundage, an independent A.I. policy researcher who left OpenAI last year, told me recently.
…

But today, the people with the best information about A.I. progress — the people building powerful A.I., who have access to more-advanced systems than the general public sees — are telling us that big change is near. The leading A.I. companies are actively preparing for A.G.I.’s arrival, and are studying potentially scary properties of their models, such as whether they’re capable of scheming and deception, in anticipation of their becoming more capable and autonomous.
Sam Altman, the chief executive of OpenAI, has written that “systems that start to point to A.G.I. are coming into view.”
Demis Hassabis, the chief executive of Google DeepMind, has said A.G.I. is probably “three to five years away.”
Dario Amodei, the chief executive of Anthropic (who doesn’t like the term A.G.I. but agrees with the general principle), told me last month that he believed we were a year or two away from having “a very large number of A.I. systems that are much smarter than humans at almost everything.”
Maybe we should discount these predictions. After all, A.I. executives stand to profit from inflated A.G.I. hype, and might have incentives to exaggerate.
But lots of independent experts — including Geoffrey Hinton and Yoshua Bengio, two of the world’s most influential A.I. researchers, and Ben Buchanan, who was the Biden administration’s top A.I. expert — are saying similar things. So are a host of other prominent economists, mathematicians and national security officials.
To be fair, some experts doubt that A.G.I. is imminent. But even if you ignore everyone who works at A.I. companies, or has a vested stake in the outcome, there are still enough credible independent voices with short A.G.I. timelines that we should take them seriously.
…
Today’s A.I. models are much better. Now, specialized models are putting up medalist-level scores on the International Math Olympiad, and general-purpose models have gotten so good at complex problem solving that we’ve had to create new, harder tests to measure their capabilities. Hallucinations and factual mistakes still happen, but they’re rarer on newer models. And many businesses now trust A.I. models enough to build them into core, customer-facing functions.
As these tools improve, they are becoming useful for many kinds of white-collar knowledge work. My colleague Ezra Klein recently wrote that the outputs of ChatGPT’s Deep Research, a premium feature that produces complex analytical briefs, were “at least the median” of the human researchers he’d worked with.
If you really want to grasp how much better A.I. has gotten recently, talk to a programmer. A year or two ago, A.I. coding tools existed, but were aimed more at speeding up human coders than at replacing them. Today, software engineers tell me that A.I. does most of the actual coding for them, and that they increasingly feel that their job is to supervise the A.I. systems.
…
But even if A.G.I. arrives a decade later than I expect — in 2036, rather than 2026 — I believe we should start preparing for it now.
Most of the advice I’ve heard for how institutions should prepare for A.G.I. boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for A.I.-designed drugs, writing regulations to prevent the most serious A.I. harms, teaching A.I. literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills.
These are all sensible ideas, with or without A.G.I.

…
I don’t worry about individuals overpreparing for A.G.I., either. A bigger risk, I think, is that most people won’t realize that powerful A.I. is here until it’s staring them in the face — eliminating their job, ensnaring them in a scam, harming them or someone they love.
This is, roughly, what happened during the social media era, when we failed to recognize the risks of tools like Facebook and Twitter until they were too big and entrenched to change.