Altman's Dismissal Hints at Breakthroughs Sutskever Wants Revealed
We need to start preparing our curriculum, our students, our families, and our communities for more advanced AI capabilities.
Update: Altman may be coming back :). Even if they do get him to come back by the time you open this, you should still read it, because I think it is evidence of how rapidly AI is developing and it might help you understand AGI a bit better than you already do. There is a discussion of the implications for education at the end.
TL;DR
* OpenAI’s decision to dismiss Altman and reassign Brockman’s role could be attributed to concerns from Sutskever (one of the world’s leading AI scientists, who pushed for the ouster) that their approach to commercial AI deployment, especially the forthcoming GPT store, was too aggressive relative to existing safety measures, and because they were close to having (or already had) AIs that could do everyone’s jobs, even if those AIs did not have human-level intelligence.
* Alternatively (or additionally), it might have been because they reached a significant AI milestone but chose not to publicize it due to monetary considerations. Revealing such advancements could jeopardize the partnership with Microsoft, as the partnership does not include licensing AGI systems, but not revealing it could contradict OpenAI’s Charter, which mandates the sharing of AGI benefits universally and safely because the technology has the potential to automate all forms of employment. [Note: I do not think OpenAI has achieved AGI, but the new capabilities may mean they are getting closer faster than expected. If they have autonomous AI (see below and what Sam claimed was coming at the developer’s conference), they could have autonomy without alignment, which is dangerous.]
* There is a chance we are rapidly moving closer to a world where machines can do much of what humans can do at work (all jobs), given recent, barely disclosed advances by OpenAI over the last few weeks and Sutskever’s concerns. This holds regardless of what the fight is actually about. Doing everyone’s jobs may not be “in the interest of humanity” if the technology is controlled by a private, profit-driven corporation.
* All schools (not just those that are emerging as leaders) need to implement AI literacy programs to help students navigate this world, and they should prepare to adjust learning and assessment.
* At the end of this post, I have included part of a slightly modified video I made and played at a training session I did with @Sabba Quidwai for school district leaders on Friday, to help them think about what type of world they need to plan for. It addresses how AI will advance and how it is changing the world of work, even without these unanticipated advances.
____
Late Friday, OpenAI announced that Sam Altman had been fired because he “was not consistently candid in his communications” with the board.
They didn’t say what he was not candid about, and some have speculated that he was not candid about starting another venture, but I think the best explanation is that Sutskever pushed Altman and Brockman out over AI safety concerns and/or because they wouldn’t tell the board and the world about significant advances they have made.
These are the key reasons I believe this is the case.
There have been very significant AI developments.
Last Thursday, the day before he was fired, Altman spoke at the APEC Summit and said a major advance had been made in the last couple of weeks.
He continued to explain how it will produce new models.
Are the advances he’s referring to constitutive of AGI? We don’t know, and it’s probably not AGI, but they are very significant, regardless of how they are defined.
What has Altman done since early November and before he was fired?
Announced the development of GPT-5 (November 16).
Sought funding from Microsoft to build “superintelligence” (November 13), noting that “there’s a long way to go, and a lot of compute to build out between here and AGI . . . training expenses are just huge.” So, he wants more funding to support the new dramatic advances and doesn’t think (or at least isn’t saying) that those advances are AGI or will help us get there much faster.
OpenAI announced how it will decide when AGI has been achieved (November 13).
Sam and Greg went on a massive fundraising adventure (November 7-9).
GPT will, according to Sam (see the video at the end of this post), include autonomous agents (November 6).
Ilya told Sam he had confirmed this great level of autonomy, without the safety team yet having figured out how to get it to love humanity (November 4).
Sam witnessed the breakthrough referenced in the first video; some speculate it was the degree of autonomy it developed or an approximation of AGI (November 2).
Altman, Brockman, and Sutskever were the three cofounders on the six-member board.
Two of the other board members (McCauley and Toner) are focused on the safety of AI (“McCauley and Ms. Toner have ties to the Rationalist and Effective Altruist movements, a community that is deeply concerned that A.I. could one day destroy humanity” (New York Times)). The third (D’Angelo) supports OpenAI because it is a non-profit, so Altman and Brockman’s drive to (arguably) focus more on profit and commercial interests may have alienated him. Since there were six members and we know Altman and Brockman were against being demoted (Brockman) and fired (Altman), these three plus Sutskever would have had to vote to remove Altman.
Sutskever informed Altman that he was fired, and 19 minutes later he informed Brockman (who later quit) that Brockman was no longer the President of the company and Chair of the Board and that he would report to the new CEO, Mira Murati.
Sutskever is very concerned about the arrival of AGI and believes that the chance of it coming soon is high enough that we need to plan for it (based on an interview he did…)
And he expects significant developments in 2024. He said on November 4 that the models will have “new and unprecedentedly valuable applications,” and he gave the examples of “producing high-quality legal advice,” doing homework, completing taxes, and offering reliable financial advice. “Reliability” will grow, and we’ll have systems “completing some large and complicated tasks.” Regardless of AGI, this means AI can do most people’s jobs.
He is also very concerned about “AI Safety,” which is focused on preventing AI from doing bad things to humans if it starts to act autonomously.
“It doesn’t seem at all implausible that we will have computers — data centers — that are much smarter than people… What would such A.I.s do? I don’t know…” [November 2 Podcast]
Sutskever fears AIs will treat us the way we treat animals: they won’t intentionally try to harm us, but if we get in their way, they will (just as when we build highways, we don’t care if animals die).
That's why he's been spending time trying to get the models to treat us nicely. He was already diverting 20% of his time and OpenAI resources to issues of “alignment” (how to make AI safe so AI doesn’t harm humans).
There are commercial implications for how AGI is defined and when it is achieved due to Microsoft’s licensing agreements.
The Microsoft deal only applies to pre-AGI tech: “we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them” (OpenAI). Their six-member board (which is unlikely to survive the Altman firing) will determine if the company achieves AGI. This is significant because, while Altman is seeking more money from Microsoft, once they achieve AGI they don’t have to share the AGI-level technology with Microsoft. Of course, they could choose to do so, but it’s supposed to be used for the “benefit of all humanity,” which it may not be if it is given to a company, especially a single company.
So, imagine that we are close to AGI and Sutskever has both safety concerns and concerns that it will not be disclosed and used to benefit all humanity. Altman and Brockman do not want to say OpenAI has AGI, or is close to it, in order to keep Microsoft investing and to develop commercial applications rather than those that benefit all of humanity. If Altman and Brockman did not share the potential significance of these developments with the three non-employee board members, those members would think he had not been “consistently candid.”
Now, there could also be an honest disagreement. Sutskever may think OpenAI is close to achieving AGI, and Altman and Brockman may not. There is an internal division in the AI community as to whether the large language models that power current developments can get us to AGI. Some, such as Sutskever, believe there is a good chance that they can; others, now potentially including Altman, think we’ll have to push beyond current LLMs to get to AGI.
There are two other leading explanations, and neither makes much sense.
Altman and Brockman were starting their own competitor company. I think this is the most popular competing theory to what I suggest, but it has a huge hole: if the board thought they were starting a competitor, why didn’t they fire Brockman too instead of just demoting him? (And they were clearly concerned about him, or they wouldn’t have demoted him.)
Altman did bad stuff. This doesn’t make sense because Brockman was also pushed out at the same time (he was significantly demoted). And now they are in discussions to bring Altman back…
What motivated Sutskever to do this had to be significant. He didn’t do it because of hallucinations, the inability to completely eliminate bias, the fact that the models use a lot of energy, because he doesn’t like capitalism, or because kids might learn from bots instead of humans. He did it because he thought the rapid development of AGI without adequate safeguards threatened humanity itself and/or because he thought that, if they had achieved it, they should disclose it. He thinks these models are quite the opposite of stochastic parrots (on at least two occasions, he said they may be “slightly” conscious).
He didn’t blow up OpenAI and fire a close friend with whom he was pursuing a passion project because he thought the technology was too weak; he did it because he thought it was too strong given the current safety measures. It seems he was willing to risk the collapse of OpenAI and end significant friendships to avoid greater dangers, which means he thinks this is a serious threat. [If you started a company with a friend based on a shared passion and its value went from $0 to $90 billion, what would it take for you to fire your friend?]
___
What is AGI, and why is it so significant?
AGI refers to Artificial General Intelligence, an AI model or combination of models (maybe “multiple agents”; this is probably where things are headed) that was originally defined as “systems that are generally smarter than humans” (Altman/OpenAI). Subsequently, Altman clarified the definition to be the “equivalent of a median human that you could hire as a co-worker” (ibid.). Ilya Sutskever defined it as “a system that does any job or any task a human does, but only better.”
In order for it to do that, it doesn’t necessarily have to have all the intellectual abilities of humans; it just has to be able to do their jobs. It needs those capabilities, not necessarily the ability to think like humans.
So, it’s possible that we could be close to that, and that alone would be tremendously economically disruptive, and arguably not in the interest of humanity. And if it can do any job, it may be able to act autonomously. If it can act autonomously, it could do bad things (especially if the safety controls cannot keep pace with it). And if it could do all those things, perhaps it could train itself to do more (especially without controls).
Anyhow, you can at least imagine the concerns.
Do I think we are close to AGI?
I don’t think I’m qualified to say. Most (maybe all, or at least nearly all) leading AI scientists say 5-20 years, and they point out that the timeframe keeps shrinking. Based on some definitions, and on what Altman and Sutskever have said about the models recently, we could be close (the potential disagreement above aside), depending on how AGI is defined.
Either way, there are more and more developments that will significantly impact education and society, whether or not we achieve AGI soon.
Should Sutskever be ignored?
There is some temptation to ignore Sutskever, to paint him as some “mad scientist” or a self-interested individual who tried to pull off a coup, but, as Lex Fridman, the popular AI scientist at MIT, notes:
Ilya Sutskever is the co-founder of OpenAI, is one of the most cited computer scientists in history with over 165,000 citations, and to me, is one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic.
A recent article in Fast Company adds:
“I remember Sam [Altman] referring to Ilya as one of the most respected researchers in the world,” Dalton Caldwell, managing director of investments at Y Combinator, said in an interview for a story about Sutskever with MIT Technology Review that was published just last month. “He thought that Ilya would be able to attract a lot of top AI talent. He even mentioned that Yoshua Bengio, one of the world’s top AI experts, believed that it would be unlikely to find a better candidate than Ilya to be OpenAI’s lead scientist.” OpenAI cofounder Elon Musk has called Sutskever the “linchpin” to OpenAI’s success.
What does AGI mean for education?
What do students need to learn in school?
In a world where AI can do most people’s jobs, that will change.
Instruction and Assessment
Even if AGI is not achieved, continued AI developments will have a big impact on education, but when students have autonomous agents that function as individual tutors, education will change in a big way. This includes not just what and how students are taught, but also the values and skills we attempt to cultivate.
We outline this in our paper, but Professor Michael Sankey also has some quick slides, and I’ve included a few samples for you to think about.
Jobs
I do not see schools replacing teachers with AIs, but AI-driven instruction will make distributed, non-school learning more possible and fewer students may choose to attend brick & mortar schools.
And the role of the teacher will certainly change to be more of a “guide on the side.”
We unpack this more in our paper.
AI Literacy
As I have stated many times, it is very important that students and faculty understand these developments, so AI literacy is essential.
What should educational administrators plan for?
Six months ago, I wrote a blog post about how any 3-5 year plans educational institutions have need to be torn up because they don’t assume an AI world, and that is becoming more and more true every day.