Tuesday's Ten for Educators
OpenAI; Smaller Models; RAG; Autonomous Agents; Image & Video Generation; Avatars; AI is a BFD: "sub-cat-level" AI outperforms humans at many tasks.
Want to read more? Our comprehensive report; debate and AI; our book
Note
I write Tuesday's Ten for both administrators and teachers. Some who support AI professional learning for teachers think teachers should only get the basics ("here are some tools you can use in your classroom"). I don't agree with that at all.
1. If teachers don't understand AI, its capabilities, and how it is impacting and will impact the world, then how can they make informed decisions about how to use it in the classroom?
2. Do people seriously think that teachers are only capable of using some magic tool to generate a lesson, produce a quiz, feed the quiz into the AI grader, and provide direct instruction to students? If so, it won’t be long before they are all replaced by AI, because AI can do all of those things already, and its ability to do so will dramatically improve.
3. If we want teachers to be “writers, coaches, dreamers, parental figures, public speakers, community leaders, impromptu artists, problem-solvers, strategizers, counselors, advocates, and designers” (Gulya), they need to understand the AI World.
4. Teachers are citizens and live in the world, just like the students we think need AI literacy. If people as intelligent as teachers can't understand AI, society in an AI world is going to be in trouble.
Anyhow…
___
This is the second edition of AI developments and news stories that I think are important for educators to pay attention to.
Sorry, but you have to read this piece from Education Week.
OpenAI (ChatGPT)/Sam Altman. The biggest news story in AI since last Friday is the firing of Sam Altman as the CEO of OpenAI, the creator of ChatGPT. No one has any idea why he was fired (there is speculation that his board felt he wasn’t taking AI safety seriously, that he told them inconsistent things about personnel, and that one of his board members had a competitive product). We may never know, and maybe there isn’t a good reason, but it appears that he will either return as CEO or take all of the employees with him to Microsoft and run a new research division there. Microsoft is the largest investor in OpenAI and has IP rights to all or nearly all of the technology, so even if he leaves, there will likely be minimal disruption.
Regardless, this has led many to question the desirability of relying on a single language model or other AI operating system in enterprise solutions, and we will probably see the emergence of more providers that can simply switch out language models, especially open-source models, for schools and companies when needed. While OpenAI's model certainly sits at the top of even the "frontier" models, most users, especially K–12 schools and smaller businesses, do not need to run their commonly used bot applications on ChatGPT4+. Providers will soon be stepping up to offer these types of solutions to schools.
And even if OpenAI disappears, everything you've learned about prompting and image generation transfers easily to other tools (Claude, Perplexity, etc.).
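To make the model-swapping idea concrete, here is a minimal sketch of what a swap-friendly design could look like. Everything in it (the class names, the placeholder models) is hypothetical; a real provider would wrap actual vendor SDKs behind the same interface.

```python
# A minimal sketch of the "swappable model" idea. The adapters below are
# hypothetical placeholders; real deployments would wrap vendor SDKs
# (OpenAI, Anthropic, a local open-source model, etc.) behind one interface.
from typing import Protocol


class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class HostedModel:
    """Placeholder for a commercial, API-backed frontier model."""

    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name} would answer: {prompt!r}]"


class LocalOpenSourceModel:
    """Placeholder for a small open-source model run on school hardware."""

    def complete(self, prompt: str) -> str:
        return f"[local model would answer: {prompt!r}]"


def tutor_reply(model: ChatModel, question: str) -> str:
    # The application code never names a specific vendor, so a district
    # can swap providers without rewriting its tools.
    return model.complete(f"You are a patient tutor. {question}")


print(tutor_reply(HostedModel("frontier-model"), "Explain fractions."))
print(tutor_reply(LocalOpenSourceModel(), "Explain fractions."))
```

The point of the design is simple: if the application only ever talks to the `ChatModel` interface, last week's boardroom drama becomes a configuration change rather than a rebuild.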
Smaller models. Over the last few days, we've seen the release of much smaller models with capabilities similar to the larger, frontier models (Microsoft's Orca; Yi). This both dramatically reduces the cost of running models (making things more affordable for schools) and reduces the environmental impact. Many models are still being trained to be larger, since scaling up data and compute continues to advance the models, but we are also seeing smaller models emerge with incredible capabilities.
RAG. There continue to be significant developments in Retrieval-Augmented Generation (RAG) by many companies and research institutions. Some speculate that this was the significant development Altman was referring to when he talked about new breakthroughs at the APEC summit (see the first two short videos in my last post).
RAG is a technique that enhances language models by retrieving relevant information from a database to provide context for generating responses. This additional context helps the model to ground its answers in real-world knowledge, thereby reducing the likelihood of producing "hallucinated" or factually incorrect information. By anchoring the model's output to actual data, RAG improves the accuracy and reliability of the generated content.
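Here is a toy sketch of the RAG pattern just described. The three "documents" and the keyword-overlap scoring are stand-ins; production systems use vector embeddings and a real vector database, and the assembled prompt would be sent to a language model.

```python
# A minimal RAG sketch with a toy in-memory "database" and naive
# keyword retrieval (a stand-in for embedding similarity search).
documents = [
    "The Treaty of Paris, signed in 1783, ended the American Revolutionary War.",
    "George Washington was inaugurated as the first U.S. president in 1789.",
    "Photosynthesis converts light energy into chemical energy in plants.",
]


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Score each document by how many query words it shares, keep the top k.
    words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, documents))
    # Grounding the model in retrieved text is what reduces hallucination:
    # the model answers from the supplied context, not from memory alone.
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )


print(build_prompt("When was George Washington inaugurated as president?"))
```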
Autonomous Agents. Autonomous agents are AIs that can accomplish a task when given a goal ("Help Johnny master the math in Unit 5"). Once given the goal, the AI agent figures out how to accomplish the task using different tools, potentially cooperating with other agents ("agent swarms") to reach the goal. Some speculate that around November 2 (see this post for the reference), OpenAI figured out how to "make" useful autonomous agents. Altman did refer to autonomous agents at the DevDay conference, and perhaps that is what spooked the board and triggered his firing. Regardless, powerful agents are coming (Mira Murati is the CTO of OpenAI).
The idea of releasing autonomous agents in the ChatGPT "store,” given that we have not yet figured out how to align AI with human interests, could have spooked the board.
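To show the basic loop behind an agent, here is a deliberately tiny sketch. The "planner" is a hard-coded stub standing in for a language model that decides the next tool call; the tool names and the Johnny scenario are invented for illustration, not any vendor's actual agent framework.

```python
# A minimal agent loop: given a goal, repeatedly pick a tool, act,
# and observe the result until the goal is judged complete.
def quiz_student(topic: str) -> str:
    return f"Johnny scored 60% on a {topic} diagnostic."


def explain(topic: str) -> str:
    return f"Generated a step-by-step explanation of {topic}."


TOOLS = {"quiz_student": quiz_student, "explain": explain}


def plan_next_step(goal: str, history: list[str]) -> "tuple[str, str] | None":
    # Stand-in for the model's reasoning: diagnose first, then teach,
    # then declare the goal complete. A real agent would decide this
    # dynamically from the goal and the observations so far.
    if not history:
        return ("quiz_student", "Unit 5 math")
    if len(history) == 1:
        return ("explain", "Unit 5 math")
    return None


def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while (step := plan_next_step(goal, history)) is not None:
        tool, arg = step
        history.append(TOOLS[tool](arg))  # act, then record the observation
    return history


for event in run_agent("Help Johnny master the math in Unit 5"):
    print(event)
```

Swap the stub planner for a language model choosing among real tools (quiz generators, gradebooks, messaging), and you have the shape of the tutoring agents described above, along with the alignment worries.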
Trend
So, yes, we are seeing the emergence of smaller, more affordable models that will be able to provide very accurate knowledge using RAG and other tools (fine-tuning, RLHF, RLAIF, and even debate) to radically reduce, if not entirely eliminate, hallucinations in school instructional material.
Providers will be able to deliver these services in environmentally friendly, low-cost ways to schools, learning networks (home school associations, micro school networks, etc.), and individuals who want to purchase them, and they will work.
The future of AI in education is not your experience with the free version of ChatGPT.
Will it replace teachers? If we focus our AI professional learning on teachers using magic tools to simply improve content instruction and make fancy images, it will; AIs can do those things and are getting better at them every day. If we focus on teachers as writers, coaches, dreamers, parental figures, public speakers, community leaders, impromptu artists, problem-solvers, strategizers, counselors, advocates, and designers, it won't.
Art. There are always new developments in image generation that interest educators. I'm not sure that everyone will love history class now that they get to talk to a virtual George Washington, but some will, and it's always good to spruce up lesson plans and the presentations on our new Promethean boards.
Meta (Facebook) has released new tools for text-to-video and text-based editing of images. The need to coax the best image out of a single, perfectly crafted prompt will soon be gone.
Bad jokes about virtual George Washington aside, image generation is useful for product design ideas in business courses, student presentations on a variety of topics, school promotional flyers, representing scientific concepts, visualizing new ideas, prompting creative writing, and teaching vocabulary. I’ll unpack some more ideas in future posts.
Before I move on from images, I thought it would be cool to demonstrate how AIs such as ChatGPT4 expand prompts to generate images. Look what it does to my 5-word prompt: "image for being close to AGI."
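If you want to see that expansion directly, here is a sketch using the openai Python SDK (the v1-style client). In ChatGPT the rewriting happens behind the scenes; through the API, DALL-E 3 returns the expanded prompt it actually used. This assumes an `OPENAI_API_KEY` in your environment and reflects my understanding of the current API.

```python
# Sketch: send the same 5-word prompt to DALL-E 3 and print the
# detailed prompt the model expanded it into before generating.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.images.generate(
    model="dall-e-3",
    prompt="image for being close to AGI",
    size="1024x1024",
    n=1,
)
# DALL-E 3 rewrites short prompts into a full scene description;
# the API returns that expansion alongside the image URL.
print(response.data[0].revised_prompt)
print(response.data[0].url)
```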
As an addition to the trend notes above, the new AI tutors are not just going to be accurate but also visually interesting, not only with text but also with video.
Conversational Avatars. You can create and add conversational avatars with Microsoft's new text-to-speech avatar technology in Azure AI Speech.
Higher-order thinking question: How does the ability to create avatars in minutes impact the “trends” discussion?
Music generation. If your class is tired of your playlist, you can create some of your own music with DeepMind's (part of Google) music generator.
Policies. There was a big push over the summer for schools to develop "policies" that regulated in great detail how AI could and could not be used in the classroom. I was always skeptical of such policies because AI is literally everywhere, its use is largely undetectable, and the rate of change piles on top of that. Instead, I argued for a few rules (teachers should not put personally identifiable information into AI systems that aren't approved enterprise solutions, the human-in-the-loop should have the final say, and AIs should not be used to determine if a student cheated), plus guidelines and norms for how AI should be used, grounded in discussion (see our paper). More and more professional learning providers have moved in this direction, and we now see more articles advocating this approach.
Intelligence leveling. A widely cited study of Boston Consulting Group consultants shows that lower-skilled consultants who use AI see much larger performance gains than their higher-skilled colleagues do, narrowing the gap between them.
Applied to education, we can see a world where "less intelligent" students (poorer writing skills, ELL learners, students who can't or don't memorize as much material, students who do not see as many patterns or relationships, students who don't see things as quickly, students who are less creative, maybe have lower IQs) are going to be nearly as capable, or just as capable, at school and work as those who have these superior intelligence capabilities.
Currently, you can generally get paid more if your biological brain is “more intelligent,” but when everyone has a “silicon brain” to “extend their mind,” then any natural advantage you have in biological intelligence may not be relevant.
If and when that happens, how will we distinguish ourselves? In the past, we were distinguished by our physical strength, and now it is our intelligence. Intelligence distinctions may soon no longer be meaningful. How will we distinguish ourselves then? Hopefully, the most caring, kind, and committed individuals will be the ones who get the 2,000 slots at Harvard. And if that happens, it would be fine if 1,000 of those are legacy admissions. :)
AI is a BFD. Pardon my French, but I keep struggling to communicate that AI is a BFD. As an educator, just think about how much an intelligent, autonomous bot that doesn’t hallucinate and that can individualize instruction will change education. Now think about how an autonomous, intelligent bot that doesn’t hallucinate will impact other industries.
Some say, well, AI can do a lot of things, but it isn't that smart (it can get a 1410 on the SAT, 5s on most AP tests, and pass most professional licensing exams, but it isn't smarter than a cat or dog, per Yann LeCun). LeCun is right; it doesn't have many human intelligence capabilities (it lacks common sense, it can't reason from one domain to the next, it can't engage in high-level reasoning such as hypothesis generation, and it isn't sentient or conscious), but if it can do what we do well, what does it matter? It will still change the world.
This is reflected in OpenAI's definition of Artificial General Intelligence (AGI): highly autonomous systems that "outperform humans at most economically valuable work."
Here are some examples of what it can do even if it is less intelligent than a cat, a dog, or even an ant.
Educators need to start preparing students for a world of work and entrepreneurship where they spend their days working with AIs that can do many incredible things, regardless of how smart they are.
Academic abilities
Some of what it can do in practice:
Would you rather have an employee with all human intelligence capabilities or one who is chicken-level and can improve customer satisfaction and dramatically lower your costs?
And it’s producing major advances in science.