2024 AI Predictions: Material Progress with Societal and Educational Disruptions
We all know that 2023 was an aggressive year for technological change. We saw significant advances in AI, which grabbed most of the headlines, but also the approval of gene-editing therapies and advances in nanotechnology, quantum computing (with practical systems still expected to be a decade away), brain-machine interfaces, and easy-to-use synthetic biology applications.
These are my predictions for 2024, focused on AI’s likely technological advances, how AI will impact society, and how AI will shape the education sector.
My New Year’s AI Wishlist
Before we start, I offer my New Year’s AI wish: Please think through what the most advanced AI models/systems/tools can do and where the technology is likely headed. If your sense of AI’s limits is grounded in what GPT-3.5 (the November 2022 ChatGPT release) can do, you are setting your expectations based on very antiquated technology.
My Exponential Caveat
Given that the current rate of change is likely exponential, even conservative predictions are difficult.
My AGI Caveat
People are hinting at “AGI” in 2024. I don’t think we are close to that, and I don’t think any leading scientist believes we are going to see AGI, even under narrow definitions, in 2024. This is a good explanation as to why such predictions are likely off. That said, AI will rapidly impact our world in 2024 and into 2025.
2024 Technology Predictions
Continued significant improvements in existing systems. As discussed, we already have models that can “read,” write and generate knowledge-grounded text, see, speak, and generate art, images, and even videos from text (including text-to-3D), all in more than 100 languages. Significant advances will continue to be made in these areas.
Advances in image and video models will make it even easier to produce any desired images and videos from simple prompts.
Language models and new multimodal models will train on more and more text and video in specialized areas and non-English languages. Models that train on vision (image and video) and text combined will continue to advance through better algorithms and more extensive training data. This will enable more accurate and nuanced interpretation of visual information, which is crucial for applications ranging from medical diagnostics to autonomous vehicles, and will change how machines understand and interact with the physical world.
Growing abilities to “learn” from less training data, combined with fine-tuning through RLHF and RLAIF and with RAG-type systems, mean hallucination rates will continue to fall dramatically.
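Since RAG comes up here, a minimal sketch may help make the idea concrete: retrieve the passages most relevant to a question, then constrain the model to answer only from them. Everything below is illustrative rather than any particular product’s implementation; the retriever uses a toy word-overlap score instead of real embeddings, and call_llm() is a hypothetical placeholder for whichever hosted or local model you actually use.

```python
# Minimal sketch of the RAG idea: retrieve relevant passages, then ask the
# model to answer using only those passages, which is one reason grounded
# systems hallucinate less. Toy retriever; call_llm() is hypothetical.

def score(query: str, passage: str) -> int:
    """Toy relevance score: number of words the query and passage share."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real hosted or local model call."""
    raise NotImplementedError("Plug in your preferred model API here.")

def answer_with_rag(query: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using ONLY the context below. If the answer is not there, "
        f"say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```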
Language written and spoken by machines will sound more natural (“human-like”) as AI systems train on language that is contextually relevant, syntactically correct, and stylistically varied. This will enable AI to adapt its language to different purposes, tones, and audiences; to mimic different writing styles, such as persuasive, descriptive, or narrative; and to adjust its tone (formal, informal, humorous, or serious) depending on the context and the intended audience. AI systems will be able to communicate in these ways in at least 100 languages and adapt these nuances to different linguistic and cultural contexts.
Text-to-3D (NeRFs and Gaussian splats) will make it incredibly cheap and easy to build immersive environments (yes, the metaverse is back).
Reasoning and planning abilities/”Agents.” Rumors of “Q*” after the OpenAI management debacle brought the issue to the public’s attention, but all of the major AI companies (Google, Meta, Nvidia, OpenAI) and others have been working for years on the development of AIs that can reason and plan. Many leading AI figures (Andrew Ng, Jim Fan, Allie Miller, Yann LeCun) have started saying we are close to having AIs that can do this, and you can find growing conversations about it among AI companies. Reasoning and planning are significant because you’ll be able to give AIs goals (“plan my vacation”) and that will be enough for them to come back to you with vacation options. AIs that can act to carry out goals are agents. We’ve already seen this ability demonstrated in video games, and we can expect greater use of agents in games and then greater implementation outside the game industry at the end of 2024 and into 2025. Some video game platforms may make a meaningful play for the education space.
These will involve “full context” AI assistants; a minimal sketch of the basic agent loop appears below.
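To make “give the AI a goal and let it act” concrete, here is an illustrative agent loop under stated assumptions: the tool functions are stubs returning fake data, and plan_next_step() is a scripted stand-in for the model call a real agent would make at each step. It is a sketch of the general pattern, not any company’s actual agent.

```python
# Minimal, illustrative "agent" loop: given a goal, repeatedly choose a tool,
# execute it, and feed the observation back until the planner says it's done.
# Tools are stubs; plan_next_step() stands in for a real LLM call.

def search_flights(destination: str) -> str:
    return f"3 flight options to {destination} found (stub data)."

def search_hotels(destination: str) -> str:
    return f"5 hotels in {destination} found (stub data)."

TOOLS = {"search_flights": search_flights, "search_hotels": search_hotels}

def plan_next_step(goal: str, history: list[str]) -> dict:
    """Stand-in for the model's reasoning step; scripted so the sketch runs."""
    if not history:
        return {"tool": "search_flights", "arg": "Lisbon"}
    if len(history) == 1:
        return {"tool": "search_hotels", "arg": "Lisbon"}
    return {"tool": "finish", "arg": ""}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)
        if step["tool"] == "finish":
            break
        observation = TOOLS[step["tool"]](step["arg"])
        history.append(f"{step['tool']}({step['arg']}) -> {observation}")
    return history

if __name__ == "__main__":
    for line in run_agent("Plan my vacation to Lisbon"):
        print(line)
```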
Proliferation and decentralization of AI models, including localization. While most people associate AI with ChatGPT, and many know of other models such as Claude (Anthropic) and Pi (Inflection), there are actually more than 400,000 AI models that have been developed and can be easily downloaded and run on PCs and even iPhones. One of these (Mistral) is slightly more capable than GPT-3.5 (the November 2022 ChatGPT release). Many of these are open source and can be fine-tuned on specific data and modified. Many individuals will soon be running their own AI models for different purposes.
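As one illustration of how low the barrier has become, the sketch below loads an open-weights model locally with the Hugging Face transformers library. It assumes transformers and PyTorch are installed and uses the Mistral-7B-Instruct checkpoint as an example; the weights are a multi-gigabyte download, so a reasonably capable machine is needed.

```python
# Minimal sketch: run an open-weights model entirely on your own machine.
# Assumes `pip install transformers torch` and enough RAM/VRAM for a 7B model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weights checkpoint
)

prompt = "[INST] Explain retrieval-augmented generation in two sentences. [/INST]"
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```

Any of the hundreds of thousands of checkpoints on the model hub could be swapped in, including quantized variants (for example via llama.cpp) that fit on smaller hardware.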
Nascent “World Models” will improve, pushing major AI developments in 2025–2029. Smaller and smaller models will be able to do what many of the large models can do (which is why they have started to run on phones).
Wearable tech. Wearable tech (glasses, pins) that records a person’s daily activities, instantly translates spoken and written language, identifies objects by scanning them, makes phone calls, and answers almost any question will grow in usage. Glasses will make it much easier to interact with immersive (AR & VR) worlds.
Seamless product integration. Unique AI products will still stand out, but AI will also be seamlessly integrated into Google Suite and Microsoft Office, just as it already is in social media platforms, search engines, Netflix, and the shopping websites we use.
“Smarter than us.” There are never-ending debates about whether and how AIs may be “generally” smarter than us. As I’ll explain in a post later this week, I think that debate is irrelevant. Even if current AIs can’t reason better than a cat (Yann LeCun), we do know they can hedge equity markets better than humans, speak more languages than any human, and satisfy more customers than human call centers. So, in these ways, they are already smarter than us. AIs will continue to develop “intelligence” in more and more human domains, start to exceed us in some of them, and, most importantly, perform many work tasks better than many humans.
Robotics. Physical robots will continue to advance, but we won’t see widespread use of them in 2024, and even then they will largely be confined to factories until 2025/26 or later.
Impact on Society in 2024
These are some of the major highlights.
Economy and employment. As discussed in a blog post and in our report, AI is starting to cause job losses in certain industries (call centers, graphic design, online ad sales) as workers are replaced with AI tools. As the verbal language abilities of the tools improve, call centers could be almost completely wiped out. At the same time, there are more and more ads for individuals with a machine learning background or even basic experience using AI tools. The year 2023 saw businesses start to experiment with these technologies, and in 2024 we will see more business implementation, which will likely exacerbate these trends. All the progress in the development of existing AI capabilities, plus the use of agents, could trigger a dramatic impact. We don’t need to be anywhere close to AGI for this to happen.
Deep-Fakes: Politics, Scams, Objectification. A lot has been written about the potential for deep fakes (computer-generated replicas of real people, objects, scenes, and voices) to upend democratic elections in the US and Europe in 2024. We’ve already seen AI start to be integrated into campaign communications and marketing (#1, #2, #3). Completely fabricated news stories circulating on social media make this even more likely.
There are a couple of potential implications. First, someone could be elected based on lies; given the current state of political advertising, this may not be particularly new. Second, individuals may become completely disillusioned with politics, unable to determine what is true and what is false. Such scenarios do risk opening the door for “strongmen” to come to power (full supporting quote below).
Deep-fake scams are not limited to politics. People will receive calls in the voices of their relatives saying they are in distress and need money, with malicious actors spoofing the callers’ numbers. OpenAI’s Sam Altman has suggested families agree on safe words that only they know to use in such situations. And there will be more situations like those in Spain and New Jersey, where girls had fully nude replicas of their bodies created from online images, replicas indistinguishable from real photos of the victims.
Some news programming (written and video) will be entirely AI-generated; at least one fully AI-generated news channel already exists.
Military. AI has been used in various military applications for quite some time, but we are approaching (or may have already reached) a point at which weapons can make decisions about where and when to shoot. In the Israel-Gaza war, we have seen these types of systems used for target selection, but, based on public knowledge, they have not yet made decisions to kill on their own. It is possible that we could see this in 2024, and such actions may not be limited to state-based warfare.
Health care and science. The year 2023 saw AI being used to discover new antibiotics, perform human-level tumor detection, and identify millions of new candidate materials. We can expect even more significant advances that will start to be applied in 2024 and help extend the human life span.
Regulatory failure. Regulations may make a difference at the margins and in particular contexts, but they aren’t going to solve the most serious problems created by this technology. Fabricated news stories designed to alter elections, for example, aren’t going to be made in Spain with GPT-4+. They are going to be made on a mountain in Nepal with an AI model no one knows exists; many of them will spread on social media faster than anyone can take them down, and the only hope of detecting and removing them will be AI tools. The US government struggles to pass annual budgets, and regulators fail to stop feces from polluting the water. It’s beyond my imagination how they are going to regulate an exponentially growing technology that is ubiquitous and can be run on a local computer.
Education
It is both difficult and not difficult to predict developments in education, but let me identify some trends and take a shot.
Inevitable Changes
Many inevitable changes are happening/will happen in education, regardless of what any school (K-16+) does about AI.
Curriculum pressure. AI started to impact schools when it began writing student essays and papers, in what was then a recognizable voice and with fabricated bibliographies. We are moving past both of those limitations: AI can now write essays and papers better than most students, in “voices” that are no longer flagged by the AI-writing detectors that companies have swindled schools into purchasing, and many AI tools and general chatbots (perplexity.ai) can produce reliable bibliographies. It is becoming less and less possible to know whether machines or humans are writing papers, and this trend will continue; there is no way for a detector to reliably catch AI writing that keeps becoming more human.
In the world of languages, AI that can translate 100+ languages in a way that sustains conversation with someone wearing a pin or a pair of glasses is going to put pressure on language instructors. I said back in May that I didn’t think teachers would lose their jobs directly from AI, but noted this might be an exception. The demand to learn languages will rapidly decrease.
Intelligent Tutoring Systems (ITS). Companies continue to push schools and parents toward ITS systems (teaching and tutoring bots), and these systems are getting better. We’ll see some more aggressive roll-outs next year, and over time (2025+) we’ll see tutoring systems that offer personalized instruction in 100+ languages in fully immersive and often gamified environments, supported by “full context” agent AIs that can handle much of the instructional planning and evaluation. You can put the pieces together from the areas of technological progress above: just think about how all of the developing AI capabilities identified there can support ITSs. There will be huge battles over the best system designs, and different schools and parents will choose different systems (just as many choose different schools now). While not free, these systems will be far more affordable than private schools and within the price range (say $50/month at most) of most parents. Some of these systems can now run directly on students’ iPhones and laptops (since the AI models can run locally), keeping all the data local.
Wearables. Wearable pins and glasses will make their way into schools in the fall of 2025, and perhaps in a small way a bit before. These will present privacy and instructional challenges to schools. Glasses can even be made with prescription lenses, and the already small pins will only get smaller.
Decentralization. Homeschooling has doubled over the last year, and college enrollment outside of community colleges is falling. High school dropout rates continue to present challenges. Fewer male students are pursuing college degrees. Industry continues to complain that high school and college graduates don’t have the skills needed to succeed at work (soft/durable skills, AI skills). ITS systems will exacerbate all of these trends.
These are all trends I see even if schools ignore AI (as some do). Decentralization will increase most in areas where schools continue to ignore AI.
K-12 Approaches to AI
Schools are reacting to AI in two broad ways, and I no longer think this is a public-private school divide as it was in the spring.
(Mostly) ignore AI. This runs the gamut from completely ignoring it to having an occasional PD session.
Engage AI. There are three ways schools are engaging AI.
(a) Teachers and administrators using AI. Teachers and administrators are being trained to use various AI applications in the classroom to improve instruction, reduce workload, and even increase surveillance of students. This benefits staff and teachers, gives them exposure to the significance of the technology, and is an easy extension of the current grammar of schooling (centralized management of students). These are good things (at least the part about helping teachers save time), but they do nothing to address the question of how to prepare students for a world where they will live and work with machines that have incredible capabilities.
(b) Administrators and teachers thinking about an AI World. How will and should education change to prepare students for an AI World, and how does education itself need to adapt? A limited number of educators are engaging with these questions. We cover them in depth in our report.
(c1) Permitting and, at least indirectly (though sometimes directly), encouraging student use of common AI applications (ChatGPT, the new Bing, Claude, etc.). This is happening in public and private schools in every state.
(c2) Providing support for AI literacy. This is close to non-existent now, but appetite is building for AI literacy in the 2024-5 school year.
We hope that schools engage in (c1) and (c2). Schools have always helped ameliorate inequality in access to opportunity, and the chance to understand and learn about AI will probably have a bigger impact on students’ lives than anything else they are currently doing in school.
A student at The Knowledge Society put it this way:
“I have no idea why we’d teach a student French and not teach them about AI, and I’d be happy to debate someone on that question if they’d like.”
I suspect these divides will continue and be exacerbated in 2024-5.
University approaches
There are varying university approaches to AI. Some universities are embracing it, using it to transform their schools and, they hope, improve student recruiting. Others are hoping it goes away or believe it’s not a real thing. Approaches also vary by school or department (business vs. classics, for example) and by faculty member (some have embraced it; others won’t touch it).
Education is in a difficult spot. As noted by many leading AI investors and scientists, the individuals building and expanding these AI companies and related applications are literally moving as fast as they possibly can. Education just doesn’t move that way, for a variety of reasons. As a result, we may end up with a completely different educational system than we have now (Smithson, #23).
Conclusions
My thoughts:
(1) As a society, we need to start thinking a lot more about how education needs to change for an AI world. We don’t have to, but we shouldn’t complain when institutionalized education’s relevance fades.
(2) AI literacy is essential. The world is going to change radically over the next 1-2 years, even if Sam Altman stands up and says that his AI stuff is no different than crypto.
(3) The world is going to change, whether we want it to or not. Schools and families that understand the significance of the coming changes will give their students and children the best chance to succeed.
As Ethan Mollick said last spring, the worst AI you will ever use is the AI you are using right now. It’s time to prepare for the future-present.