There has been a lot of talk recently about an “AI Bubble.”
Supposedly, the industry, or at least its generative AI subset, is going to collapse; this is the so-called "Generative AI Bubble."
The bubble claim, whether about AI broadly or about generative AI specifically, is nonsense.
Here are the reasons we will continue to see massive growth in AI:
*The US military is planning full AI integration in warfare and operations, with a two-year time frame.
— GAI is already being put to military use. It can search across vast resources like policy documents and memoranda to find specific capabilities or data useful for particular applications. It can generate operations orders, synthesize a Commander's intent, and parse open-source and cyber intelligence data. Training simulation software that incorporates generative AI can build digitized models that prepare soldiers to use the combat systems deployed during operations, and GAI can also produce military training materials. It will even enable weapons systems to use vision models to communicate with soldiers.
*All countries and alliances will need to develop sovereign AI models and build massive underground/secured data centers.
*There will be substantial disruptions to health care and education markets due to AI, and there is a lot of spending there for entrepreneurs to go after.
— GAI models can answer medical questions and support diagnosticians, saving time. They are also used in drug discovery to generate novel molecular structures, screen compounds, predict drug interactions, and optimize clinical trials.
— GAI is transforming education by enabling the creation of personalized learning materials, interactive content, and immersive simulations that enhance the quality and engagement of educational resources. It powers adaptive learning pathways, virtual teaching assistants, and targeted feedback to students based on their individual needs and performance, promoting self-paced and effective learning. Generative AI also automates administrative tasks, identifies knowledge gaps, and advises curriculum designers by processing vast amounts of student data to uncover trends and suggest improvements.
It’s now a critical component of AR/VR tutoring systems.
*AI is turbocharging scientific discoveries that have economic value.
— In biology, AI is being used for drug discovery and generative chemistry, and protein structure prediction has seen fundamental breakthroughs thanks to models like Google DeepMind's AlphaFold. AI is also transforming research in physics, mathematics, and the social sciences by providing fresh directions for scientific exploration. Generative AI will likely have a seismic impact on knowledge production in the social sciences, particularly in how experiments are conducted: when real-world data is limited, expensive, or difficult to obtain, synthetic data generation lets researchers create realistic datasets that mimic the characteristics and patterns of the target domain, allowing research to progress even when access to real data is constrained (see the first code sketch after this list).
*The economic and security risks of not developing AI are too great.
*There are many, many working AI projects and applications, despite some visible failures.
*Money continues to be invested in AI (see the recent Cohere, Anthropic, and Groq rounds), and those are generative AI investments. Fei-Fei Li's start-up just received $100 million in funding, though I don't know how large a role GAI plays in it.
*Robotics is accelerating, which will have, at a minimum, huge implications for factories.
— By integrating generative AI techniques like transformers, GANs, and VAEs with robotic systems, robots gain the ability to learn from data, generate novel solutions, and adapt to changing environments (see the second code sketch after this list). Generative AI brings several key benefits to robotics: enhanced autonomy, since robots can process data on the fly to make decisions and solve problems in real time with minimal human oversight; a kind of machine creativity, since generating novel designs, solutions, and ideas pushes the boundaries of what robots were previously thought capable of; and unsupervised, adaptive learning, since robots can continuously learn from their environment, operational data, and interactions with humans to optimize their performance.
*General scientific advances are also coming out of the teams building language models.
*The companies that are moving most rapidly on this have more cash reserves than most people, and even some countries, can imagine. The oil-producing states are also investing.
*There are the creative industries, for which accuracy is irrelevant.
*These companies and countries can look for returns 10+ years out; running $5 billion in the red in a given year is sort of irrelevant.
*I find it useful and use it every day.
*Outside of Europe, regulation will not slow AI down.
*These copyright lawsuits have been going nowhere, and the cat is out of the bag. There are tens of thousands of GAI datasets in free circulation.
*Ethics concerns won't stop businesses and the military from using AI.
*The arguments for scaling should not be dismissed. They are at least plausible. I'm shocked by the number of non-ML engineers who just dismiss scaling.
*AI isn't just gen AI. There are other models and architectures, such as neurosymbolic AI, that are being invested in. For educators (the primary audience of this blog), what difference does it make if AI “edtech” apps use GAI or neurosymbolic AI?
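To make the synthetic-data point above concrete, here is a minimal sketch of the general idea in Python: fit a simple generative model on a small "real" sample, then draw a much larger synthetic dataset that mimics its statistics. The Gaussian mixture here stands in for fancier generative models, and the variables, sample sizes, and numbers are invented for illustration; this is not any particular lab's pipeline.

```python
# Minimal sketch of synthetic data generation (illustrative only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Pretend this is the scarce, expensive-to-collect real dataset:
# 200 participants, 3 measured variables (all invented numbers).
real_data = rng.multivariate_normal(
    mean=[50.0, 1.2, 0.3],
    cov=[[25.0, 1.0, 0.2],
         [1.0, 0.4, 0.05],
         [0.2, 0.05, 0.1]],
    size=200,
)

# Fit a simple generative model on the real sample.
model = GaussianMixture(n_components=2, random_state=0).fit(real_data)

# Draw a much larger synthetic dataset with similar structure.
synthetic_data, _ = model.sample(5000)

# The synthetic set should roughly preserve the real data's statistics,
# so analyses can be prototyped before access to real data is granted.
print("real means:     ", real_data.mean(axis=0).round(2))
print("synthetic means:", synthetic_data.mean(axis=0).round(2))
```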
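And to illustrate the robotics point, here is a toy sketch of what "generative AI meets robotics" can look like in code: a tiny variational autoencoder (one of the techniques named above) trained on made-up 2-D reach trajectories, which can then sample novel but plausible trajectories for a planner to vet. The trajectory format, network sizes, and training loop are all invented for illustration; this is not a real robot stack.

```python
# Toy VAE that learns and then generates 2-D robot reach trajectories
# (illustrative only; not any real robotics pipeline).
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 20          # waypoints per trajectory
D = 2 * T       # each trajectory flattened to (x, y) * T
LATENT = 4      # size of the compressed latent code

class TrajectoryVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(D, 64), nn.ReLU())
        self.mu = nn.Linear(64, LATENT)
        self.logvar = nn.Linear(64, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, D))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, logvar

def fake_trajectories(n):
    # Straight-line reaches to random targets plus noise: stand-in training data.
    t = torch.linspace(0, 1, T)
    targets = torch.rand(n, 2) * 2 - 1
    traj = targets[:, None, :] * t[None, :, None] + 0.02 * torch.randn(n, T, 2)
    return traj.reshape(n, D)

model = TrajectoryVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):
    x = fake_trajectories(128)
    recon, mu, logvar = model(x)
    recon_loss = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon_loss + 0.001 * kl
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Generate novel solutions": decode unseen latent codes into new candidate
# trajectories, which a downstream planner could then check for feasibility.
with torch.no_grad():
    new_trajs = model.dec(torch.randn(5, LATENT)).reshape(5, T, 2)
print(new_trajs.shape)  # torch.Size([5, 20, 2])
```

The pattern, learning a compressed model of past behavior and then sampling new candidates from it, is the sense in which generative models give robots the "novel solutions" capability described in the list.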
Will many AI start-ups fail? Yes, but not because AI is weak; rather, because it's too powerful. Any start-up that tries to establish a niche based on a tech innovation will likely be wiped out quickly by another that uses AI to replicate the tech and offers better customer service. But in the end, who cares? That doesn't deny the reality of ever-advancing AI or the need to prepare students for this world.
Perhaps. On most days I'm pretty bullish on generative AI.
Yet I wonder about the threats facing the technology:
1) There is no good business model yet. None of the proprietary offerings is turning a profit, and the financial backers are clearly getting anxious.
2) The copyright/IP issues loom large. It would be easy for a judge in the many lawsuits now proceeding to order, say, OpenAI to pause ChatGPT operations.
3) Cultural attitudes are souring. There's a *lot* of anxiety out there. Yes, this kind of attitude often greets new technologies, but it might have legs this time, as we've seen with other technologies culture turned against, like nuclear power. I don't think we'll go so far as outlawing AI outright (cf. Dune), but we could well see a cultural split, with some people proclaiming their happy use of AI while others proclaim their proud resistance.
4) Government regulations... governments don't want to kill what might be a goose laying golden eggs, and some (the US, China) want to maintain AI supremacy in part for geopolitical reasons. Yet states also listen to constituencies, especially wealthy ones, and might well decide to throttle back AI. Think of mandated watermarks, more laws enabling people and states to sue over AI (deepfakes for a start), or even outright bans, as Italy briefly imposed on ChatGPT. It's important to remember that most legislators don't understand most technology.
We have to see a path forward around these threats.