AI-Student-Society "Groupwork"
Thank you for supporting this blog. Your subscriptions and shares have helped it grow and laid a foundation for financial sustainability.
The ability of AIs to act autonomously and collaborate is growing. Our students’ future success (and maybe even their survival) will be determined by how well they work with these AIs.
Last week, Altera released Project Sid.
It reminded me a bit of Smallville, where researchers from Google and Stanford, using ChatGPT, created a town where generative agents autonomously perform daily activities, form relationships, and make decisions based on their memories and experiences. Unlike traditional game characters, these agents exhibited emergent social behaviors without hard-coded scripts, as demonstrated by their ability to organize and attend a Valentine's Day party.
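To make that pattern concrete, here is a minimal sketch of the generative-agent loop the Smallville paper describes: each agent keeps a stream of memories, retrieves the most relevant ones, and hands them to a language model to choose its next action. This is an illustration under assumptions, not the paper's code; the `llm` callable and the scoring weights are placeholders, and the real system also scores memories by relevance and adds reflection and planning layers.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Memory:
    text: str
    importance: float          # 0-1, how significant the event was
    timestamp: float = field(default_factory=time.time)

class GenerativeAgent:
    """Toy version of the Smallville loop: observe -> retrieve -> act."""

    def __init__(self, name: str, llm):
        self.name = name
        self.llm = llm         # any callable: prompt string -> completion string
        self.memories: list[Memory] = []

    def observe(self, event: str, importance: float = 0.5) -> None:
        self.memories.append(Memory(event, importance))

    def retrieve(self, k: int = 5) -> list[Memory]:
        # The paper combines recency, importance, and relevance; this sketch
        # uses only recency-decayed importance to stay dependency-free.
        now = time.time()
        def score(m: Memory) -> float:
            recency = 0.99 ** (now - m.timestamp)   # exponential decay
            return recency + m.importance
        return sorted(self.memories, key=score, reverse=True)[:k]

    def act(self, situation: str) -> str:
        context = "\n".join(m.text for m in self.retrieve())
        prompt = (f"You are {self.name}. Relevant memories:\n{context}\n"
                  f"Situation: {situation}\nWhat do you do next?")
        return self.llm(prompt)
```

Scale that loop from a handful of agents to more than a thousand, and you get the kind of emergent social dynamics Project Sid is exploring.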
But Project Sid “ups the ante,” with 1,000+ autonomous agents collaborating in a virtual world to build economies, religions, and governments.
The significance of collaboration should not be underestimated. Human collaboration has been critical to both evolutionary success and societal development, setting us apart from other species. Evolutionarily, collaboration fostered mutualistic interdependence, leading to altruistic behaviors and complex social structures. This enabled cumulative cultural evolution, in which humans learned from each other and spread behaviors that enhanced group competitiveness and prosocial motives. Societally, large-scale cooperation facilitated the division of labor, trade, and care for the vulnerable, regulated by shared moral systems and social norms, while cultural group selection favored norms that enhanced group welfare. Despite these advantages, humans sometimes struggle to cooperate, especially on global challenges, because of the complex interplay between cooperation and competition. In short, collaboration has driven human progress, enabling intricate social systems and cultural innovations.
GSV’s Claire Zau identified some of the questions Project Sid raises.
Of course, Project Sid was not the only development we've seen. Second, a paper released last Friday claims that LLMs can generate novel research ideas.
This reminded me of the “AI Scientist” paper from Sakana, a Japanese AI company that recently raised an additional $100 million, which explains how their AI can independently generate research ideas, design and execute experiments, and write full scientific papers.
While AI cannot yet generate fully original hypotheses, on the order of the insight that microorganisms cause disease, the overwhelming majority of scientists go their entire careers without making such discoveries. One professor, after reviewing the sample papers Sakana generated, pointed out that the papers are strong enough for a tenure file. And, of course, we are just getting started: OpenAI has publicly stated that creating an AI that can do this is one of its goals.
Third, it appears we will soon see significant advances in the reasoning abilities of the models. Building on their earlier Q* project, OpenAI's Strawberry, which is expected to be released soon, focuses on enabling AI models to perform complex tasks like autonomous web navigation, deep research, and long-term planning and execution. The project's goals include improving AI's ability to handle logical reasoning, overcome task-specific errors, and conduct independent research by browsing the internet. Strawberry reportedly employs a specialized post-training process to refine AI models, similar to Stanford's "Self-Taught Reasoner" (STaR) method. While details remain confidential, the project is seen as a significant step toward human-like AI reasoning. OpenAI isn't alone in this pursuit; Google, Meta, and Microsoft are also working to enhance AI reasoning capabilities.
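The published STaR recipe is simple enough to sketch: have the model write a rationale and an answer, keep only the rationales that reach the correct answer (retrying with the gold answer as a hint when they don't), and fine-tune on what survives. The sketch below assumes hypothetical `generate` and `fine_tune` helpers standing in for whatever stack you use; whatever Strawberry actually does remains confidential.

```python
def star_iteration(model, problems, generate, fine_tune):
    """One round of Self-Taught Reasoner (STaR)-style bootstrapping.

    `generate(model, prompt)` returns a (rationale, answer) pair and
    `fine_tune(model, examples)` returns an updated model; both are
    hypothetical helpers, not a real library's API.
    """
    training_examples = []
    for problem in problems:
        rationale, answer = generate(
            model, f"Q: {problem.question}\nThink step by step.")
        if answer == problem.gold_answer:
            # Keep only rationales that reached the correct answer.
            training_examples.append((problem.question, rationale, answer))
        else:
            # "Rationalization": retry with the gold answer as a hint, so the
            # model learns to justify answers it initially missed.
            rationale, answer = generate(
                model,
                f"Q: {problem.question}\nHint: the answer is {problem.gold_answer}.")
            if answer == problem.gold_answer:
                training_examples.append((problem.question, rationale, answer))
    return fine_tune(model, training_examples)
```

Running this loop repeatedly lets a model bootstrap its own reasoning data, which is why it keeps coming up in discussions of post-training for reasoning.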
Reasoning, of course, underpins both Project Sid and the AI Scientist.
Fourth, we continue to see substantial scaling of AI models. Scaling AI models involves expanding their size, computational power, and the volume of training data to enhance their capabilities. As these models grow, they often exhibit emergent properties—unexpected and novel behaviors or skills that are not explicitly programmed.
These are some commonly identified emergent properties (a minimal prompting sketch follows the list):
Language Translation: AI models can translate text between languages without being explicitly programmed for specific language pairs.
Text Summarization: They can summarize long passages of text into concise summaries, capturing the main ideas.
Question Answering: These models can answer questions based on provided text or general knowledge, even if they haven't been specifically trained for those questions.
Creative Writing: AI can generate creative and coherent stories, poems, or essays, showcasing a form of creativity.
Sentiment Analysis: They can detect and interpret the sentiment or emotional tone of a piece of text.
Code Generation: AI can generate code snippets or solve coding problems, even in programming languages it wasn't explicitly trained on.
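Notably, none of these capabilities require task-specific training; they surface through plain prompting of a general-purpose model. Here is a minimal sketch using the OpenAI Python client; the model name is illustrative, and any capable chat model should behave similarly.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def zero_shot(task: str, text: str) -> str:
    """Ask a general-purpose model to perform a task it was never
    explicitly trained for -- the essence of an emergent capability."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; swap in any capable chat model
        messages=[{"role": "user", "content": f"{task}\n\n{text}"}],
    )
    return response.choices[0].message.content

# The same untouched function covers several items on the list above:
print(zero_shot("Translate to French:", "The agents held a party."))
print(zero_shot("Summarize in one sentence:", "Project Sid placed over "
                "1,000 autonomous agents in a virtual world..."))
print(zero_shot("What is the sentiment (positive/negative/neutral)?",
                "I can't believe how well this worked!"))
```

The point is that one model, with no additional training, handles translation, summarization, and sentiment analysis purely through the wording of the request.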
The largest AI companies are currently engaged in such scaling, with some estimating that the next significant AI model released by OpenAI will be trained on around 40 trillion words (roughly 50 trillion tokens).
On Monday, Elon Musk unveiled "Colossus," a massive AI training system featuring 100,000 Nvidia H100 graphics cards, which Musk claims is the world's most powerful to date.
Musk plans to double Colossus' capacity to 200,000 chips within months, including 50,000 of Nvidia's newer, faster H200 GPUs. This expansion could enable xAI to create more advanced AI models, potentially surpassing the current flagship Grok-2. The development of Colossus follows xAI's recent $6 billion funding round and highlights the intense competition and investment in cutting-edge AI infrastructure among tech giants.
As a point of comparison, most reports indicate that the largest models to date have been trained on approximately 30,000 chips. And, of course, models now get substantially more capability out of the same amount of training than they could in the past.
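For a rough sense of what 100,000 chips buys, here is a back-of-the-envelope estimate using the common C ≈ 6·N·D approximation for training compute. Every concrete number below (parameter count, per-chip throughput, utilization) is an assumption for illustration, not a reported figure for any actual model or for Colossus.

```python
# Back-of-the-envelope training-compute estimate.
# Rule of thumb: training FLOPs C ~= 6 * N * D, where N = parameters
# and D = training tokens. All concrete values below are assumptions.

params = 2e12              # hypothetical 2-trillion-parameter model
tokens = 50e12             # the ~50 trillion tokens cited above
flops_needed = 6 * params * tokens             # ~6e26 FLOPs

h100_peak = 1e15           # ~1 PFLOP/s per H100, a rounded low-precision figure
utilization = 0.4          # assume 40% of peak is actually achieved
cluster = 100_000          # Colossus-scale chip count

effective = cluster * h100_peak * utilization  # sustained FLOP/s
seconds = flops_needed / effective
print(f"~{seconds / 86_400:.0f} days of training")   # ~174 days
```

Under these assumptions, even a Colossus-scale cluster would need months for a frontier-scale run, which is why the race to add chips shows no sign of slowing.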
Anyhow, we are starting to see the emergence of more and more powerful AI capabilities. And while these capabilities won't reach the everyday person for a bit, as that generally requires productization, that productization will happen, and these technological advances will alter society in ways far beyond what chatbots have.
Today, noted AI scientist Gary Marcus, who is very critical of LLMs but believes we will reach human-level AI in 5-20 years based on alternative architectures (he favors neurosymbolic approaches), asked a series of questions highlighting that, while AI will receive little attention in the Presidential debate, it is the most important issue facing us.
Educators have begun to grapple with these questions. Most institutions have at least offered faculty basic training on chatbots; this helps to build AI literacy. Some K-12 institutions, such as Santa Ana USD (CA), Cottesmore (UK), and a few others have started aggressively planning for the future.
While we don’t know exactly how the future will unfold, we do know that our students will live in a world of highly intelligent machines they will need to collaborate with to be successful.
At a foundational level, we know that helping them prepare means teaching problem-solving and evaluating processes rather than output. It can include problem-solving with AI tools: while the tools will change by the time our students reach adulthood, many of the concepts the tools rely on will remain the same.
But it doesn’t even have to involve the tools in an extensive way. It can involve developing essential cognitive processes that students need to thrive in this world.
Many schools are continuing to think of AI through the language/lens of "ed tech” or “instructional technology,” but this doesn’t really capture what AI is and what AI means. AI is an intelligent entity capable of dynamic interaction with both students and teachers and its intelligence is growing. No other piece of “technology” has ever done this. It’s intelligence in silicon; it’s not a “Smart Board” that isn’t actually smart. We don’t call fellow intelligent humans “biotech” :).
As AI becomes increasingly integrated into various professional fields, students must develop an understanding of AI and learn how to collaborate effectively in human-AI-human relationships. This goes beyond simply learning to use AI tools; students need to grasp how to work alongside AI and other humans as dynamic partners in problem-solving, decision-making, and creative processes. Much like traditional group work teaches students to leverage diverse human strengths, collaborating with AI requires understanding its capabilities, limitations, and optimal integration points.
This AI-inclusive collaborative skillset will be essential in future workplaces where highly intelligent AI systems are likely to be as common as intelligent human colleagues. By framing AI collaboration as an extension of group work skills, educators can help students develop the adaptability and critical thinking needed to thrive in an AI-augmented professional landscape. Along the way, they will build basic AI literacy, understand the world they will grow up in, develop problem-solving skills (with AI), and strengthen their metacognition.