Chapter I. When AI Surges Beyond Expectations: Guiding Students Toward Deep, Durable Learning
Humanity Amplified
This is a revised version of Chapter 1 of Humanity Amplified. The current version of Humanity Amplified is accessible to all paid subscribers of this Substack at the “Subscription and Book” link above. If you are interested in consultations or keynotes, please reach out at stefanbauschard@globalacademic.org. And please encourage your students to join the Global AI Debates!
__
Prioritizing Investment in Machine Intelligence Over Human Intelligence
Developing human intelligence in a way that amplifies an individual’s capabilities has been a central goal of not only the educational system but also society at large for thousands of years. However, in recent years the focus on developing intelligence has shifted toward developing artificial intelligence (AI) in computers, allowing them to perform tasks that otherwise required the efforts of “intelligent beings,”[1] particularly human beings.
These efforts have been undertaken with “colossal computing firepower and brainpower[2]” and financial resources, as the world's largest and most valuable technology companies are engaged in an ambitious race to develop advanced AI systems. In the US, companies such as Anthropic (Claude), Apple, Microsoft (Copilot), Amazon, Google (Gemini), Meta (formerly Facebook), OpenAI (ChatGPT), and X (Grok) - with a staggering combined market value of $7+ trillion - are pouring massive resources into AI research to support fully embodied AI.
The UAE has just agreed to invest $1.4 trillion in the US, including into AI, over the next decade[3]. Peter Diamandis estimates that $1 billion+ per day is being invested in AI, and that data centers are being built everywhere[4]. In June 2025, Meta invested billions in a new superintelligence initiative[5]. By 2030, we are likely to see a 10,000x increase in model training compute[6].
Beyond investment in building AI, there is also spending on purchasing it. Worldwide generative AI (GenAI) spending is expected to total $644 billion in 2025, an increase of 76.4% from 2024, according to a forecast by Gartner, Inc[7].
Sovereign AI will drive greater demand for secure onshore data centers, cutting-edge GPUs, resilient energy supplies, and home-grown AI talent as governments scramble to keep strategic models within their borders[8]. Sovereign AI, first popularized by NVIDIA, means a nation can develop, train, and deploy artificial-intelligence systems entirely with its own infrastructure, data, workforce, and business networks, protecting sensitive information while embedding local languages and values.
In Europe, leaders have pledged €20 billion for four “AI factories” and partnered with NVIDIA and Mistral to install more than 3,000 exaFLOPs of Blackwell compute, positioning the bloc for digital independence[9]. India’s new IndiaAI Mission selected Sarvam AI to train a sovereign multilingual LLM covering the country’s 22 official languages[10]. The UAE’s Technology Innovation Institute released the Falcon-H1 model as an NVIDIA NIM micro-service so Gulf agencies can run chatbots on air-gapped hardware[11]. Japan’s AI Promotion Act funds a national cloud to host foundation-model R&D under its strict privacy rules[12]. Canada’s Sovereign AI Compute Strategy is subsidizing domestic GPU clusters[13], while the U.S. Department of Defense awarded OpenAI a $200 million “OpenAI for Government” contract to build classified-ready models[14]. Even smaller nations such as Denmark have installed national supercomputers like “Gefion” to train culturally specific models, underscoring how the quest for AI autonomy is reshaping global compute markets[15].
Since the first version of this book was written, many of the original employees have left the company that built ChatGPT, OpenAI, to form their own AI companies with astronomical valuations. Ilya Sutskever, OpenAI’s Chief Scientist, left to start Safe Superintelligence, which is now valued at $30 billion. Mira Murati, OpenAI’s Chief Technology Officer (CTO), just announced her new venture – Thinking Machines. On March 3, 2025, Anthropic announced a new raise of $3.5 billion on a $60 billion valuation.
In China, companies such as Baidu (ErnieBot), DeepSeek, and Alibaba (Qwen) have all released impressive AI tools. Europe lags behind but is home to Mistral. One of the most notable additions to Mistral's Le Chat assistant is "Flash Answers," which can generate responses at up to 1,000 words per second. Mistral AI says this makes Le Chat the fastest AI assistant currently available.
The global AI in education market alone was estimated to be between $4.8 billion and $5.88 billion in 2024[16]. Industry projections suggest substantial growth, with varying forecasts indicating the market could reach $12.8 billion by 2028[17], $32 billion by 2030[18], $41 billion by 2030[19], $54 billion by 2032[20], or even $75 billion by 2033[21].
Why are companies (and countries) investing so much in AI? Because the impact is expected to be greater than that of fire and electricity. Why? Because AI will also be able to create, including creating more sophisticated versions of AI, on its own[22]. We’ve never previously had a technology that could do that.
Already, the investments have paid off[23]. “AI factories” are “generating intelligence,” and computer scientists are making progress not only toward developing machines that can think and develop super-human “logical” intelligence but also toward at least simulating traits such as empathy[24] and creativity[25] that were once thought to be "uniquely human.[26]” Are you feeling down? AI is even better than people at reframing negative situations to pick you up[27], as it can now visually detect and react to nuances in conversation[28]. Therapy and companionship have become the number one uses of AI[29].
Investment Translates to Human+-Level Intelligence
As a result of these efforts and the lack of similar attention to advance and augment intelligence capabilities in humans, computers are now projected to equal or surpass human intelligence in all domains in which humans are intelligent[30], something often referred to as artificial general intelligence (AGI). Many believe this could happen within five years[31]. Ray Kurzweil, the oldest living AI scientist, recently noted that his original prediction of 2029 for AGI is now conservative[32]. Many of the world’s top experts (Brin[33], Hassabis[34], Amodei[35], Altman[36]) are starting to coalesce their AGI predictions at slightly before or slightly after 2030. Amodei envisions a seismic shift in artificial intelligence within the next two to five years, with 2026 possible for “powerful AI”[37]. He believes AI models will become so immensely capable that they will effectively transcend human control or oversight.
Some believe that an “AI takeoff,” where AIs start making significant contributions to building AIs, could happen as early as 2026[38] or 2028[39]. What will these capabilities mean? According to Meeker et al[40], predictions 10 years out are more unreliable, but still shocking, even if in the ballpark.
Half of American adults already believe that AI is smarter than they are[41]. Sam Altman, OpenAI’s CEO, recently wrote:
(W)e have recently built systems that are smarter than people in many ways… ChatGPT is already more powerful than any human who has ever lived… 2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world[42].
Before we have computers that are more intelligent than us in all ways, we will see computers surpass human intelligence in particular areas, which, as noted, is already starting to happen, even if the development of autonomous intelligence may be far off[43]. We have artificial intelligence bots (“AIs”) that have more content knowledge than any single human, have greater accuracy in making predictions than humans, are undertaking aggressive efforts to improve their factual accuracy, have developed vision, and have learned how to learn, so they can acquire new knowledge on their own with minimal human intervention. They can engage in written and verbal conversations with us in more than 200 different languages and can produce incredible output in image and video as well[44].
According to a recent report from the Stanford Center for Human-Centered AI, AI already surpasses human performance in image classification, visual reasoning, and English understanding[45]. It’s more likely to persuade[46] us or manipulate us[47] than a human. A new NVIDIA-MIT model can even reason across images, learn in context, and understand videos[48]. A recent study shows AIs can “outperform financial analysts in its ability to predict earnings changes” and can “generate useful narrative insights about a company's future performance,” suggesting that AIs “may take a central role in decision-making[49].” We can even send AIs to meetings on our behalf to represent us[50].
Academically, AI tools are performing exceptionally well on tests that are traditional measures of intelligence. GPT-4, which is basically a dated model, scored a 1410 on the SAT and got 5s on most AP exams.[51] It also did well on the Uniform Bar Exam[52] and the Dutch national reading exam (8.3/10)[53], and it passed the US National Medical Licensing Exam[54] and most of the Polish Board certification examinations[55]. Gemini (Google)[56] offers some modest improvements in the abilities that produced the GPT-4 scores[57]. Grok, a new model from X.ai, performs reasonably well on multiple academic tests[58]. This is just Grok 2; Grok 3 was released in February 2025. AlphaGeometry2, an improved version of AlphaGeometry, has achieved gold-medal level performance in solving IMO geometry problems. It solved 42 out of 50 problems from the IMO competitions between 2000 and 2024, surpassing the average score of human gold medalists[59]. The success of AlphaProof and AlphaGeometry2 demonstrates the rapid progress in AI's ability to tackle advanced reasoning tasks. These systems exhibit skills in logical reasoning[60], abstraction, hierarchical planning, and setting subgoals, which are challenging for most current AI systems. Creating AI systems that can solve challenging mathematics problems could pave the way for human-AI collaborations, assisting mathematicians in solving and inventing new kinds of problems.
Practically, a new study has shown that a human working with an AI can already perform as well as a two-person team, and human teams that use AI can significantly exceed all-human teams[61]. And this study was done with technology that was available in July 2024.
Creativity
There is also ample evidence that AI is creative, as proven by several tests, including the Torrance Tests of Creative Thinking, the Unusual Uses Test, and the Remote Associates Test. The Torrance Tests assess fluency (the ability to generate many ideas), flexibility (considering a variety of approaches), originality (coming up with unique and uncommon ideas), and elaboration (adding details to initial ideas)[62]. AI systems have shown the ability to generate many novel ideas, combine concepts in unexpected ways, and expand on initial prompts - potentially demonstrating creativity on these measures.[63] The Unusual Uses Test requires thinking of creative uses for a common object[64], which AI language models have proven capable of by proposing innovative repurposing ideas[65]. The Remote Associates Test presents three words and requires finding a fourth word that relates to all three[66] - an ability AI has displayed by making insightful analogy connections[67].
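For a concrete sense of how these tasks are posed, the sketch below shows the shape of an Unusual Uses item and a Remote Associates item, along with a crude Torrance-style fluency count. The prompts, sample responses, and scoring are illustrative stand-ins, not the validated instruments or rubrics researchers actually use.

```python
# Toy illustration of how the creativity tasks described above are posed.
# Real studies use trained raters and validated scoring; this only shows shape.

def unusual_uses_prompt(obj: str) -> str:
    return f"List as many creative, non-obvious uses for a {obj} as you can."


def remote_associates_prompt(cue_words: tuple[str, str, str]) -> str:
    a, b, c = cue_words
    return f"Find one word that forms a common phrase or compound with each of: {a}, {b}, {c}."


# Example responses (what a model or a person might return for "brick").
uses = ["paperweight", "plant pot", "drum stand", "cookie cutter", "paint stamp"]
fluency = len(uses)  # Torrance-style fluency: simply how many distinct ideas were produced

print(unusual_uses_prompt("brick"))
print(f"Fluency score: {fluency}")
print(remote_associates_prompt(("cottage", "swiss", "cake")))  # classic item; answer: "cheese"
```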
Creativity is the driving force behind innovation and economic growth. It fuels the development of new products, services, and business models that meet evolving consumer needs and desires[68]. Creative minds conceive groundbreaking ideas that disrupt industries, spark technological advancements, and open entirely new markets. From the light bulb to the smartphone, many of humanity's most transformative inventions stemmed from creative thinking. And creativity powers industries like entertainment, fashion, and advertising that rely on generating fresh concepts to captivate audiences and influencers.
Images
The quality of images generated from text has simply exploded[69].
Video
Video tools such as Sora (OpenAI), Kling, and Veo3 (Google/Gemini) are enabling the production of incredible short clips, commercials, and even movies.
Understanding AI Agents: From Conversation to Action
The Evolution from "Say" to "Do"
Traditional AI systems like ChatGPT primarily respond to questions—they "say" things through text or voice. But AI agents represent a fundamental shift: they can act in the digital world to accomplish tasks autonomously.
As Mustafa Suleyman (CEO of Microsoft AI and DeepMind co-founder) explained in August 2023, we're moving beyond AI that just talks to AI that actually does things.
What Are AI Agents?
Google defines AI agents as autonomous software systems with three core capabilities:
1. Perceive - They gather information from their environment through sensors or data inputs
2. Reason - They analyze what they've perceived and make decisions based on their knowledge
3. Act - They take concrete actions in their environment to achieve specific goals
In simple terms: An AI agent is a goal-oriented smart program that can work independently, making decisions and taking actions without constant human guidance.
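As a rough illustration of that perceive-reason-act cycle, here is a minimal Python sketch that wires the three capabilities into a loop. The toy environment (a dictionary with an inbox and an outbox) and the hard-coded reasoning step are stand-ins for the sensors, LLM reasoning, and real tools an actual agent would use.

```python
# A minimal sketch of the perceive-reason-act loop described above.
# The reasoning step is a stub; a real agent would delegate it to an LLM
# and expose real tools instead of a toy inbox/outbox.

from dataclasses import dataclass, field


@dataclass
class TaskAgent:
    goal: str
    observations: list = field(default_factory=list)
    done: bool = False

    def perceive(self, environment: dict) -> None:
        # Snapshot whatever the environment currently exposes (here, an inbox).
        self.observations.append(list(environment.get("inbox", [])))

    def reason(self) -> str:
        # Decide on the next action from the latest observation.
        latest = self.observations[-1] if self.observations else []
        return "reply" if latest else "stop"

    def act(self, environment: dict) -> None:
        # Take a concrete action that changes the environment.
        if self.reason() == "reply":
            message = environment["inbox"].pop(0)
            environment["outbox"].append(f"Re: {message} (goal: {self.goal})")
        else:
            self.done = True


env = {"inbox": ["Where is my order?"], "outbox": []}
agent = TaskAgent(goal="answer customer questions")
while not agent.done:
    agent.perceive(env)
    agent.act(env)
print(env["outbox"])  # ['Re: Where is my order? (goal: answer customer questions)']
```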
Michael Lieben created this nice flow chart.
Real-World Example: Anthropic's Claude Research Agent
To understand how this works in practice, consider Anthropic's Claude Research agent, released in June 2025[70]:
The Multi-Agent Architecture
Instead of one AI trying to handle everything, the system uses a team approach:
· Lead Agent (Claude Opus 4): Acts as the project manager, interpreting your request and creating a research plan
· Specialized Sub-Agents (Claude Sonnet 4): Work in parallel as research specialists, each focusing on different aspects of the search
· Orchestrator: Coordinates the entire team's efforts
· Memory Module: Combines all findings into a comprehensive, well-sourced report.
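A minimal sketch of that division of labor, assuming a hypothetical call_model helper in place of real Claude API calls: the lead agent decomposes the request, the sub-agents run in parallel, and a final synthesis step plays the role of the memory module.

```python
# A minimal sketch of the lead-agent / sub-agent pattern described above.
# `call_model` is a hypothetical stand-in for an LLM API call; it returns
# canned text so the example runs on its own.

import asyncio


async def call_model(role: str, prompt: str) -> str:
    # Placeholder for a real LLM request; simulate latency and return stub text.
    await asyncio.sleep(0.1)
    return f"[{role}] findings for: {prompt}"


async def lead_agent(question: str) -> str:
    # 1. The lead agent decomposes the request into a research plan.
    subtopics = [f"{question} -- background",
                 f"{question} -- recent developments",
                 f"{question} -- open criticisms"]

    # 2. Specialized sub-agents research the subtopics in parallel.
    findings = await asyncio.gather(
        *(call_model(f"sub-agent-{i}", topic) for i, topic in enumerate(subtopics))
    )

    # 3. A memory/synthesis step combines everything into one report.
    return await call_model("synthesizer", "\n".join(findings))


print(asyncio.run(lead_agent("AI agents in education")))
```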
Why This Approach Works Better
This multi-agent system outperformed a single Claude Opus 4 agent by 90.2% in internal testing. The key advantages:
· Speed: Multiple agents work simultaneously rather than sequentially
· Specialization: Each sub-agent can focus on what it does best
· Comprehensiveness: Parallel research covers more ground more thoroughly
· Quality: Better coordination leads to more accurate, well-sourced results
Measuring Success
Anthropic evaluates their agents using an "LLM-as-judge" framework, scoring outputs on:
· Factual accuracy
· Source reliability
· Effective tool usage
This represents a shift toward AI systems managing other AI systems—a glimpse into how complex AI workflows might operate in the future.
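The general shape of such an LLM-as-judge setup can be sketched in a few lines; the judge_model function below is a hypothetical stand-in that returns canned scores, not Anthropic's actual evaluation harness.

```python
# A minimal sketch of an "LLM-as-judge" evaluation along the three criteria above.

import json

RUBRIC = ["factual_accuracy", "source_reliability", "tool_usage"]


def judge_model(prompt: str) -> str:
    # Placeholder for a real grading LLM; a real judge would read the report
    # and return rubric scores as JSON.
    return json.dumps({"factual_accuracy": 4, "source_reliability": 5, "tool_usage": 4})


def evaluate(report: str) -> dict:
    prompt = (
        "Score the following research report from 1-5 on each criterion: "
        f"{', '.join(RUBRIC)}. Respond as JSON.\n\n{report}"
    )
    scores = json.loads(judge_model(prompt))
    scores["overall"] = sum(scores[c] for c in RUBRIC) / len(RUBRIC)
    return scores


print(evaluate("Sample multi-agent research report..."))
```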
The Bigger Picture
AI agents mark a transition from AI as a conversational tool to AI as an autonomous digital workforce, capable of perceiving, reasoning, and acting to accomplish complex, multi-step tasks with minimal human oversight.
As of now, most agents, at least outside of coding, can only operate autonomously for about 30 minutes, but in the area of coding, agents can already operate for up to 8 hours, and in narrow domains (such as playing Pokémon) they can operate for 24 hours. These capabilities are available in the new Claude model, but also in others. The length of time over which AI agents can operate autonomously in a reliable way doubles roughly every 7 months, so even if we take the minimum of 30 minutes, the progression runs 30 minutes > 1 hour > 2 > 4 > 8 > 16 > 32 hours. So, within three to four years, we can expect agents to operate autonomously across many domains for more than a day, as the projection sketched below illustrates.
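A few lines of arithmetic make the trend concrete; the sketch below simply compounds the cited 7-month doubling time from a 30-minute baseline, so the outputs are illustrative extrapolations rather than measurements.

```python
# Rough projection of the autonomy horizon under the doubling trend cited above:
# task length doubling roughly every 7 months from a 30-minute baseline.

DOUBLING_MONTHS = 7
start_hours = 0.5  # 30 minutes

for months in range(0, 49, DOUBLING_MONTHS):
    hours = start_hours * 2 ** (months / DOUBLING_MONTHS)
    print(f"+{months:2d} months: ~{hours:5.1f} hours ({hours / 24:.1f} days)")
# After six doublings (~3.5 years) the horizon reaches roughly 32 hours,
# i.e. a bit more than a day of autonomous operation.
```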
Investment Translates to Use of Non-Human Intelligence
Investment has not only translated into greater abilities but also into greater use. According to Meeker et al (2025), even looking only at ChatGPT, AI use has skyrocketed.
Here you can see comparisons to other technologies.
And this use is not limited to the US or the Western world.
AI Brains Enter Robots
Now we are starting to see these “artificial brains” placed into robotic bodies that can move, manipulate objects, and interact with the physical world. This combination allows robots to adapt to new environments, perform useful tasks, and learn from experience. It marks a major step toward embodied intelligence, where machines no longer just think but act. As these systems enter homes, hospitals, and workplaces, they raise new possibilities—and new challenges—for society.
Fei‑Fei Li isn’t just pushing AI to “see” faces—she’s championing spatial and physical intelligence, the ability for machines to perceive, understand, and interact within 3D environments. Her new startup, World Labs, is developing Large World Models that go beyond pixel-level vision to internalize object dimensions, motion, and spatial relationships—skills fundamental to both facial awareness and physical manipulation[71]. Meanwhile, Meta’s V‑JEPA 2 (Video Joint Embedding Predictive Architecture 2) offers a powerful example of this embodied intelligence in action, training on millions of hours of video and refining skills with just 62 hours of robot interaction to enable zero-shot pick-and-place and action-conditioned planning[72].
Together, these advances represent a leap forward: robots that not only recognize a smile or the structure of a face but also predict and respond in real time to physical environments. This marriage of facial intelligence (recognizing human expressions) and physical intelligence (reasoning about objects and movements) is a key step toward robots that understand not just who we are, but how we live—and can assist accordingly.
In this demonstration, the robot chooses the apple when the person asks for something to eat among the items in front of it[73].
In February 2025, Figure unveiled its Helix AI system, a breakthrough in robotics that enables humanoid robots to perform complex movements through voice commands without requiring specific training for each object. The system combines a 7-billion-parameter language model serving as the "brain" with an 80-million-parameter AI that translates instructions into precise movements, allowing simultaneous control of 35 degrees of freedom while requiring only 500 hours of training data.
This development holds significant technological importance as it addresses a fundamental challenge in robotics: creating machines that can adapt to unfamiliar environments rather than requiring reprogramming for each new task. By enabling robots to understand natural language commands and interact with previously unseen objects, Helix potentially brings us closer to practical household robotics applications, while also highlighting a strategic shift in the industry as Figure moves away from OpenAI collaboration to develop specialized AI systems for high-speed robot control in real-world situations.
Helix's remarkable efficiency is demonstrated by its training requirements of just 500 hours of data—far less than similar robotics AI systems. By operating on embedded GPUs directly within the robots, the system achieves the computational performance needed for real-world commercial applications. 1X technologies/Redwood AI has recently started testing robots for home use[74].
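Figure has not published Helix's internals at this level of detail, but the general pattern the description implies (a large, slow language model that interprets the command and a small, fast policy that turns intent into motion at a high control rate) can be sketched generically. Everything in the example below, from the object lookup to the proportional controller, is a hypothetical stand-in rather than Figure's actual system.

```python
# Generic sketch of a two-tier "slow brain / fast controller" control pattern.
# Both tiers are stand-ins: the brain maps words to a grasp target, and the
# controller nudges the end effector toward it at each control step.

import math


def slow_brain(instruction: str) -> dict:
    # Runs at a few Hz in real systems; here it just maps keywords to a target.
    targets = {"apple": (0.4, 0.1, 0.05), "cup": (0.3, -0.2, 0.10)}
    obj = next((name for name in targets if name in instruction.lower()), "apple")
    return {"object": obj, "grasp_point": targets[obj]}


def fast_controller(intent: dict, current_pose: tuple, step: float = 0.05) -> tuple:
    # Runs at hundreds of Hz in real systems; moves toward the goal each tick.
    goal = intent["grasp_point"]
    delta = [g - c for g, c in zip(goal, current_pose)]
    dist = math.sqrt(sum(d * d for d in delta)) or 1.0
    scale = min(step / dist, 1.0)
    return tuple(c + d * scale for c, d in zip(current_pose, delta))


intent = slow_brain("Pick up the apple on the table")
pose = (0.0, 0.0, 0.3)
for _ in range(20):
    pose = fast_controller(intent, pose)
print(intent["object"], tuple(round(p, 2) for p in pose))
```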
These abilities will only grow. AI “godfathers” Geoffrey Hinton and Yoshua Bengio, as well as others, recently noted:
(C)ompanies are engaged in a race to create generalist AI systems that match or exceed human abilities in most cognitive work. They are rapidly deploying more resources and developing new techniques to increase AI capabilities, with investment in training state-of-the-art models tripling annually. There is much room for further advances because tech companies have the cash reserves needed to scale the latest training runs by multiples of 100 to 1000. Hardware and algorithms will also improve: AI computing chips have been getting 1.4 times more cost-effective, and AI training algorithms 2.5 times more efficient, each year. Progress in AI also enables faster AI progress—AI assistants are increasingly used to automate programming, data collection, and chip design. There is no fundamental reason for AI progress to slow or halt at human-level abilities. Indeed, AI has already surpassed human abilities in narrow domains such as playing strategy games and predicting how proteins fold. Compared with humans, AI systems can act faster, absorb more knowledge, and communicate at a higher bandwidth. Additionally, they can be scaled to use immense computational resources and can be replicated by the millions. We do not know for certain how the future of AI will unfold. However, we must take seriously the possibility that highly powerful generalist AI systems that outperform human abilities across many critical domains will be developed within this decade or the next.
This is all occurring as human test scores in reading, math, and science decline internationally[75].
AI as a General-Purpose Technology
The ramifications of the development of AI for society will be large. AI is an advanced omni-use[76] or “general purpose” technology because it has the potential to transform and disrupt many industries simultaneously. AI exhibits versatility and broad applicability, with the ability to perform a wide variety of cognitive tasks at or above human levels. It can be applied to diverse domains such as healthcare, education, finance, manufacturing, and more. Like other general-purpose technologies throughout history, AI is expected to lead to waves of complementary innovations and opportunities. It will drastically change how we live and work and drive economic growth, much as the steam engine and the internal combustion engine did.
Of course, there have been general-purpose technologies in the past. The difference between AI and previous general-purpose technologies like the steam engine, electricity, and the internal combustion engine is that those technologies took a long time to permeate and transform society due to the need for building extensive new infrastructure and hardware. The steam engine required building entire new transportation networks of railroads and steamships. Electricity necessitated wiring buildings and constructing power plants. ChatGPT, on the other hand, took 60 days to reach 100 million users[77], making it the fastest adopted technology in history. It gained 1.6 billion users by June 2023[78]. It currently has 400 million weekly users[79]. But it’s just one AI; products from companies such as Anthropic, OpenAI, Google, and others discussed above are virtually seamlessly updated into software, such as Microsoft Office and Google Docs, running on current hardware people already own and use daily. Meta, for example, recently released Llama 3, its latest large language model, and integrated it into the Meta AI assistant across its major social media and messaging platforms, instantly distributing the technology to its 3.19 billion daily users[80].
Computers as Employees, Productivity, and Economic Growth
Application of these growing abilities in the workplace should enable productivity to soar beyond the current impact they are already having.[81] Ethan Mollick points out that “Early studies of the effects of AI have found it can often lead to a 20 to 80 percent improvement in productivity across a wide variety of job types, from coding to marketing.” By contrast, “when steam power, that most fundamental of General Purpose Technologies, the one that created the Industrial Revolution, was put into a factory, it improved productivity by 18 to 22 percent[82].” In a recent report, Andrew McAfee from MIT notes that “close to 80% of the jobs in the U.S. economy could see at least 10% of their tasks done twice as quickly (with no loss in quality) via the use of generative AI[83].” According to a recent study, customer support agents could handle 13.8 percent more customer inquiries per hour, business professionals could write 59 percent more business documents per hour, and programmers could code 126 percent more projects per week[84]. This could generate trillions[85] of additional dollars in economic growth that could drive the development of new industries and jobs and/or potentially be shared across society. Every day people are starting to use these technologies in their workflows[86]. Berkeley computer scientist Stuart Russell anticipates this will produce $14 quadrillion in economic growth[87].
And it’s not just simple digital AI assistants. Archie, the flagship product of P-1 AI, is designed to function as a full-fledged “junior engineer” embedded directly in industrial firms’ existing workflows. Instead of selling a discrete software license, P-1 offers Archie as a Slack- or Teams-native colleague who takes design tickets, runs CAD/CAE tools, iterates with human reviewers, and steadily improves by digesting proprietary data behind the customer’s firewall. The system couples physics-aware synthetic training data with graph neural networks and language models, enabling it to automate routine variant engineering tasks—starting with narrowly scoped use-cases like data-center cooling equipment and scaling each year to more complex sectors such as automotive and, eventually, aerospace. P-1 positions this labor-based pricing model as a way to map onto engineering head-count budgets, arguing that Archie frees senior engineers for novel work while capturing institutional know-how in machine memory, thereby accelerating hardware innovation in much the same way code copilots have sped up software development[88].
Once computers can engage in advanced reasoning and planning, they will be able to take over more day-to-day tasks, including work responsibilities that require those skills[89], such as key job functions in finance, law, the production of creative works, and mid-level management.
Last year, significant efforts were made to enhance AI capabilities beyond simple inference reasoning (there is some debate as to whether models can currently engage in basic inference reasoning; a comprehensive review of the entire reasoning debate can be found in Sun et al[90]). These efforts include the use of meaningless fillers to enable complex thinking[91] and contrastive reasoning[92], aiming to enable advanced reasoning skills such as abstract, systems, strategic, and reflective thinking[93]. Recent advancements in natural language processing (NLP) have centered around enhancing large language models (LLMs) using novel prompting strategies, particularly through the development of structured prompt engineering. Techniques like Chain-of-Thought, Tree of Thoughts, and Graph of Thoughts, where a graph-like structure guides the LLM's reasoning process, have proven effective[94]. This approach has markedly improved LLMs' abilities in various tasks, from complex logical and mathematical problems to planning and creative writing[95]. Additional training that allows models to abstract more nuanced knowledge has also proven effective[96]. As the strength of models advances, this debate will continue, and a new benchmark – CaLM (Causal Evaluation of Language Models) – has been developed to assess reasoning claims[97]. As we will discuss, adding vision and other capabilities to the models that make them “multimodal” further enhances their abilities.
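To make the difference concrete, the sketch below shows what chain-of-thought and tree-of-thoughts style scaffolding look like at the prompt level. The template wording is illustrative, and the model call itself is omitted so the example stands on its own.

```python
# Minimal sketch of structured prompting: the same question is wrapped so the
# model is asked to reason step by step (chain of thought) or to branch and
# compare alternatives (a tree-of-thoughts flavor). Only the scaffolding is shown.

QUESTION = "A train leaves at 2:40 pm and arrives at 4:05 pm. How long is the trip?"


def chain_of_thought(question: str) -> str:
    return (
        f"{question}\n"
        "Think step by step: restate the givens, work through the intermediate "
        "calculations, then state the final answer on its own line."
    )


def tree_of_thoughts(question: str, branches: int = 3) -> list[str]:
    # Each branch gets its own prompt; a controller would later score the
    # candidate reasoning paths and keep the most promising one.
    return [
        f"{question}\nApproach {i + 1}: propose a distinct solution strategy, "
        "carry it out, and rate your confidence from 1-5."
        for i in range(branches)
    ]


print(chain_of_thought(QUESTION))
print(tree_of_thoughts(QUESTION)[0])
```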
Leading computer scientists expect that within three to ten years AI will be equipped to perform such complex reasoning and learn to plan based on goals[98]. AI reasoning models are rapidly advancing, with key players like OpenAI, Google DeepMind, and Cohere releasing new models with enhanced capabilities. OpenAI has launched o3-pro, improving coding, science, and complex problem-solving. Google introduced Gemini 2.0 Flash Thinking Experimental, focusing on multimodal tasks and structured problem-solving, as well as Gemini 2.5 Pro. Cohere has also updated its Command R model series, enhancing their abilities in planning, tool querying, and question-answering. These models break down prompts, analyze contexts, and synthesize responses, marking a significant step towards AI systems that can handle complex tasks with improved accuracy. AI systems are increasing their "thinking time" by breaking down complex questions into smaller tasks and integrating knowledge from various sources to make logical connections and synthesize information across different domains.
It’s Not Just “Hype”
Some find comfort in claims that this is all “hype” and that education, for example, hasn’t yet been “transformed” by AI. But as Jerome Pesenti notes, “I don’t think Gartner’s hype cycle applies here. Generative AI is taking education by storm because the consumers, the students, are adopting the technology on their own – not through their schools or their teachers or any kind of edtech B2B offering[99].” ChatGPT, and many other applications, are already displaying impressive abilities as a tutor[100] and homework completion assistant. The current educational system, like many legacy companies, may very well never adapt to the AI world, but that simply means it won’t survive, as it is replaced by new approaches.
And it’s not just in schools. As Armand Ruiz, VP of Product – AI Platform, noted in May of 2024: “AI is not hype. At IBM we've completed 1,000+ Generative AI projects in 2023, prioritizing business applications over consumer ones[101]. These include projects in customer-facing functions; HR; finance; supply-chain functions; IT development and operations; and core business functions[102].” JP Morgan just unveiled IndexGPT, built with the help of OpenAI, to create thematic indexes for investments[103]. The band Washed Out released the first official artist-commissioned music video made with OpenAI’s text-to-video tool[104].
Current abilities alone are beginning to disrupt education, work, and day-to-day life[105]. Even if AI never advances beyond its current capabilities, it will radically change our world[106]. Current AI “tools” can already replace many job functions and generate images and video that are often indistinguishable from reality to the “naked eye.” They have already “demonstrated expert-level performance at tasks requiring creativity, analysis, problem solving, persuasion, summarization, and other advanced skills[107].” Texas has replaced 4,000 human scorers of its STAAR written tests with AIs[108], and others envision robotic snails supporting infrastructure maintenance[109]. The Secretary of the Air Force flew in a plane without a pilot, and the military plans on having more than 1,000 pilotless aircraft in the skies by 2028[110]. AI can automatically generate and test social science hypotheses even though most people do not know how to do that[111]. Companies are investing resources in developing AIs that can engage in autonomous scientific discovery[112].
Human Intelligence Slips
While machine intelligence grows, we also see disengaged students who are disappearing from school, and the scores of humans on knowledge-based exams that we often use to measure our students’ intelligence, our “proxy for educational quality,” are declining while the scores of machines on those same exams rise[113]. AI can grade the STAAR exams many students can’t pass. It can automatically generate and test social science hypotheses even though most people do not know how to do that[114]. We have students entering the workforce without the skills and capabilities employers need, while employers increasingly turn to machines to complete work humans previously did.
The simple reality is that the educational system, largely speaking, has no plan to prioritize developing students’ intelligence so they can live and work in a world of highly intelligent machines. Only a small percentage of the nation’s students are receiving training related to understanding or using AI[115], a technology that will completely define their future. Despite JP Morgan training every new hire on AI, only twenty-six percent of the nation’s teachers have been trained on it[116].
Preparing Students for the AI World vs Using AI in the Classroom
And even training on AI tools can be question-begging, as we must first decide how to best prepare students for the AI World before deciding how to maximize AI tool use in the classroom. If we launch straight into “AI-tool training,” we risk begging the question—assuming we already agree on what success in an AI-saturated society looks like. Yet that vision is precisely what is still unsettled. Do we want graduates who merely operate today’s apps, or citizens who can interrogate algorithms, remake them, and decide when not to use them? Until the destination is clear, prescribing any particular suite of classroom tools amounts to building the ship before choosing the port.
Recent policy work underscores the stakes. UNESCO’s 2024 AI Competency Framework for Students asks ministries first to define the human capacities—ethical reasoning, socio-emotional judgement, systems thinking—that will remain non-automatable, and only then map tools to those aims[117]. Likewise, the OECD’s Future of Skills initiative argues that curriculum must pivot from discrete “how-to-code” objectives toward “learning-to-learn” agility, so that students can ride successive waves of model upgrades rather than drown beneath them[118]. Put differently: decide what flourishing means in an AI world, then reverse-engineer the pedagogy and hardware.
Framing the problem this way surfaces three design choices every system now faces:
1. Shallow vs. deep integration – Will AI remain a productivity add-on (drafting lesson plans, grading quizzes), or will it become an intellectual partner that reshapes inquiry-based learning and student agency? UNESCO’s 2024 guidance on generative AI calls the second path “human-centred augmentation,” but pursuing it demands new assessment models that value judgement, collaboration, and originality over speed.
2. Core literacies vs. just-in-time skills – Teaching prompt-craft today is useful; teaching epistemic humility, data ethics, and multi-modal reasoning prepares students for tools we cannot yet name. Cutting-edge devices will change yearly, but dispositions travel.
3. Equity architecture – Tool-first approaches often lock schools into proprietary ecosystems, widening the gap between well-funded districts and everyone else. Outcomes-first planning begins by asking what ALL students need to thrive, then leverages open resources, accessible interfaces, and culturally responsive examples to meet that bar.
Seen through this lens, “how to best prepare students for the AI World” is not a side issue; it is the curriculum question of our era. Everything else—whether we teach prompt engineering, adopt a new chatbot, or allocate a third of class time to AI studies (as some economists now urge[119])—flows from that upstream decision. Clarify the destination, and coherent choices about tools, teacher training, and policy will follow; skip the step, and education risks chasing every shiny model while leaving learners unmoored in the very world we claim to be preparing them for. Professor Rose Luckin recently noted:
The Critical Question for me is: Are we building students' capacity to thrive in an AI-integrated world, or are we simply making traditional education more efficient? The technology is advancing rapidly, but we need to be intentional about whether we're using AI to enhance 20th-century educational models or to prepare students for 21st-century realities. The stakes are too high to simply optimize the status quo[120].
Preparing students for this world should not be a controversial idea. It’s no different from preparing students in the 19th century, at the end of the agricultural era, to thrive in the 20th century industrial era. We stopped teaching young people how to use oxen and horses to plow fields and taught them how to use steam-powered tractors and threshing machines. We should first think through what and how we want to teach to prepare students for an AI world before we rush into using AI as much as possible in the classroom. We certainly don’t want to use AI to help students learn how to use oxen and horses to plow fields.
In a recent article in Forbes, Allison Salisbury notes that workers earning less than about $38,000 a year are 14 times more likely to lose their jobs to all types of automation. In response, she’s seeing a renewed focus on durable skills from large employers. Durable skills are those that she describes as “less about what you know, and more about how you learn and work in the world. They involve things like self-management, working with others, and generating ideas. They cross every imaginable career, and as the name implies, they’re skills that should serve you well no matter what AI-fueled world ultimately serves up[121].”
Artificial Intelligence as More than A Technology
Unlike any tool that has come before, AI is constantly evolving and improving itself. Fed with vast amounts of data, AI systems uncover hidden patterns, make predictions, and generate insights that surpass the limits of human cognition. They learn from every interaction, every success, and every mistake, growing smarter and more capable with each passing day.
Through the magic of natural language processing, it can understand and communicate with us in our own words, providing real-time support and engagement across every sector. Whether it's a virtual assistant helping you navigate a complex healthcare system, a financial advisor optimizing your investment portfolio, or a tutor helping you understand your math, AI is there, ready to lend its unique blend of intelligence and adaptability. And unlike a Smart Board, it can make decisions, solve problems, and, with emerging AI agents, act on its own. Want to return your shoes? Just tell the AI and it will know to look up where you got them in your emails, find the return instructions, and print the shipping label[122]. We’ve never had a technology like this, and they are certainly more than paper mills.
Scholars and other leaders are already arguing that simply thinking of this as a technology is inadequate. Wharton professor Ethan Mollick has referred to it as a non-sentient “alien mind[123]” and argues we should “treat AI as if it were human because, in many ways, it behaves like one.[124]” Mustafa Suleyman argues that thinking about it solely as a technology fails to capture its abilities. Instead, he argues, “I think AI should best be understood as something like a new digital species…I predict that we'll come to see them as digital companions, new partners in the journeys of all our lives.”
As acknowledged by Mollick, this perspective highlights the need to be cautious about anthropomorphizing artificial intelligence systems. While AIs may exhibit human-like behaviors and capabilities in many ways, it is crucial to recognize that they are fundamentally a different form of intelligence, one created by human ingenuity rather than a biological, sentient being. Anthropomorphizing AI risks underestimating the profound differences between these "alien minds" and human cognition. At the same time, failing to appreciate AI’s advanced capabilities and treating it merely as inanimate technology is also misguided. The truth likely lies somewhere in the middle: AIs are a new kind of entity, one that blurs the line between human and machine. As AIs continue to evolve and become more integrated into human lives and endeavors, developing an appropriate understanding and set of ethical principles for relating to artificial intelligences will be crucial. While some may consider such advances to be futuristic science fiction, the level of technology we have is already impressive.
Artificial Intelligence as More than “EdTech”
The framing of AI merely as an "edtech" product significantly underestimates its revolutionary potential and explains why many academic institutions are struggling to adapt appropriately. Traditional educational technology tools are static instruments with defined parameters - they're created to solve specific problems within established educational frameworks. But AI represents something fundamentally different: an evolving, adaptive intelligence that continuously improves through interaction. By treating AI as just another tool in the edtech toolkit (like smart boards, learning management systems, or digital textbooks), academic institutions are applying outdated mental models to a qualitatively different technology.
Unlike traditional edtech that simply executes commands, AI can actively participate in the learning process - challenging assumptions, offering new perspectives, and adapting to individual learning styles. The educational impact of AI extends far beyond formal learning environments, blurring the boundaries between classroom and real-world learning and creating opportunities for continuous, contextualized education. AI doesn't fit neatly into existing academic silos; its true potential emerges when integrated across disciplines, requiring institutional structures that facilitate collaboration. As Mollick suggests, we're dealing with something that behaves in human-like ways while remaining fundamentally different - an "alien mind" that requires new frameworks for understanding and integration. The most forward-thinking academic institutions will move beyond viewing AI as merely another educational technology product and instead recognize it as a transformative force that requires reimagining core educational practices, institutional structures, and even the fundamental relationship between humans and knowledge.
Consciousness
Joseph Reth of Lossless Research is working on creating artificially conscious systems to unravel the mysteries of human consciousness. The implications of such technology would be profound. According to Reth, "Artificially conscious systems will be able to engage with the world in a fundamentally different way, driven by an intrinsic desire to learn, explore and understand, rather than just optimizing for specific tasks." Transitioning consciousness from a philosophical concept to a scientific pursuit represents a significant shift in thinking. "The mission to construct artificially conscious systems is incredibly meticulous," Reth explained. "We're constantly grappling with the challenge of ensuring our systems can perceive the world in a manner that mirrors human consciousness while also navigating critical ethical considerations." Despite the challenges, Reth remains optimistic: "I believe artificial consciousness is possible, and in the next few years, we will see AI systems that are serious candidates for consciousness[125]."
So, according to Hinton, the question is more when AI will exceed human intelligence in all domains, not if it will happen. These advances, combined with accelerating developments in virtual and augmented worlds, 5/6G, blockchain, and synthetic biology, are leading us into a “social-technical revolution that will dramatically change who we are, how we live, and how we relate to one another[126].” The world of 2044, when our students are adults, will be nothing like the world of 2024. Nothing at all.
AI, Future of Humanity, and the Future of Knowledge
In the coming era of exponential technology, artificial intelligence is poised to become not just a tool but a co-evolutionary partner in human destiny – an extension of our minds and bodies that could fundamentally redefine what it means to be human. Visionaries like Ray Kurzweil foresee a technological singularity where the boundary between man and machine dissolves: tiny neural nanobots may connect our neocortex to a cloud of intelligence, merging biological and artificial thought into a unified whole[127]. Even today, Elon Musk’s Neuralink aspires to this symbiosis – implanting brain chips so that humans won’t be left behind by superintelligent AI[128]. Such integration promises to amplify our capabilities beyond natural limits, opening doors to realms of cognition and creativity previously unimaginable. With AI augmenting our brains, we could experience an expansion of consciousness – Kurzweil suggests that a connected brain could vastly “expand our palette for emotion, art, humor, creativity,[129]” leading to new depths of genius and individuality. At the same time, this intimate fusion of human and machine forces us to grapple with profound questions: when our thoughts are co-processed by algorithms, or our memories backed up to the cloud, where does “self” end and technology begin? The very notions of identity and consciousness blur in this brave new reality – are you still you if your mind is partly digital?[130]
Philosophers and futurists from Yuval Noah Harari[131] to Max Tegmark[132] and Nick Bostrom[133] have explored these transformative possibilities, debating whether AI will erode our humanity or elevate it to new heights. Harari, for instance, envisions Homo Deus – humans ascending to godlike status by harnessing biotechnology and AI. Indeed, transhumanist thinkers embrace the idea that AI could usher in an era of augmented beings and even digital immortality: by merging with AI, humans wouldn’t be supplanted but transcended, perhaps attaining ageless, omniscient, near-divine capacities. This provocative vision of human-AI convergence is imbued with both poetic optimism and philosophical depth. It imagines a future in which AI is not an alien antagonist but the next stage of human evolution – a digital symbiont entwined with our very essence, amplifying our intellect and spirit, and propelling us into uncharted realms of consciousness and meaning. Each step toward this future challenges us to reimagine ourselves, as humanity stands on the cusp of becoming something more than human, hand in hand with our own intelligent creations.
The moment neural nanobots splice cloud cognition into our synapses, knowledge ceases to be something we acquire and becomes an atmosphere we breathe. Kurzweil’s forecast of cortex-to-cloud links converges with today’s rapid AI breakthroughs to create a living exocortex in which discovery and learning unfold at the speed of thought. Agentic research suites such as Microsoft Discovery already automate hypothesis generation and simulation, compressing years of benchwork into days[134], while physics-informed generative models at Cornell sketch novel materials as easily as we now sketch ideas[135]. A new “AI-for-Science” paradigm is emerging in which algorithms jointly reason, experiment, and iterate, recasting the scientist as conductor of an orchestra of synthetic intellects[136]. On the learning front, intelligent tutors that map every learner’s quirks are shifting education from broadcast to whisper, leveling obstacles for dyslexic and neurodiverse students alike[137] and forecasting bespoke learning paths before a question is even asked[138]. When these systems fuse with brain-computer interfaces[139] and the ambient datasphere envisioned by Tegmark[140], research and schooling give way to continuous co-creation: learners converse with an ever-present cohort of AI “co-selves,” and the frontier of human understanding expands not by passing books hand to hand but by streaming insight neuron to neuron—rendering the library, the laboratory, and perhaps even language itself into relics of a pre-merged mind.
Policymakers and Educators Respond
Policymakers are certainly starting to pay attention to AI’s growing capabilities. In early November 2023, an “AI Safety Summit” involving industry leaders and government officials kicked off in the UK[141] and the U.N. launched a high-level body on AI[142]. In early December, the EU adopted the EU AI Act[143], which was approved by its members in May[144]. In the US, on October 30th, 2023, the Biden administration issued a comprehensive Executive Order[145] that covered many areas related to concerns about AI that have been expressed in numerous meetings[146] with lawmakers and in Congressional testimony. In April 2024, a new Artificial Intelligence Safety and Security Board was also established by the Department of Homeland Security to evaluate the utilization of AI technologies in critical infrastructure systems.
A bipartisan group of Senators released long-awaited guidance on an AI legislative roadmap – Driving US Innovation in Artificial Intelligence[147] – for the fall of 2024. Their priorities include:
· Boosting funding for AI innovation
· Tackling nationwide standards for AI safety and fairness
· Using AI to strengthen U.S. national security
· Addressing potential job displacement for U.S. workers caused by AI
· Tackling so-called “deepfakes” being used in elections, and “non-consensual distribution of intimate images”
· Ensuring that opportunities to partake in AI innovation reach schools and companies
Upon returning to office, President Trump quickly moved to dismantle the AI protections and policies established by the Biden administration, signaling a shift towards unregulated AI development. Trump's actions include revoking Biden's executive order on AI and initiating the development of a new AI action plan. The Trump administration's approach favors less regulation to encourage innovation, addressing problems as they arise.
Key aspects of the Trump administration's reversal of AI policies:
Rescinding Biden's AI Executive Order. Trump revoked the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" executive order issued by President Biden in 2023. This action eliminated the safeguards and requirements for responsible AI development and deployment that the Biden administration had put in place.
Halting Implementation of AI Guidance. Trump's rescission extended to the National Institute of Standards and Technology’s AI Risk Management Framework and the Office of Management and Budget’s (OMB) guidance on the management of AI systems used by federal agencies.
New AI Action Plan. Trump directed the development of a new AI action plan to promote American AI dominance. This plan is to be developed within 180 days by White House officials, including David Sacks, the special advisor for AI and crypto. The aim is to create AI systems "free from ideological bias or engineered social agendas."
Revising OMB Memoranda. The Director of the OMB, in coordination with the Assistant to the President for Science and Technology, is required to revise OMB memoranda on federal AI governance (M-24-10) and federal AI acquisition (M-24-18) to align with the new policy.
Emphasis on Deregulation. The Trump administration is emphasizing minimal barriers to foster AI innovation and maintain U.S. leadership in the field. This approach involves close collaboration with tech leaders, including the appointment of a tech investor and former executive as the White House AI & Crypto Czar.
Reliance on the Private Sector. Trump's policy resets AI to rely on the private sector. The lifting of Biden’s guardrails and the six-month wait for a new action plan have companies in areas ranging from health care to talent recruitment wondering how to proceed.
The Trump administration's approach has raised concerns about the potential for unchecked AI development and deployment, especially since we could see the development of human-level+AI that could set its own goals within the time-frame of Trump’s presidency. Without strong guardrails, AI tools could create real-world harms in sensitive decision-making processes, such as hiring, lending, and criminal justice. Some argue that the emphasis on speed and deregulation could lead to unfair or inappropriate AI applications.
In response to the federal government's policy shift, state legislatures are considering AI legislation to address issues such as chatbots, generative AI transparency, and the energy demands of AI development.
States such as Pennsylvania are getting involved in AI regulation and policymaking.[148] At the local level, Seoul[149], New York City[150], and Singapore[151] (a “city-state” that just rolled out a learning bot that supports student-teacher-AI collaboration for 500,000 students[152]) have launched major initiatives around AI.
At the education level, seven states have issued AI Guidance reports, which we summarized and analyzed in a recent report of our own[153]. A few districts have followed with their own guidance documents, including the Santa Ana Unified School District (CA), whose document we had the opportunity to draft.
The Need to Support Educational Institutions
Although there has been an exponential rise in the intelligence of machines, pushing us toward human-level robotics in both mind and body, as well as efforts to manage and integrate it, we do not see “colossal firepower,” combined with astronomical financial investments, being devoted to the development and amplification of intelligence in humans in the ways needed for our students to be able to productively collaborate with people and intelligent machines. Even under the Biden administration, the few million dollars committed to helping schools adapt to this era paled in comparison to the hundreds of billions that have been invested in creating it. Now, Trump is gutting the Department of Education, the institution through which most money usually flows.
Many of America’s tuition-driven colleges and universities, which are also under fire due to Trump administration cuts, will also need support, not only to help integrate AI into their schools but to engage fundamental questions about how education will need to be restructured for a world where machine intelligence is at least competitive with human intelligence and machines will be able to do many current jobs for a fraction of the cost to an employee.
A growing body of evidence suggests that the Class of 2024-25 is colliding with a shrinking rung on the career ladder: AI now performs many of the rote tasks that once justified hiring juniors. LinkedIn analysis notes that “true” entry-level listings made up barely 2.5 % of U.S. tech postings by April 2024—down sharply as firms lean on generative-AI tools and senior staff augmented by co-pilots instead of onboarding novices[154]. Meanwhile, a June 2025 Business Insider investigation finds 41.2 % of recent graduates underemployed, with economists warning that AI-driven automation is erasing starter roles in fields from communications to paralegal work and leaving young job-seekers “barreling toward a career cliff[155].” The British telecom company BT Group is considering even deeper job cuts as automation accelerates[156].
While these bills will not pass in the current political climate, they are worth considering. On May 23, 2024, Senators Jerry Moran (R-KS) and Maria Cantwell (D-WA) introduced the NSF AI Education Act of 2024[157], a bipartisan bill that aims to bolster the U.S. workforce in AI and related emerging technologies like quantum computing.
The key provisions include:
■ Authorizing the National Science Foundation (NSF) to award scholarships and fellowships at various levels for studying AI, quantum computing, and blended programs, with a focus on fields like agriculture, education, and advanced manufacturing.
■ Establishing AI "Centers of Excellence" at community colleges to collaborate with educators on developing AI instructional materials.
■ Directing NSF to develop guidance and tools for introducing AI into K-12 classrooms, especially in rural and economically disadvantaged areas.
■ Launching an "NSF Grand Challenge" to devise a plan for educating at least 1 million workers in AI-related areas by 2028, with emphasis on supporting underrepresented groups like women and rural residents.
■ Providing grants for research on using AI in agriculture through land-grant universities and cooperative extension services.
The overarching goal is to expand educational opportunities and workforce training in AI from K-12 through graduate levels to maintain U.S. leadership in this transformative technology across various sectors.
Universities and K-12 Schools Start to Act
US Universities
Across U.S. higher education, whole systems and flagship campuses are racing to embed AI in everything from coursework to statewide research infrastructure. In California, the 23-campus CSU system just unveiled a system-wide “AI-Powered Initiative” that gives its 460,000 students and 63,000 faculty access to generative tools and professional-development modules[158], while the University of California is backing large-scale basic science with $18 million in new grants and institutes such as UC Riverside’s RAISE, which spans robotics, cybersecurity, and social-impact research[159]. New York’s SUNY network has adopted the STRIVE strategic plan and, with a fresh $5 million appropriation from Governor Hochul, is creating eight campus-based Departments of AI & Society and a shared “Empire AI” resource to democratize computing power for students across the state[160].
George Mason University is positioning itself as a hub for “inclusive, responsible AI,” blending research, workforce skilling, and community engagement from its Northern Virginia campus[161]. Florida State’s interdisciplinary data-science program now anchors an annual AIMLX expo that spotlights classroom and industry AI projects across the Southeast[162]. Penn State’s AI Hub unites ethics, education, and outreach through its spring “AI Week,” and the University of Pennsylvania’s month-long “AI and Human Well-Being” series convenes seminars and hackathons across medicine, engineering, and the humanities as part of its AI Hub[163]. Ohio State recently announced that every student will be required to use AI in class[164]. Wharton recently announced Wharton Human-AI Research[165]. The University of Mary Washington is working on a Center for the Humanities and AI.
US K-12
By mid-2025, the United States has experienced a rapid shift from ad-hoc experimentation to formal rule-making around classroom AI. A running tracker from TeachAI shows that 26 state education agencies now publish stand-alone AI guidance or frameworks, up from just six a year earlier[166].
At the local level, the Center on Reinventing Public Education estimates that roughly 20% of the nation’s 13,000 school districts have approved AI-use or procurement rules[167], with momentum accelerating as generative tools proliferate. Federal attention is reinforcing the trend: a White House initiative launched in spring 2025 is pairing competitive grants with public-private toolkits to push AI literacy into every classroom by 2026, prompting districts to codify acceptable-use and data-privacy expectations sooner rather than later[168].
Districts that started early are now seen as national templates. California’s Desert Sands USD posts bilingual AI guidance documents that embed “human-first prompting” checklists and professional-learning modules for staff, all tied to its Portrait of a Graduate competencies[169]. Ohio’s Department of Education and Workforce, working with InnovateOhio and aiEDU, released an AI Toolkit and a statewide strategy that require every district to name an AI lead and offer staff training by fall 2026[170]. Michigan Virtual’s new partnership with aiEDU is running a year-long train-the-trainer program expected to coach 500 teachers and seed classroom pilots across all 900-plus Michigan districts[171].
Alternative school structures are also emerging. Alpha School’s microschools in Texas and Florida compress core academics into two AI-tutor-driven hours, freeing afternoons for projects and mixed-age mentoring[172]. The Innovation Academy of Excellence, opening this August on the Tallahassee State College campus, is billed as Florida’s first “AI-integrated” middle school, with ethical AI tools woven into every STEM-rich lesson.
Action Outside the US
The UAE is making artificial intelligence a compulsory subject from kindergarten through Grade 12 in 2025-26, pairing age-appropriate coding with explicit ethics lessons for roughly 400,000 public-school pupils[173]. In the United Kingdom, the Department for Education has issued detailed “Generative AI in Education” safety guidance and a free professional-development toolkit so teachers can embed large-language-model tools while protecting data and academic integrity[174]. Singapore’s new “AI for Fun” electives, five- to ten-hour hands-on modules that will be available on every primary and secondary campus from 2025, already give more than 50,000 students a year a playful introduction to machine learning[175]. South Korea is scaling a nationwide network of AI high schools and rolling out adaptive digital textbooks to personalize learning and ease cram-school pressure[176]. Japan’s updated school guidelines come with new textbooks that encourage practical generative-AI projects while warning about bias and misinformation[177]. Australia’s June 2025 national program will weave baseline technical and ethical AI skills into every classroom, backed by university-industry partnerships and teacher upskilling[178]. In Canada, the Alberta-based Amii institute is distributing free K-12 AI-literacy kits, coaching sessions, and classroom visits to help teachers integrate AI “across the timetable[179].” India’s CBSE board, which oversees more than 20,000 schools, has told classes 6-12 to adopt AI skill modules immediately so that students graduate with vocational credentials in areas such as data science and coding. Together, these initiatives signal a shared global judgment: helping students co-create with, and critically oversee, AI systems is now a core duty of general education, not a niche tech elective.
Human Deep Learning to Prepare Students for an AI World
This book emphasizes the importance of changing educational approaches to incorporate academic human deep learning, highlighting the value of learning fundamental knowledge, cultivating expertise, and developing critical thinking skills rooted in real-world experiences. This can support the “social basis of intelligence and human development,” the “interpersonal activity that is essential to human thinking…as advanced human intelligence...the type of intelligence that we need as we progress through the 21st century, an intelligence that is human that emanates from our emotional, sensory, and self-effective understanding of ourselves and our peers[180].” As Stanford computer scientist and “Godmother” of AI Fei-Fei Li explains:
Simply seeing is not enough. Seeing is for doing and learning. When we act upon this world in 3D space and time, we learn, and we learn to see and do better. Nature has created this virtuous cycle of seeing and doing powered by “spatial intelligence.”… And if we want to advance AI beyond its current capabilities, we want more than AI that can see and talk. We want AI that can do… As the progress of spatial intelligence accelerates, a new era in this virtuous cycle is taking place in front of our eyes. This back and forth is catalyzing robotic learning, a key component for any embodied intelligence system that needs to understand and interact with the 3D world[181].
Shouldn't "human education" create a virtuous cycle of seeing, learning, and doing?
And this is not only Fei-Fei Li’s idea. The world’s leading computer scientists (Hinton, LeCun, Bach, and others, as will be discussed), neuroscientists[182], and psychologists[183] believe we need computers to develop “worldly knowledge” in order to develop “common sense[184]” and, eventually, higher-order reasoning[185] so they can plan[186], predict[187], and talk about the world[188]. Planning, predicting, and developing higher-order reasoning do require that machines anticipate potential future possibilities and make choices from among them, the same way that humans make decisions[189].
These leading scientists believe it is possible to incorporate into machines the human “lived experience” that children develop through “active, self-motivated exploration of the real external world[190],” making it possible for them to potentially achieve human-level intelligence. Some (Altman, Murati, Sutskever, Bach; cited elsewhere throughout this book) believe this could be accomplished through current[191] multi-modal models[192] supported by text, audio, and video (vision)[193], while others (LeCun and Choi, also cited throughout this book, and Yiu et al.[194]) believe a greater focus on new, objective-driven “world models” is needed.
A shift toward greater experiential learning will be a challenge, as education remains grounded in educational theories that became dominant at the turn of the 20th century, more than 100 years ago: learning facts and knowledge, and then, beginning in the 1990s, analyzing and thinking about that knowledge. Because of this gap between how students are taught and how the world works, students are ill-prepared for a rapidly changing world, and they have been for quite some time[195].
We only have ourselves to blame for this situation. While researchers advanced intelligence in machines, we did not pay attention or begin efforts to change. While scientists and industry built deep learning models that enabled machines to learn largely on their own, combining initial supervised learning with self-learning, fine-tuning, and reinforcement learning with human feedback[196] (in a classroom, we would think of this as a “guide on the side[197]”), education largely continued along the trajectory established in the 1920s and reinforced by A Nation at Risk in the early 1980s, relying on a “supervised,” sage-on-the-stage[198] teaching process that fills students’ heads with predetermined content, insists they reorganize it in some manner, and then tests to assess it. As a result, we keep teaching students to do what machines can do best and will forever do better than we can: learning a lot of content, analyzing it, and passing tests. Almost ironically, some students pass these tests simply by predicting the answer the teacher expects and parroting it back without necessarily understanding it, something critics are quick to criticize today’s AIs for, even though the machines’ abilities to “understand” are arguably starting to develop and almost certainly will exist in the future[199].
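For readers who want a concrete picture of the machine-learning analogy above, here is a minimal, illustrative sketch, not any lab’s actual training recipe: a toy “model” (just a dictionary of word scores) goes through a self-directed learning pass over raw text, a small supervised fine-tuning pass on curated examples, and a final round of feedback in which a human “guide on the side” rewards or penalizes its answers. Every function, value, and name here is invented for illustration only.

```python
# Toy illustration of the training stages referenced above. The "model" is just
# a dictionary of word scores; nothing here resembles a real neural network.

def self_directed_learning(corpus):
    """Self-learning pass: the model builds word scores from raw text on its own."""
    model = {}
    for word in corpus.split():
        model[word] = model.get(word, 0) + 1
    return model

def fine_tune(model, labeled_examples):
    """Supervised fine-tuning: a small set of curated examples nudges the scores."""
    for word, boost in labeled_examples:
        model[word] = model.get(word, 0) + boost
    return model

def feedback_round(model, prompts, rate_answer, rounds=3):
    """Human-feedback pass: a 'guide on the side' scores each attempt,
    and the model shifts toward the answers the guide prefers."""
    for _ in range(rounds):
        for prompt in prompts:
            answer = max(model, key=model.get)    # the model's current best guess
            reward = rate_answer(prompt, answer)  # the human guide rates it
            model[answer] = model.get(answer, 0) + reward
    return model

if __name__ == "__main__":
    corpus = "students learn by doing and by asking questions about the world"
    model = self_directed_learning(corpus)
    model = fine_tune(model, [("doing", 2), ("questions", 2)])
    model = feedback_round(
        model,
        prompts=["how do students learn best?"],
        rate_answer=lambda prompt, answer: 1 if answer in ("doing", "questions") else -1,
    )
    print(sorted(model.items(), key=lambda item: -item[1])[:3])
```

The point of the sketch is the shape of the process, not the code itself: most of the “learning” happens on the model’s own pass through the material, and the human’s role is to curate a few examples and then coach from the side.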
Today, however, the stakes are much higher, as the number of jobs available to high school and college graduates who have not developed higher-order thinking skills will be very limited. A substantial number of lower-level administrative and factory/warehouse jobs will be lost to automation, and even mid-level management jobs will be at risk as machines learn how to reason and plan. The demand for employees who are capable of higher-order thinking and have the skills needed to interact with both humans and intelligent machines[200] will be difficult to meet without fundamental adaptations by the educational system[201]. Ultimately, teams solve hard problems[202], and now those teams include AIs.
Photo from Fei-Fei Li’s talk cited above.
Students who are educated through the deep learning approaches we advocate, combined with AI literacy and experience with human-AI collaboration, will be better prepared for both the present and the future. In the future, 60-100% of jobs will involve using AI tools, and many current jobs could be entirely replaced by AIs. To prepare students for that future, education can no longer ignore these bots that “have evolved from being topics in intellectual discussions to challenging realities[203]” and instead must help students develop the metacognitive capacities and AI skills needed to interact with them.
Students (and all of us) need to strengthen our abilities to collaborate with one another, both as co-pilots and as all “forms” of co-workers. At the current level of technological advancement, AIs depend on us to co-pilot with them, but as they become more autonomous and develop their own agency, the same way we hope our students will, we will also need to communicate and collaborate with them[204].
The book stresses the importance of prioritizing academic programs that foster human deep learning and integrate AI tools, particularly in ways that encourage collaboration between humans and AI agents to exchange, enhance, and preserve knowledge. This kind of human-computer interaction will elevate human intellectual abilities, helping individuals flourish even in a future where machines are likely to surpass human cognitive capacities. The book suggests that these advancements in AI may occur quickly, potentially by the time today's ninth graders graduate from high school.
In addition, schools need to nurture fundamental human qualities such as courage, kindness, love, and patience, traits vital for success in a world dominated by highly intelligent machines. As AI continues to evolve, the landscape of civilization is transforming rapidly, and AI may soon surpass human capabilities in many areas. As a result, traditional jobs in fields like law and medicine may become obsolete as AI performs many of these tasks more accurately and efficiently. This shift calls for a complete rethinking of educational approaches, moving away from outdated systems designed for another era. Without such change, we risk utilizing AI tools merely to scale educational techniques from the early 20th century that are ill-suited for a future shaped by intelligent machines. To thrive, individuals must learn to adapt independently in a world of constant change, where AI challenges both human employment and intellectual dominance.
The book underscores the need to prioritize academic programs that cultivate deep human learning while also integrating AI tools in ways that allow humans and AI agents to preserve, exchange, and improve knowledge. By promoting such human-computer interaction, we can amplify every facet of human intelligence, ensuring that people continue to thrive even though machines may soon surpass our intellectual capabilities in virtually all domains—possibly within just a few decades, or even by the time today’s first graders finish high school. In addition, we must recognize that competing directly with AI is futile: as AI’s proficiency in tasks like fact-checking grows, the job market for human AI fact-checkers will likely vanish, forcing us to focus on the uniquely human strengths that machines struggle to replicate.
These strengths include qualities such as courage, kindness, love, and patience—traits that will be essential for human success regardless of AI’s progress in mimicking or exceeding our intelligence. To foster these attributes effectively, we must overhaul the outdated “grammar of education,” rather than merely scaling up early twentieth-century methods with modern technology. Indeed, clinging to antiquated approaches is inadequate for a future in which we live and work alongside multiple machines that possess superior intelligence and where the pace of change is relentless. Only by reimagining education can we equip individuals with the capacity for continual self-directed learning, empathy, and resilience—elements that remain irreplaceably human.
We recognize that educational leaders who push these deep learning approaches, grounded in experiential and self-directed learning, and who help students learn with AI tools are often met with both resistance (“I have no voice in the curriculum and feel that all the teacher-autonomy is being choked out of us[205]”) and lip service. Students and parents are left to hold bake sales to fund important deep learning opportunities like debate teams, bands, and robotics clubs[206], while funding is invested in traditional classroom approaches that focus on learning content and do not correlate with workplace preparation. Of course, this is no different from how the original deep learning computer scientists had to fight for funding or use their own resources to advance their deep learning approaches to AI[207], but as they learned, resistance is not futile, and at the end of the book we outline practical steps for positive change.
Jerry Almanderez, Superintendent of the Santa Ana Unified School District, has said it is time to imagine the unimaginable for education. It is time for that. It is time for what we call a “moonshot,” not just for AI[208] but for education as well[209].
Chapter Overview
The book begins in Chapter 2 by looking at some of the fundamental ideas and “grammars” that have shaped education over the past 100 years, showing how early ideas that may have been appropriate for earlier times persisted despite a variety of social changes, including revolutionary technological advancements that reshaped society and the workplace. There, we also begin making our case for helping educators adapt to this era[210].
In Chapter 3, we discuss current AI capabilities, including the fact that AIs can already learn information by reading, listening, looking at photos, and watching video as the world moves from language models to multimodal models, which now represent all of the current frontier systems. AI can already analyze a significant amount of knowledge faster than any human, and the potential development of machines with at least human-level intelligence over the coming decades, or sooner, will have a significant impact on our educational system and society.
The chapter includes a discussion of how scientists believe computers need to incorporate the lived experience of humans to develop human-level intelligence. These arguments draw on concepts borrowed from cognitive science[211] that stress the importance of understanding objects with three-dimensional properties[212], scenes with “spatial structure and navigable surfaces[213],” and agents with beliefs and desires[214], which together replicate what the reader might think of as “social knowledge[215].” Like empiricist philosophers[216] and educators who support more project-based, experiential approaches to learning, computer scientists who support the development of objective-driven world AI models argue that to achieve human-level intelligence, computers must interact with the environment, learn from it, and build models of the world.
In Chapter 4, we review the major models that are publicly accessible: ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google), Llama (Meta), and Grok (X), as well as applications in semantic search such as Bing and Perplexity. We also discuss how to download open models (LM Studio) and access them through tools such as Groq.com. Finally, we look at popular integrations in the application layer, such as Copilot, school “wrappers” (Magic School, SchoolAI, Flint, Khanmigo), and others. We conclude with a review of image and video generators.
In Chapter 5, we concentrate on challenges the educational system is dealing with, such as those brought on by the widespread consumer adoption of AI, especially generative AI. We also look at non-AI-related concerns, as any suggestion for systemic change must consider how the proposed change will affect all of the challenges schools are facing.
In Chapter 6, we unpack how our deep learning approach to education will help students prepare to thrive in this world. Deep learning in education is an instructional approach developed in response to the changing needs of the information economy. It focuses on developing students' higher-order skills, such as critical thinking, problem solving, collaboration, and communication, that allow students to learn on their own and strengthen their intelligence. It provides students with some supervised learning but focuses on developing their capacities to learn independently, with “fine-tuning” support and feedback from other humans and even AIs. These are abilities and skills humans have needed since at least the development of the information economy, and they are now critical for adapting and thriving in a world of accelerating machine intelligence that, in completely unpredictable ways, may render knowledge or skills learned today irrelevant tomorrow.
Will Richardson explains:
I think one thing we're going to have to come to grips with in education is that no amount of teaching is going to prepare children for the world that's coming at them. I mean, that's always been true to some extent. But it feels even more the reality now.
Kids are going to have to learn their way through their lives. Change is only going to speed up, and if they're relying on what they've been "taught," they'll soon find themselves struggling to keep up and to make sense of what's happening.
And all of that will be centered on inquiry. Their learning will be driven by asking relevant questions and having the dispositions, skills, and literacies to suss out the answers.
I'll say it again: If we're putting kids out into the world who are waiting to be told what to learn, when to learn it, how to learn it, and how to be assessed on it, good luck to them.
As the philosopher Eric Hoffer is thought to have written, "Learners will inherit the earth, while the learned will be beautifully equipped for a world that no longer exists."
And it's not just individual learning. We need to learn together, both as individuals and as societies. As change buffets all of our lives, our beliefs, worldviews, and our values need to be relearned collectively. And again, inquiry is the path forward.
Unfortunately, most schools are still teacher-centric institutions that see their role as delivering an education. What we need are learner-centric spaces where adults and children do the daily work of interrogating the current reality and creating paths forward that serve the best of what it means to be human and the health of all living things on the planet[217].
We know the AIs will not stop learning. Rose Luckin adds:
(W)e need to remember that artificial intelligence does not get tired of learning, and the fact that AI is always learning means it is always improving. We must therefore accept that we must continually learn. Learning is the holy grail of success and intelligence. If we are good at learning, the world is our oyster, and we can continually progress[218].
We already have the methods we need to tackle this challenge, and we can start immediately. For example, imagine the following scenario developed by ChatGPT-4:
The Solar-Powered Stories Project
Objective: Understand the historical evolution of energy sources and their representation in literature, emphasizing the transition to sustainable energy like solar power.
Historical Context: Students begin by exploring the timeline of humanity's energy sources, from the discovery of fire and the use of windmills and water wheels to the coal-driven industrial revolution and the present-day shift toward renewables.
Literary Exploration: Students read excerpts from literature across different eras that reference or are influenced by the predominant energy sources of their times. For instance, they might read descriptions of coal-driven London in Charles Dickens' novels or references to windmills in Don Quixote.
Creative Writing: Drawing inspiration from their historical and literary studies, students are tasked with writing a short story or a scene set in a future solar-powered world. How do these energy sources influence daily life, culture, or even conflicts in their imagined world?
School's Solar Potential: As a practical application, students assess areas in their school that could theoretically benefit from solar energy. They don't need to go into technical details but should think about how such a transition might influence school life, routines, and even the local community.
Local Connection: The class invites a local historian or author to discuss how the environment and technology have influenced literature and historical narratives in the region.
Presentation: Students share their solar-powered stories, highlighting historical and literary influences. They also present their vision for a solar-powered school, discussing potential changes in school life and routines.
Throughout the project, the teacher ensures that students are making connections between history, literature, and the potential future of solar energy, emphasizing critical thinking and creativity.
Image generated by Midjourney; the full lesson plan was inserted as the prompt, asking it to generate an image based on the lesson.
In Chapter 7, we articulate how experience with deep learning will best prepare students to amplify their intelligence by working collaboratively with both machines and people throughout the deep learning process. This approach is grounded in the work of educational theorists and researchers, philosophers, psychologists, neuroscientists, and computer scientists who have spent decades studying how humans learn and who prioritize immersive experiences in the real world, including realistic virtual worlds. Beyond what we hope are valuable suggestions for helping to develop the capacity of humans to thrive in our new world, we hope this book contributes to the synthesis of knowledge related to learning across these fields and can serve as a springboard for redefining contemporary approaches to learning.
In Chapter 8, we outline reasons why integrating deep learning across a school in a way that augments human intelligence with AI will help reduce some of the challenges schools are facing. Given the significant efforts and investments being made in the development of artificial intelligence, we also need to concentrate on creating a symbiotic relationship between humans and AIs. To do this, we need to create an environment where students, faculty, and staff are trained to use AI, including potentially superintelligent AI, as collaborators that can enhance their own cognitive capacities and productivity. This approach aims to fortify human intelligence and amplify it through collaboration with AIs and the larger world, thereby unlocking unprecedented possibilities for the advancement of human intelligence in a world preoccupied with the development of intelligence in machines.
As much as we believe that focusing on human deep learning will strengthen our cognitive capacity, we do not believe it is likely that we will be able to compete with AIs or otherwise retain enough unique attributes to win a long-term competition against these machines. Eventually, there is a good chance that machines will meet or exceed human-level capabilities in all domains in which humans are intelligent, and the rate of change is simply too fast; even capabilities we thought were “uniquely human,” such as empathy[219] and creativity[220], are now exhibited to some degree by machines. In the same way that humanity has adapted to every previous technology, we must learn how to use AIs to enhance our capabilities.
Students who regularly engage in deep learning and develop skills in communication, collaboration, creativity, and critical thinking, among others, will be best prepared for a world where they will thrive based on their continued interaction with both AIs and humans. And just as machines have learned how to learn, these approaches will enable students to learn how to learn and to thrive in a constantly changing and unpredictable world.
In Chapter 9, we review some of the most influential guidance related to the adoption of AI policies in K-12 schools.
In Chapter 10, we offer practical advice for educational administrators who want to start incorporating deep learning and AI into the classroom immediately in a way that improves human capabilities without magnifying current harms. We, and other experts, recognize that this will require radical changes in both what students learn and how they are taught. There is no better time to begin than now, especially since these changes will be needed even if the technology does not move beyond where it is today. Public education needs a “moonshot,” and it’s a moonshot that must succeed. We can’t have regular daily announcements of billions of dollars being invested in AI[221] while we don’t have any announcements of even millions being invested to help schools and students prepare for the AI World.
In Chapter 11, we outline a longer-term vision for the kind of revolutionary change we believe is needed. Without this type of change, we will simply use our new AI magic to reify and hyperscale an educational system that was designed for society 100 years ago and magnify the most significant problems that education is facing. As Harvard professor Chris Dede notes, “Changing the outcomes of education is even more important because otherwise we're training people to think in ways that are not necessary anymore because AI is going to take them over.[222]”
Chapter 12 focuses on what one needs to do to lead this type of significant change.
In Chapter 13, we look back at the difficult moral questions surrounding the growth of both human and artificial intelligence, including the problematic ways intelligence has been used throughout history. We propose that education may also want to emphasize developing "non-intelligence-based" human attributes, such as responsibility, that can also be learned through human deep learning. As we enter the largest technological change in human history, one arriving substantially faster than any that came before, we believe that all doors should be opened and that those who both understand and practice deep learning will be most prepared to thrive in this new world.
We conclude the book in Chapter 14 with a call to action, including a public ‘moon shot’ to support an educational transformation.
Our objectives are to equip readers with the knowledge they need to join the conversation, to think critically about our shared future, to offer suggestions for working collaboratively, and to support the mindset shift required to use AI to enhance our capabilities.
The book also includes an overview of the radical and continuing advances in AI that we are currently experiencing, including imminent developments such as autonomous AI agents that will be able to make their own choices and take their own actions when given goals. We conclude with a practical roadmap for schools to immediately begin implementing changes so that all students can succeed in the AI world. Such changes will require educators to embrace significant change, overcome entrenched grammars, and attract funding, but this is no different from the struggle deep learning computer scientists endured for decades against opponents who believed computers had to be fed hand-crafted instructions to make decisions rather than being allowed to largely learn on their own. Eventually, the computer scientists who promoted deep learning approaches that let computers largely learn on their own won out and changed the world forever. The same changes are possible in education, though they may require a moonshot, and they will certainly require immediate and impactful leadership. But like the AI scientists advocating deep learning approaches to machine intelligence, we will eventually succeed in advocating for schools to prioritize deep learning approaches to human intelligence.
Without this change in mindset and approach, we risk using AIs to simply “hyperscale” educational practices that contribute to our students’ failure to thrive in a world that requires humans to engage in higher-order thinking to flourish.
Of course, we do not have all the answers, or perhaps any of the answers. Never before in human history have we invented and used artificially intelligent machines in any area of society, let alone in education. But we do hope that by reading this book you will gain the knowledge you need to think through some critical questions.
A Note on Rapid AI Development
When this book was first published (May 2024), the cutting edge of AI, at least in terms of generative AI, was multimodal AI, some flirtation with reasoning, and a small amount of agentic emergence.
Now, in June 2025, multimodal AI has progressed beyond its initial stages, seamlessly integrating text, voice, images, and video (for example, Google's Veo 3). This advancement has led to more sophisticated and versatile AI systems capable of processing and interpreting multiple types of data simultaneously. For example, Google's Lumiere demonstrates the ability to incorporate video processing capabilities alongside natural language processing and computer vision.
Major breakthroughs have occurred in robotics, where generative AI is revolutionizing how robots are trained. This paradigm shift allows robots to learn new tasks almost instantly by combining various data sources, including sensor data, teleoperation, and internet-scraped images and videos. This advancement is already being applied in commercial spaces like warehouses and is laying the groundwork for more intelligent home-assist robots. Companies are even promoting robots with self-contained neural networks for in-home use.
The rise of specialized AI agents working together under human supervision has become a prominent trend. These "AI teams" are tackling complex problems in healthcare, education, and finance, with humans providing high-level guidance and focusing on creativity and critical thinking.
Sakana and Google Research[223] have introduced an AI co-scientist[224], a virtual scientific collaborator designed to help researchers generate novel hypotheses, draft research proposals, and fast-track scientific and biomedical discoveries.
Microsoft has unveiled Majorana 1, the world's first quantum chip powered by a revolutionary Topological Core architecture, bringing us closer to quantum computers capable of solving industrial-scale problems.
These advancements demonstrate that AI has rapidly progressed from promise to practice, with its impact deepening in unprecedented ways across various industries and applications. The integration of AI into everyday tools and processes continues to drive significant efficiencies and unlock new possibilities for businesses and individuals. New models that study human behavior can master marketing, and generative AI is widely considered to be more persuasive than humans. Scientists are also using AI techniques called "attention mechanisms" to better understand brain signals measured through electroencephalography (EEG). By applying attention mechanisms, researchers can extract more meaningful information from these signals.
Perhaps most exciting is how these methods are enabling the combination of brain data with other biological signals, like heart rate or eye movements. This multimodal approach creates a more complete picture of what's happening in the brain and body. The practical implications are far-reaching: more accurate diagnosis of neurological conditions, more responsive prosthetic limbs controlled by thought, and potentially even new ways for people with severe disabilities to communicate. While this technology is still developing, it represents an important bridge between artificial intelligence and neuroscience that could transform how we interact with computers and understand the human brain[225].
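For the curious reader, here is a rough, illustrative sketch of what an "attention mechanism" does in settings like this. It is generic scaled dot-product attention applied to made-up multichannel signal windows, not any published EEG pipeline; the data, shapes, and variable names are all invented for illustration. The core idea is that attention weights let a model emphasize the moments in a signal that matter most.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Generic attention: weight each time window of a signal by its relevance to the others."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # pairwise similarity between windows
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax turns scores into attention weights
    return weights @ values, weights

# Made-up example: 8 one-second windows of a 4-channel signal (a stand-in for EEG-like data).
rng = np.random.default_rng(0)
windows = rng.normal(size=(8, 4))
attended, weights = scaled_dot_product_attention(windows, windows, windows)
print("Attention weights for the first window:", np.round(weights[0], 2))
```

In a real system, the queries, keys, and values would be learned projections of the signal rather than the raw windows themselves; the sketch only shows the weighting step that gives the technique its name.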
[1] OECD. (2023). OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market. OECD Publishing, Paris. https://doi.org/10.1787/08785bba-en.
[2] Martin Wolk. October 20, 2023. Why this AI pioneer is calling for ‘human centered’ computing. Los Angeles Times. https://www.latimes.com/entertainment-arts/books/story/2023-10-20/fei-fei-li-the-worlds-i-see-artificial-intelligence
[3] Nabaparna Bhattacharya. March 22, 2025. Tech, Energy, And Manufacturing: Trump And Sheikh Tahnoon Strengthen Strategic Ties. MSN. https://www.msn.com/en-us/money/economy/uae-pledges-1-4-trillion-investment-in-us-tech-energy-and-manufacturing-trump-and-sheikh-tahnoon-strengthen-strategic-ties/ar-AA1BsDTW?ocid=BingNewsSerp
[4] Peter Diamandis. June 13, 2025. AI Experts Debate: Overhyped or Underhyped? (Opposite Opinions) Mo Gawdat & Steven Kotler.
[5] Ben Berkowitz. June 10, 2025. Meta launching AI superintelligence lab with nine-figure pay push, reports say. Axios. https://www.axios.com/2025/06/10/meta-ai-superintelligence-zuckerberg
[6] EpochAI. August 30, 2024. Can AI Scaling Continue Through 2030?. https://epoch.ai/blog/can-ai-scaling-continue-through-2030
[7] Gartner. March 31, 2025. Gartner Forecasts Worldwide GenAI Spending to Reach $644 Billion in 2025. https://www.gartner.com/en/newsroom/press-releases/2025-03-31-gartner-forecasts-worldwide-genai-spending-to-reach-644-billion-in-2025
[8] Konstantinos Komaitis, Esteban Ponce de León, Kenton Thibaut, Trisha Ray, and Kevin Klyman. July 26, 2024, The sovereignty trap. https://www.atlanticcouncil.org/blogs/geotech-cues/the-sovereignty-trap/
[9] NVIDIA. June 11, 2025. Europe Builds AI Infrastructure With NVIDIA to Fuel Region’s Next Industrial Transformation. https://nvidianews.nvidia.com/news/europe-ai-infrastructure?utm_source=chatgpt.com.
[10] Sarvam. No date. https://www.sarvam.ai/indias-sovereign-large-language-model.
[11] Technology Innovation Institute. June 11, 2025. Technology Innovation Institute Announces Falcon-H1 model availability as NVIDIA NIM to Deliver Sovereign AI at Scale. https://www.tii.ae/news/technology-innovation-institute-announces-falcon-h1-model-availability-nvidia-nim-deliver.
[12] Caitlin Andrews. June 4, 2025. Japan passes innovation-focused AI governance bill. https://iapp.org/news/a/japan-passes-innovation-focused-ai-governance-bill.
[13] Government of Canada. Canadian Sovereign AI Compute Strategy. https://ised-isde.canada.ca/site/ised/en/canadian-sovereign-ai-compute-strategy
[14] Kanishka Singh, June 16, 2025. Reuters. OpenAI wins $200 million US defense contract https://www.reuters.com/world/us/openai-wins-200-million-us-defense-contract-2025-06-16/.
[15] David Hogan. October 23, 2024. Denmark Launches Leading Sovereign AI Supercomputer to Solve Scientific Challenges With Social Impact. https://blogs.nvidia.com/blog/denmark-sovereign-ai-supercomputer/
[16] "AI in Education Market Trends and Revenue Forecast." GlobeNewswire, 13 Dec. 2024, https://www.globenewswire.com/news-release/2024/12/13/2996850/28124/en/AI-in-Education-Market-Trends-and-Revenue-Forecast-2024-2032-by-Component-Application-End-User-and-Country-with-Detailed-Company-Analysis.html.
[17] "AI In Education Market Size, Share & Trends to 2022-2028." KBV Research, 31 July 2022, https://www.kbvresearch.com/ai-in-education-market/.
[18] AI In Education Market Size & Share | Industry Report, 2030." Grand View Research, 1 Oct. 2024, https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-education-market-report.
[19] Artificial Intelligence in Education Market Report 2024-2029." Yahoo Finance, 18 Nov. 2024, https://finance.yahoo.com/news/artificial-intelligence-education-market-report-090100907.html.
[20] AI in Education Market Size, Share, Growth Report 2024-33." IMARC Group, 1 Jan. 2024, https://www.imarcgroup.com/ai-in-education-market.
[21] AI in Education Market Size, Share, Growth Report 2024-33." IMARC Group, 1 Jan. 2024, https://www.imarcgroup.com/ai-in-education-market.
[22] Sundar Pichai. June 5, 2025. Sundar Pichai: CEO of Google and Alphabet.
[23] Monica Chatterjee. August 17, 2023. Top Artificial Intelligence Investors of 2023. https://www.mygreatlearning.com/blog/artificial-intelligence-investors/
[24] Amanda Ruggeri. March 6, 2024. The surprising promise and profound perils of AIs that fake empathy. New Scientist. https://www.newscientist.com/article/mg26134810-900-the-surprising-promise-and-profound-perils-of-ais-that-fake-empathy/
[25] Surabhi S. Nath, Peter Dayan, Claire Stevenson. May 1, 2024. Characterizing the Creative Process in Humans and Large Language Models. https://arxiv.org/abs/2405.00899
[26] James Hutson, Daniel Plate. (2024). Disrupting Algorithmic Culture: Redefining the Human(ities). Pp. 1-30. In Generative AI in Teaching and Learning. IGI Global.
[27] Matthias Bastian. April 30, 2024. GPT-4 can outperform humans at reframing negative situations, study finds. The Decoder. https://the-decoder.com/gpt-4-can-outperform-humans-at-reframing-negative-situations-study-finds/
[28] Richard Ord. May 3, 2024. OpenAI Unveils GPT-4o: A Paradigm Shift in AI Capabilities and Accessibility. Web Pro News. https://www.webpronews.com/openai-unveils-gpt-4o-a-paradigm-shift-in-ai-capabilities-and-accessibility/
[29] Lance Eliot. May 14, 2025. HBR’s Top 10 Uses Of AI Puts Therapy And Companionship At The No. 1 Spot. https://www.forbes.com/sites/lanceeliot/2025/05/14/top-ten-uses-of-ai-puts-therapy-and-companionship-at-the-1-spot/
[30] Yann LeCun: There is no question that, eventually, machines will surpass human intelligence in all domains. Objective Driven AI. January 24, 2024. https://www.ece.uw.edu/wp-content/uploads/2024/01/lecun-20240124-uw-lyttle.pdf
[31] Tegmark, Max. Hassabis, Demis. Bengio, Yoshua. Song, Dawn. Zhang, Ya-Qin. February 2025. Do we NEED International Collaboration for Safe AGI? Insights from Top AI Pioneers | IIA Davos 2025.
[32] Ray Kurzweil. June 4, 2025. Ray Kurzweil with David S. Rose: The Singularity is Nearer.
[33] Alex Kantrowitz. May 2025. DeepMind CEO Demis Hassabis + Google Co-Founder Sergey Brin: AGI by 2030?
[34] IBID
[35] Shirin Ghaffary. October 18, 2024. Anthropic CEO Thinks AI May Outsmart Most Humans By 2026. Bloomberg. https://www.bloomberg.com/news/newsletters/2024-10-18/anthropic-ceo-thinks-ai-may-outsmart-most-humans-as-soon-as-2026
[37] Amodei, Dario. Machines of Loving Grace. October 2024. https://darioamodei.com/machines-of-loving-grace.
[38] Beth Barnes. June 12, 2025.
The most important graph in AI right now | Beth Barnes, CEO of METR.
[39] Kokotajlo, D., Alexander, S., Larsen, T., Lifland, E., & Dean, R. (2025, April). AI 2027. https://ai-2027.com/ai-2027.pdf
[40] Mary Meeker / Jay Simons / Daegwon Chae / Alexander Krey. Trends – Artificial Intelligence (AI). https://www.bondcap.com/report/pdf/Trends_Artificial_Intelligence.pdf.
[41] Matthias Bastian. March 17, 2025. Nearly half of U.S. adults believe LLMs are smarter than they are. https://the-decoder.com/nearly-half-of-u-s-adults-believe-llms-are-smarter-than-they-are/
[42] Sam Altman. The Gentle Singularity. June 10, 2025. https://blog.samaltman.com/the-gentle-singularity.
[43] Zhan, Jingtao, Jiahao Zhao, Jiayu Li, Yiqun Liu, Bo Zhang, Qingyao Ai, Jiaxin Mao, Hongning Wang, Min Zhang, and Shaoping Ma. "Intelligence Test." arXiv preprint arXiv:2502.18858 (2025). https://arxiv.org/abs/2502.18858
[44] Sri Viswath. Vibhor Khanna. Yijia Lang. November 2023. The AI Revolution. https://drive.google.com/file/d/1gQhYT7j6b2wJmrFZHNeQgTiWPyTsjOfX/view
[45] Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024. https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_2024_AI-Index-Report.pdf
[46] Salvi, Francesco, Manoel Horta Ribeiro, Riccardo Gallotti, and Robert West. "On the conversational persuasiveness of large language models: A randomized controlled trial." arXiv preprint arXiv:2403.14380 (2024). https://arxiv.org/abs/2403.14380
[47] Study Finds staff. May 12, 2024. Deceitful tactics by artificial intelligence exposed: ‘Meta’s AI a master of deception’ in strategy game. https://studyfinds.org/metas-ai-master-of-deception/
[48] Nikhil. May 4, 2024. Researchers at NVIDIA AI Introduce ‘VILA’: A Vision Language Model that can Reason Among Multiple Images, Learn in Context, and Even Understand Videos. Market Post. https://www.marktechpost.com/2024/05/04/researchers-at-nvidia-ai-introduce-vila-a-vision-language-model-that-can-reason-among-multiple-images-learn-in-context-and-even-understand-videos/
[49] Kim, Alex G. and Muhn, Maximilian and Nikolaev, Valeri V., Financial Statement Analysis with Large Language Models (May 20, 2024). Chicago Booth Research Paper Forthcoming, Fama-Miller Working Paper, Available at SSRN: https://ssrn.com/abstract=4835311 or http://dx.doi.org/10.2139/ssrn.4835311”
[50] Hu, L., Yuan, S., Qin, X., Zhang, J., Lin, Q., Zhang, D., ... & Zhang, Q. (2025). MEETING DELEGATE: Benchmarking LLMs on Attending Meetings on Our Behalf. arXiv preprint arXiv:2502.04376. https://arxiv.org/abs/2502.04376
[51] Brayden Lyndria. March 15, 2023. ChatGPT v4 aces the bar, SATs and can identify exploits in ETH contracts. https://cointelegraph.com/news/chatgpt-v4-aces-the-bar-sats-and-can-identify-exploits-in-eth-contracts
[52] IBID.
[53] de Winter, J.C. Can ChatGPT Pass High School Exams on English Language Comprehension?. Int J Artif Intell Educ (2023). https://doi.org/10.1007/s40593-023-00372-z
[54] Kung, Tiffany H., Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga et al. "Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models." PLoS digital health 2, no. 2 (2023): e0000198.
[55] Jakub Pokrywka, Jeremi Kaczmarek, Edward Gorzelańczyk. April 29, 2024. GPT-4 passes most of the 297 written Polish Board Certification Examinations. https://arxiv.org/abs/2405.01589
[56] Team, Gemini, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut et al. "Gemini: a family of highly capable multimodal models." arXiv preprint arXiv:2312.11805 (2023). https://arxiv.org/abs/2312.11805
[57] Sundar Pichai, Demis Hassabis. December 6, 2023. Introducing Gemini: our largest and most capable AI model. https://blog.google/technology/ai/google-gemini-ai/#sundar-note
[58] x.ai. Announcing Grok. https://x.ai/
[59] Yirka, Bob. (February 10, 2025). DeepMind AI achieves gold-medal level performance on challenging Olympiad math questions. https://techxplore.com/news/2025-02-deepmind-ai-gold-medal-olympiad.html
[60] Cheng, F., Li, H., Liu, F., Rooij, R.V., Zhang, K., & Lin, Z. (2025). Empowering LLMs with Logical Reasoning: A Comprehensive Survey. https://arxiv.org/abs/2502.15652
[61] Dell'Acqua, Fabrizio and Ayoubi, Charles and Lifshitz-Assaf, Hila and Sadun, Raffaella and Mollick, Ethan R. and Mollick, Lilach and Han, Yi and Goldman, Jeff and Nair, Hari and Taub, Stew and Lakhani, Karim R., The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise (March 21, 2025). Harvard Business School Strategy Unit Working Paper No. 25-043, Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 25-043, Harvard Business Working Paper No. 25-043. Available at SSRN: https://ssrn.com/abstract=5188231 or http://dx.doi.org/10.2139/ssrn.5188231
[62] Alabbasi AMA, Paek SH, Kim D, Cramond B. What do educators need to know about the Torrance Tests of Creative Thinking: A comprehensive review. Front Psychol. 2022 Oct 26;13:1000385. doi: 10.3389/fpsyg.2022.1000385. PMID: 36389550; PMCID: PMC9644186.
[63] The Conversation. September 1, 2023. Fortune. Researchers tested AI for creativity. It passed with flying colors https://www.fastcompany.com/90946301/researchers-tested-ai-for-creativity-it-passed-with-flying-colors
[64] Creative Huddle. February 18, 2021. The Alternative Uses Test. https://www.creativehuddle.co.uk/post/the-alternative-uses-test#:~:text=or%20a%20paperclip.-,Designed%20by%20J.P.%20Guilford%20in%201967%2C%20the%20Alternative%20Uses%20Test,your%20ability%20to%20think%20creatively.
[65] Koivisto M, Grassini S. Best humans still outperform artificial intelligence in a creative divergent thinking task. Sci Rep. 2023 Sep 14;13(1):13601. doi: 10.1038/s41598-023-40858-3. Erratum in: Sci Rep. 2024 Feb 20;14(1):4239. PMID: 37709769; PMCID: PMC10502005.
[66] Wu CL, Huang SY, Chen PZ, Chen HC. A Systematic Review of Creativity-Related Studies Applying the Remote Associates Test From 2000 to 2019. Front Psychol. 2020 Oct 23;11:573432. doi: 10.3389/fpsyg.2020.573432. PMID: 33192871; PMCID: PMC7644781.
[67] Klein, Ariel, and Toni Badia. "The usual and the unusual: Solving remote associates test tasks using simple statistical natural language processing based on language use." The Journal of Creative Behavior 49, no. 1 (2015): 13-37.
[68] Stefan F. Dieffenbacher. December 13, 2023. Importance of Creativity and Innovation in Business Environment. Digital Leadership. https://digitalleadership.com/blog/creativity-and-innovation-in-a-business-environment/
[69] Gu, J., Chen, T., Berthelot, D., Zheng, H., Wang, Y., Zhang, R., ... & Zhai, S. (2025). STARFlow: Scaling Latent Normalizing Flows for High-resolution Image Synthesis. arXiv preprint arXiv:2506.06276. https://arxiv.org/pdf/2506.06276
[70] Matthias Bastian. June 14, 2025. Anthropic shares blueprint for Claude Research agent using multiple AI agents in parallel. The Decoder. https://the-decoder.com/anthropic-shares-blueprint-for-claude-research-agent-using-multiple-ai-agents-in-parallel/
[71] Anna Tong and Katie Paul. September 13, 2024. 'AI godmother' Fei-Fei Li raises $230 million to launch AI startup. Reuters. https://www.reuters.com/technology/artificial-intelligence/ai-godmother-fei-fei-li-raises-230-million-launch-ai-startup-2024-09-13/?utm_source=chatgpt.com
[72] Meta. June 2025. A self-supervised foundation world model. https://ai.meta.com/vjepa/
[73] Programming with Mosh. April 2024. This Robot Has a ChatGPT Brain!.
[74] Jason Chung. June 13, 2025. 1X Technologies’ Redwood AI Previews the Future of Home Robotics. Teche Blog. https://www.techeblog.com/1x-technologies-redwood-ai-home-robotics/
[75] Yoshua Bengio et al., Managing extreme AI risks amid rapid progress. Science, eadn0117. DOI: 10.1126/science.adn0117
[76] Mustafa Suleyman. September 28, 2023. Mustafa Suleyman Says We Need to Contain AI. How Do We Do It? https://www.humanetech.com/podcast/mustafa-suleyman-says-we-need-to-contain-ai-how-do-we-do-it.
[77] Krystal Hu. February 2, 2023. ChatGPT sets record for fastest-growing user base - analyst note. Reuters. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
[78] James Manyika and Michael Spence. November/December 2023. The Coming AI Economic Revolution: Can Artificial Intelligence Reverse the Productivity Slowdown? Foreign Affairs. https://www.foreignaffairs.com/world/coming-ai-economic-revolutionfmanyika
[79] Reuters. February 20, 2025. OpenAI's weekly active users surpass 400 million. https://www.reuters.com/technology/artificial-intelligence/openais-weekly-active-users-surpass-400-million-2025-02-20/
[80] Casey Newton. April 19, 2024. Platformer. How Meta is paving the way for synthetic social networks. https://www.platformer.news/llama-3-meta-release-synthetic-social-network/
[81] Andulkar, M., Le, D. T., & Berger, U. (2018). A multi-case study on Industry 4.0 for SME’s in Brandenburg, Germany. Proceedings of the 51st Hawaii International Conference on System Sciences. https://opus4.kobv.de/opus4-UBICO/frontdoor/index/index/docId/21570; Burda, M. C., & Severgnini, B. (2018). Total factor productivity convergence in German states since reunification: Evidence and explanations. Journal of Comparative Economics, 46(1), 192–211. https://doi.org/10.1016/j.jce.2017.04.002; JavaPoint. (2022). Future of Artificial Intelligence. https://www.javatpoint.com/future-ofartificial-intelligence; Lee, J. W., Kwak, D. W., & Song, E. (2022). Can older workers stay productive? The role of ICT skills and training. Journal of Asian Economics, 79(C). https://ideas.repec.org//a/eee/asieco/v79y2022ics1049007821001664.html; Tortorella, G. L., Cawley Vergara, A. M., Garza-Reyes, J. A., & Sawhney, R. (2020). Organizational learning paths based upon industry 4.0 adoption: An empirical study with Brazilian manufacturers. International Journal of Production Economics, 219, 284–294. https://doi.org/10.1016/j.ijpe.2019.06.023
[82] Mollick, Ethan. (2024). Co-Intelligence (p. xvii). Penguin Publishing Group. Kindle Edition.
[83] Andrew McAfee. April 25, 2024. Generally Faster: The Economic Impact of Generative AI. https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/Generally_Faster_-_The_Economic_Impact_of_Generative_AI.pdf
[84] Nielsen Norman Group (2023). AI Improves Employee Productivity by 66%. https://www.nngroup.com/articles/ai-tools-productivity-gains/
[85] Huang, Sonya, Pat Grady and GPT-3, Generative AI: A Creative New World, Sequoia Capital, 2022. https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/
[86] Marc Zao-Sanders. March 19, 2024. How People Are Really Using GenAI. Harvard Business Review. https://hbr.org/2024/03/how-people-are-really-using-genai
[87] Russell, Stuart. "AI: What If We Succeed?" April 25, 2024. biocomm.ai, 9 Jun. 2024, https://blog.biocomm.ai/2024/04/25/16043/.
[88] Business Wire. April 10, 2025. P-1 AI Comes Out of Stealth, Aims to Build Engineering AGI for Physical Systems. https://www.businesswire.com/news/home/20250425073932/en/P-1-AI-Comes-Out-of-Stealth-Aims-to-Build-Engineering-AGI-for-Physical-Systems.
[89] Jennifer Liu. November 27, 2019. High-paid, well-educated white collar workers will be heavily affected by AI, says new report. https://www.cnbc.com/2019/11/27/high-paid-well-educated-white-collar-jobs-heavily-affected-by-ai-new-report.html
[90] Sun, Jiankai, Chuanyang Zheng, Enze Xie, Zhengying Liu, Ruihang Chu, Jianing Qiu, Jiaqi Xu et al. December 26, 2023. "A Survey of Reasoning with Foundation Models."
[91] Pfau, Jacob, William Merrill, and Samuel R. Bowman. "Let's Think Dot by Dot: Hidden Computation in Transformer Language Models." arXiv preprint arXiv:2404.15758 (2024). https://arxiv.org/abs/2404.15758
[92] Yao, Liang. "Large Language Models are Contrastive Reasoners." arXiv preprint arXiv:2403.08211 (2024).
[93] Melanie Mitchell: Reasoning about abstract concepts (e.g., Chollet’s ARC challenge [7]): This is a fundamental human ability that AI systems have not yet mastered in any general way. For example, humans are able to solve (at least a large percentage of) problems in Chollet’s “Abstraction and Reasoning Corpus,” [6] which tests for few-shot abstraction abilities and general understanding of “core concepts”. No AI system today comes close. Goldblum, Micah, Anima Anandkumar, Richard Baraniuk, Tom Goldstein, Kyunghyun Cho, Zachary C. Lipton, Melanie Mitchell, Preetum Nakkiran, Max Welling, and Andrew Gordon Wilson. "Perspectives on the State and Future of Deep Learning--2023." arXiv preprint arXiv:2312.09323 (2023).
[94] Taryn Plumb. March 27, 2024. With Quiet-STaR, language models learn to think before speaking. Venture Beat. https://venturebeat.com/ai/with-quiet-star-language-models-learn-to-think-before-speaking/
[95] Besta, M., Memedi, F., Zhang, Z., Gerstenberger, R., Blach, N., Nyczyk, P., Copik, M., Kwa'sniewski, G., Muller, J., Gianinazzi, L., Kubíček, A., Niewiadomski, H., Mutlu, O., & Hoefler, T. (2024). Topologies of Reasoning: Demystifying Chains, Trees, and Graphs of Thoughts. https://arxiv.org/abs/2401.14295
[96] Anthropic. March 4, 2024. Introducing the next generation of Claude. https://www.anthropic.com/news/claude-3-family
[97] Chen, Sirui, Bo Peng, Meiqi Chen, Ruiqi Wang, Mengying Xu, Xingyu Zeng, Rui Zhao, Shengjie Zhao, Yu Qiao, and Chaochao Lu. "Causal Evaluation of Language Models." arXiv preprint arXiv:2405.00622 (2024). https://arxiv.org/abs/2405.00516
[98] Yann LeCun. December 12, 2023. Yann LeCun, Jerome Pesenti: AI, Extinction or Rennaissance? - TLF 2023.
[99] Pesenti, Jerome, in Victor Rovero, February 29, 2024. The Future of Learning: A Free AI Tutor for Everyone. https://www.edtechdigest.com/2024/02/29/the-future-of-learning-a-free-ai-tutor-for-everyone/
[100] Stefan Bauschard. May 13, 2024. ChatGPT's New 4-o Model: A free and advanced personal tutor for (almost) everyone. https://stefanbauschard.substack.com/p/chatgpts-new-4-o-model-a-free-and
[101] Armand Ruiz. May 5, 2024. https://www.linkedin.com/feed/update/urn:li:activity:7192840567771258880/
[102] Armand Ruiz. May 5, 2024. https://www.linkedin.com/feed/update/urn:li:activity:7192840567771258880/
[103] Business Times. May 3, 2024. JPMorgan unveils IndexGPT in next Wall Street bid to tap AI boom. https://www.businesstimes.com.sg/companies-markets/banking-finance/jpmorgan-unveils-indexgpt-next-wall-street-bid-tap-ai-boom
[104] Washed Out – The Hardest Part. May 1, 2024.
[105] Minevich, M. (2023). AI Is Forever Changing Our Jobs And Reinventing The Way We Work. Forbes. https://www.forbes.com/sites/markminevich/2023/03/31/ai-is-forever-changing-our-jobs-and-reinventing-the-way-we-work/
[106] Yann LeCun. December 12, 2023. Yann LeCun, Jerome Pesenti: AI, Extinction or Renaissance? - TLF 2023.
[107] Maximilian Schreiner. April 26, 2024. Mastering human-AI interaction poised to become a critical job skill across professions. The Decoder. https://the-decoder.com/mastering-human-ai-interaction-poised-to-become-a-critical-job-skill-across-professions/
[108] Keaton Peters. April 9, 2024. Texas will use computers to grade written answers on this year’s STAAR tests. Texas Tribune. https://www.texastribune.org/2024/04/09/staar-artificial-intelligence-computer-grading-texas/
[109] Maximilian Schreiner. May 3, 2024. The future of robot swarms is... Snails? How mollusk-inspired bots could tackle tough jobs. https://the-decoder.com/the-future-of-robot-swarms-is-snails-how-mollusk-inspired-bots-could-tackle-tough-jobs/
[110] Craig Wehner. May 5, 2024. US Air Force Secretary Kendall flies in cockpit of plane controlled by AI. Fox News. https://www.foxnews.com/tech/us-air-force-secretary-kendall-flies-cockpit-plane-controlled-ai
[111] Matthias Bastian. April 21, 2024. Researchers unveil LLM-based system that designs and runs social experiments on its own. https://the-decoder.com/researchers-unveil-llm-based-system-that-designs-and-runs-social-experiments-on-its-own/
[112] Xiaoxin Yin. May 22, 2024. "Turing Tests" For An AI Scientist. https://arxiv.org/abs/2405.13352
[113] OECD (Organisation for Economic Co-operation and Development). PISA 2022 Results (Volume I): The State of Learning and Equity in Education, 2023. https://www3.weforum.org/docs/WEF_Shaping_the_Future_of_Learning_2024.pdf
[114] Matthias Bastian.. April 21, 2024. Researchers unveil LLM-based system that designs and runs social experiments on its own. https://the-decoder.com/researchers-unveil-llm-based-system-that-designs-and-runs-social-experiments-on-its-own/
[115] OECD (Organisation for Economic Co-operation and Development). PISA 2022 Results (Volume I): The State of Learning and Equity in Education, 2023. https://www3.weforum.org/docs/WEF_Shaping_the_Future_of_Learning_2024.pdf; Bloomberg. May 24, 2024. JPMorgan says every new hire will get training for AI.
[116] Walton Family Foundation. May 2024. AI Chatbots in Schools: Findings from a Poll of K-12 Teachers, Students, Parents, and College Undergraduates. https://8ce82b94a8c4fdc3ea6d-b1d233e3bc3cb10858bea65ff05e18f2.ssl.cf2.rackcdn.com/bf/24/cd3646584af89e7c668c7705a006/deck-impact-analysis-national-schools-tech-tracker-may-2024-1.pdf
[117] Miao, F., & Shiohira, K. (2024). AI competency framework for students. UNESCO Publishing. https://www.unesco.org/en/articles/ai-competency-framework-students
[118] OECD (2025). Artificial Intelligence and the Future of Skills. https://www.oecd.org/en/about/projects/artificial-intelligence-and-future-of-skills.html
[119] Emma Yeomans. May 29, 2025. The Times. Spend a third of school lessons on AI, says economist. https://www.thetimes.com/uk/education/article/spend-third-of-lessons-teaching-ai-says-economist-xv07vtd57
[120] Rose Luckin, June 11, 2025. https://www.linkedin.com/posts/rose-luckin-5245003_skinnyonaied-ai-edtech-activity-7338405785263132672-kqAW?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAG2ccwBzv9cDlrg5l0-IxF3n_nDwytJiuE
[121] Allison Dulin Salisbury. May 14, 2024. AI Has Upped The Ante On Durable Skills. Forbes. https://www.forbes.com/sites/allisondulinsalisbury/2024/05/14/ai-has-upped-the-ante-on-durable-skills/?sh=44d732bf2db8
[122] Google Keynote. May 14, 2024.
[123] Mollick, Ethan (2024). Co-Intelligence (p. 193). Penguin Publishing Group. Kindle Edition.
[124] Ibid, p. 66
[125] Asia Tech Daily. April 20, 2024. The Quest for Artificial Consciousness: Joseph Reth’s Vision at Lossless Research. https://asiatechdaily.com/the-quest-for-artificial-consciousness-joseph-reths-vision-at-lossless-research/
[126] Sam Altman in Sam Altman: Interview on the future of AI. October 2023.
[127] Miles, K. (2015, October 1). Ray Kurzweil: In the 2030s, nanobots in our brains will make us “godlike”. Noema Magazine. https://www.noemamag.com/ray-kurzweil-in-the-2030s-nanobots-in-our-brains-will-make-us-godlike/
[128] Baer, D. (2015, November 18). 8 shocking predictions for life after 2020 from Google’s genius futurist. Business Insider. https://www.businessinsider.com/ray-kurzweil-most-extreme-predictions-2015-11
[129] Peter Diamandis. October 12, 2015. Ray Kurzweil’s Wildest Prediction: Nanobots Will Plug Our Brains Into the Web by the 2030s. https://singularityhub.com/2015/10/12/ray-kurzweils-wildest-prediction-nanobots-will-plug-our-brains-into-the-web-by-the-2030s/
[130] Kurzweil, R. (2024). The singularity is nearer: When we merge with AI. Viking.
[131] Harari, Y. N. (2017). Homo Deus: A brief history of tomorrow. Harper.
[132] Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.
[133] Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
[134] Wiggers, K. (2025, May 19). Microsoft wants to tap AI to accelerate scientific discovery. TechCrunch. https://techcrunch.com/2025/05/19/microsoft-wants-to-tap-ai-to-accelerate-scientific-discovery/
[135] Kacapyr, S. (2025, May 19). Smarter, faster AI models explored for molecular, materials discovery. Cornell Chronicle. https://news.cornell.edu/stories/2025/05/smarter-faster-ai-models-explored-molecular-materials-discovery
[136] Nature Portfolio. (2025). AI for Science 2025. Nature. https://www.nature.com/articles/d42473-025-00161-3
[137] Booth, R. (2025, June 10). AI can “level up” opportunities for dyslexic children, says UK tech secretary. The Guardian. https://www.theguardian.com/technology/2025/jun/10/ai-can-level-up-opportunities-for-dyslexic-children-says-uk-tech-secretar
[138] Makesh, L. (2025, May 1). How AI is transforming personalized learning in 2025 and beyond. eLearning Industry. https://elearningindustry.com/how-ai-is-transforming-personalized-learning-in-2025-and-beyond
[139] Sriram, A., & Ghosh, K. (2024, January 30). Elon Musk’s Neuralink implants brain chip in first human. Reuters. https://www.reuters.com/technology/neuralink-implants-brain-chip-first-human-musk-says-2024-01-29/
[140] Sriram, A., & Ghosh, K. (2024, January 30). Elon Musk’s Neuralink implants brain chip in first human. Reuters. https://www.reuters.com/technology/neuralink-implants-brain-chip-first-human-musk-says-2024-01-29/
[141] Peter Sayer. October 30, 2023. AI Safety Summit: What to expect as global leaders eye AI regulation. CIO. https://www.cio.com/article/657466/ai-safety-summit-what-to-expect-as-global-leaders-eye-ai-regulation.html
[142] United Nations. 2023. High-Level Advisory Body on Artificial Intelligence https://www.un.org/techenvoy/ai-advisory-body
[143] https://artificialintelligenceact.com/
[144] Ryan Browne. May 21, 2024. World’s first major law for artificial intelligence gets final EU green light. CNBC. https://www.cnbc.com/2024/05/21/worlds-first-major-law-for-artificial-intelligence-gets-final-eu-green-light.html
[145] White House. October 30, 2023. FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
[146] Gabe Miller. October 25, 2023. US Senate AI “Insight Forum” Tracker. https://techpolicy.press/us-senate-ai-insight-forum-tracker/
[147] Bipartisan Senate AI Working Group. May 2024. Driving US Innovation in Artificial Intelligence. politico.com/f/?id=0000018f-79a9-d62d-ab9f-f9af975d0000
[148] Odia Kagan. October 31, 2023. Linked In. https://www.linkedin.com/posts/odiakagan_dataprivacy-dataprotection-privacyfomo-activity-7125201103578611712-G4CR
[149] Glory Kabaru. October 29, 2023. Seoul Digital Foundation Unveils Ambitious AI Plans for Public Safety, Education, and Ethics. https://www.msn.com/en-us/news/technology/seoul-digital-foundation-unveils-ambitious-ai-plans-for-public-safety-education-and-ethics/
[150] New York City Government. October 16, 2023. Mayor Adams Releases First-of-Its-Kind Plan For Responsible Artificial Intelligence Use In NYC Government. https://www.nyc.gov/office-of-the-mayor/news/777-23/mayor-adams-releases-first-of-its-kind-plan-responsible-artificial-intelligence-use-nyc#/0
[151] Singapore Government. No date. The Next Frontier of Singapore’s Smart Nation Journey. https://www.smartnation.gov.sg/initiatives/artificial-intelligence/. “By 2030, we see Singapore as a leader in developing and deploying scalable, impactful artificial intelligence (AI) solutions, in key sectors of high value and relevance to our citizens and businesses.” It is worth noting that by 2030 many experts believe AI will reach human-level intelligence.
[152] Dwayne Matthews. November 1, 2023. LinkedIn. https://www.linkedin.com/feed/update/urn:li:activity:7125546467561271296?commentUrn=urn%3Ali%3Acomment%3A%28activity%3A7125546467561271296%2C7125559155184066562%29&dashCommentUrn=urn%3Ali%3Afsd_comment%3A%287125559155184066562%2Curn%3Ali%3Aactivity%3A7125546467561271296%29
[153] Bauschard, S. & Quidwai, S. April 16, 2024. From Insight to Implementation: How to Create Your AI School Guidance. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4784207
[154] Mohamed Yasser. April 27, 2025. Gen Z, Generative AI, and the New Job Landscape (2025). https://www.linkedin.com/pulse/gen-z-generative-ai-new-job-landscape-2025-mohamed-yasser-8vszf/.
[155] Allie Kelly. June 5, 2025. Gen Z is Hurtling Toward a Career Cliff. Business Insider. https://www.businessinsider.com/gen-z-unemployed-dream-jobs-hiring-college-degree-graduation-2025-6
[156] Maximilian Schreiner. June 16, 2025. BT boss: AI could lead to even greater staff cuts. The Decoder. https://the-decoder.com/bt-boss-ai-could-lead-to-even-greater-staff-cuts/
[157] A BILL To support National Science Foundation education and professional development relating to artificial intelligence. https://www.moran.senate.gov/public/_cache/files/5/a/5a49e3e8-e6fb-46d1-8d6e-9ca767fcfe70/20B01CAA2681AD68D5089ADDC665E05A.bom24263.pdf
[158] The California State University. February 5, 2025. CSU Announces Landmark Initiative to Become Nation’s First and Largest AI-Empowered University System. https://www.calstate.edu/csu-system/news/Pages/CSU-AI-Powered-Initiative.aspx
[159] Julia Busiek. May 15, 2025. UC awards $18 million to scale up the ambition and impact of AI in science. https://www.universityofcalifornia.edu/news/uc-awards-18-million-scale-ambition-and-impact-ai-science
[160] SUNY STRIVE. 2025. SUNY STRIVE Artificial Intelligence Strategic Plan. https://www.suny.edu/media/suny/content-assets/documents/research/SUNY-STRIVE-Artificial-Intelligence-Strategic-Plan.pdf
[161] https://www.gmu.edu/AI
[162] https://ai.fsu.edu/
[163] https://ai.psu.edu/
[164] Katie Millard. June 9, 2025. Ohio State announces every student will use AI in class. https://www.nbc4i.com/news/local-news/ohio-state-university/ohio-state-announces-every-student-will-use-ai-in-class/
[165] https://ai.wharton.upenn.edu/
[166] https://www.teachai.org/policy-tracker
[167] Center on Reinventing Public Education. (2024, October). Districts and AI: Tracking early adopters and what this means for 2024-25. Center on Reinventing Public Education. https://crpe.org/districts-and-ai-tracking-early-adopters-and-what-this-means-for-2024-25/
[168] U.S. White House. (2025, April 23). Advancing artificial intelligence education for American youth [Executive Order]. https://www.whitehouse.gov/presidential-actions/2025/04/advancing-artificial-intelligence-education-for-american-youth/
[169] Desert Sands Unified School District. (n.d.). AI guidance (English & Spanish). https://www.dsusd.us/departments/educational_services/technology
[170] Ohio Department of Education and Workforce. (2025). AI in Ohio’s education. https://education.ohio.gov/Topics/AI-in-Ohio-s-Education. Thompson, K. (2025, February 15). A comprehensive new toolkit from the State of Ohio and aiEDU helps educators responsibly and effectively integrate AI into education [Press release]. aiEDU. https://www.aiedu.org/aiedu-blog/a-comprehensive-new-toolkit-from-the-state-of-ohio-and-aiedu-helps-educators-responsibly-and-effectively-integrate-ai-into-education
[171] Kelly, R. (2025, June 3). Michigan Virtual, aiEDU partner to expand AI support for teachers. THE Journal. https://thejournal.com/articles/2025/06/03/michigan-virtual-aiedu-partner-to-expand-ai-support-for-teachers.aspx
[172] Davis, K. (2025, February 27). Alpha School uses AI to teach students academics for just two hours a day. Alpha School. https://alpha.school/news/alpha-school-uses-ai-to-teach-students-academics-for-just-two-hours-a-day-2/
[173] Warner, K. (2025, May 7). Why the UAE has mandated AI learning in schools. Semafor. https://www.semafor.com/article/05/07/2025/why-the-uae-has-mandated-ai-learning-in-schools
[174] Department for Education. (2025, June 10). Generative artificial intelligence (AI) in education. GOV.UK. https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education/generative-artificial-intelligence-ai-in-education
[175] Ministry of Digital Development and Information & Ministry of Education (Singapore). (2024, October 1). New “AI for Fun” modules for students [Media factsheet]. Singapore Government Press Centre. https://www.sgpc.gov.sg/api/file/getfile/7%20New%20AI%20for%20Fun%20modules%20for%20Students%20Factsheet_FINAL.pdf
[176] Asim, S., Kim, H., & Aedo, C. (2024, October 30). Teachers are leading an AI revolution in Korean classrooms. World Bank Blogs. https://blogs.worldbank.org/en/education/teachers-are-leading-an-ai-revolution-in-korean-classrooms
[177] Latif, A. (2025, March 26). Japan expands artificial intelligence teaching in high school education. Anadolu Agency. https://www.aa.com.tr/en/artificial-intelligence/japan-expands-artificial-intelligence-teaching-in-high-school-education/3520671
[178] Education Daily. (2025, June 13). Surge in AI literacy initiatives as Australian schools prepare students for the future. EducationDaily.au. https://educationdaily.au/artificial-intelligence/surge-in-ai-literacy-initiatives-as-australian-schools-prepare-students-for-the-future/
[179] Alberta Machine Intelligence Institute. (2025, April 28). Empowering the next generation with AI literacy at the EPSB Student AI Conference. https://www.amii.ca/updates-insights/student-ai-conference
[180] Rose Luckin, 2018, Machine Learning and Human Intelligence.
[181] Fei-Fei Li. April 2024. With spatial intelligence, AI will understand the real world. TED. https://www.ted.com/talks/fei_fei_li_with_spatial_intelligence_ai_will_understand_the_real_world/transcript
[182] Yildirim, I., Siegel, M. & Tenenbaum, J. B. Physical Object Representations. in The Cognitive Neurosciences, 6th edition (ed. Poeppel, G. M.) 399 (MIT Press, 2020); Epstein, R. A., Patai, E. Z., Julian, J. B. & Spiers, H. J. The cognitive map in humans: spatial navigation and beyond. Nat. Neurosci. 20, 1504–1513 (2017); Jara-Ettinger, J., Gweon, H., Schulz, L. E. & Tenenbaum, J. B. The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology. Trends Cogn. Sci. 20, 589–604 (2016).
[183] Spelke, E. S. Core knowledge. Am. Psychol. 55, 1233–1243 (2000). Battaglia, P. W., Hamrick, J. B. & Tenenbaum, J. B. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences 110, 18327–18332 (2013); Baker, C. L., Jara-Ettinger, J., Saxe, R. & Tenenbaum, J. B. Rational quantitative attribution of beliefs, desires and percepts in human mentalizing. Nat. Hum. Behav. 1, 1–10 (2017); Jones, C. R. & Bergen, B. The Role of Physical Inference in Pronoun Resolution. Proceedings of the Annual Meeting of the Cognitive Science Society 43, (2021).
[184] Spelke, E. S. Core knowledge. Am. Psychol. 55, 1233–1243 (2000).
[185] Battaglia, P. W., Hamrick, J. B. & Tenenbaum, J. B. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences 110, 18327–18332 (2013); Zhiting Hu, Tianmin Shu (2023). December 8. Language Models, Agent Models, and World Models: The LAW for Machine Reasoning and Planning. https://arxiv.org/abs/2312.05230
[186] Baker, C. L., Jara-Ettinger, J., Saxe, R. & Tenenbaum, J. B. Rational quantitative attribution of beliefs, desires and percepts in human mentalizing. Nat. Hum. Behav. 1, 1–10 (2017).
[187] LeCun, Yann. June 27, 2022. A path towards autonomous machine intelligence. https://openreview.net/pdf?id=BZ5a1r-kVsf
[188] Jones, C. R. & Bergen, B. The Role of Physical Inference in Pronoun Resolution. Proceedings of the Annual Meeting of the Cognitive Science Society 43, (2021).
[189] LeCun, Yann. June 27, 2022. A path towards autonomous machine intelligence. https://openreview.net/pdf?id=BZ5a1r-kVsf
[190] Yiu, E., Kosoy, E., & Gopnik, A. (2023). Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet). Perspectives on Psychological Science, 0(0). https://doi.org/10.1177/17456916231201401
[191] Current models are not all fully multimodal, but over the last month (October) we’ve seen GPT-4 in ChatGPT allow text-to-image, text-to-text, text-to-voice, voice-to-text, and image-to-text. Many other applications support text-to-video and video-to-text. This puts us very close to a full multimodal experience: Matthias Bastian. October 29, 2023. The Decoder. GPT-4 evolves into a more flexible "supermodel" with OpenAI's latest ChatGPT update. https://the-decoder.com/gpt-4-evolves-into-a-more-flexible-supermodel-with-openais-latest-chatgpt-update/
[192] Fei, Nanyi, Zhiwu Lu, Yizhao Gao, Guoxing Yang, Yuqi Huo, Jingyuan Wen, Haoyu Lu et al. "Towards artificial general intelligence via a multimodal foundation model." Nature Communications 13, no. 1 (2022): 3094.
[193] Huatao Xu, Liying Han, Mo Li, Mani Srivastava. October 14, 2023. Penetrative AI: Making LLMs Comprehend the Physical World. https://arxiv.org/abs/2310.09605
[194] Yiu, E., Kosoy, E., & Gopnik, A. (2023). Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet). Perspectives on Psychological Science, 0(0). https://doi.org/10.1177/17456916231201401
[195] Ben Yu. October 30, 2023. How Education Must Change to Adapt to an AI World. AI Authority. https://aithority.com/machine-learning/how-education-must-change-to-adapt-to-an-ai-world/
[196] Ouyang, L. et al. Training language models to follow instructions with human feedback. Adv. Neural Inf. Process. Syst. 35, 27730–27744 (2022).
[197] Peter Stanton. August 17, 2019. “Sage on the Stage” vs. “Guide on the Side” Education Philosophy. https://peterwstanton.medium.com/sage-on-the-stage-vs-guide-on-the-side-education-philosophy-f065bebf36cf
[198] Ibid.
[199] Australia: "(N)early 70% of student tasks involved superficial learning -- simple questions and answers, taking notes, or listening to teachers." Vosniadou, Stella, Michael J. Lawson, Erin Bodner, Helen Stephenson, David Jeffries, and I. Gusti Ngurah Darmawan. "Using an extended ICAP-based coding guide as a framework for the analysis of classroom observations." Teaching and Teacher Education 128 (2023): 10413
[200] Maximilian Schreiner. April 26, 2024. Mastering human-AI interaction poised to become a critical job skill across professions. The Decoder. https://the-decoder.com/mastering-human-ai-interaction-poised-to-become-a-critical-job-skill-across-professions/
[201] Marco Dondi, Julia Klier, Frédéric Panier, and Jörg Schubert. June 25, 2021. Defining the skills citizens will need in the future world of work. https://www.mckinsey.com/industries/public-sector/our-insights/defining-the-skills-citizens-will-need-in-the-future-world-of-work.
[202] Hackman, J. R. (2011). Collaborative intelligence: Using teams to solve hard problems. Berrett-Koehler Publishers.
[203] Ismail, Fadhil, Eunice Tan, Jürgen Rudolph, Joseph Crawford, and Shannon Tan. Artificial intelligence in higher education. A protocol paper for a systematic literature review. Journal of Applied Learning and Teaching 6, no. 2 (2023). https://journals.sfu.ca/jalt/index.php/jalt/article/view/1239/675, p. 2
[204] Sri Viswath. Vibhor Khanna. Yijia Lang. November 2023. The AI Revolution. https://drive.google.com/file/d/1gQhYT7j6b2wJmrFZHNeQgTiWPyTsjOfX/view
[205] Kelli Bivins. October 28, 2023. LinkedIn. https://www.linkedin.com/posts/kelli-bivins-b59a1054_the-glocal-yokel-river-conservation-activity-7124114005694058496-wMIf
[206] Belle Trevino, October 19, 2023. Durant High School debate team hosts forum fundraiser. https://www.kten.com/story/49864529/durant-high-school-debate-team-hosts-forum-fundraiser
[207] Fei-Fei Li & Geoffrey Hinton. October 2023. Geoffrey Hinton and Fei-Fei Li in conversation.
[208] Eric Schmidt. May 14, 2024. Why America needs an Apollo program for the age of AI. MIT Technology Review. https://www.technologyreview.com/2024/05/13/1092322/why-america-needs-an-apollo-program-for-the-age-of-ai/
[209] Eric Schmidt. May 14, 2024. Why America needs an Apollo program for the age of AI. MIT Technology Review. https://www.technologyreview.com/2024/05/13/1092322/why-america-needs-an-apollo-program-for-the-age-of-ai/
[210] Michael Sankey. December 8, 2023. Future Campus: Professor Michael Sankey on Implications of The Fifth Industrial Revolution.
[211] Lake, B. M., Ullman, T. D., Tenenbaum, J. B. & Gershman, S. J. Building machines that learn and think like people. Behav. Brain Sci. 40, e253 (2017).
[212] Yildirim, I., Siegel, M. & Tenenbaum, J. B. Physical Object Representations. in The Cognitive Neurosciences, 6th edition (ed. Poeppel, G. M.) 399 (MIT Press, 2020).
[213] Epstein, R. A., Patai, E. Z., Julian, J. B. & Spiers, H. J. The cognitive map in humans: spatial navigation and beyond. Nat. Neurosci. 20, 1504–1513 (2017).
[214] Jara-Ettinger, J., Gweon, H., Schulz, L. E. & Tenenbaum, J. B. The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology. Trends Cogn. Sci. 20, 589–604 (2016).
[215] Turiel, Elliot. (1983). The development of social knowledge: Morality and convention. Cambridge University Press.
[216] Powers, John. 1994. “Empiricism and Pragmatism in the Thought of Dharmakīrti and William James.” American Journal of Theology & Philosophy 15(1):59–85.
[217] Will Richardson. October 28, 2023. LinkedIn. https://www.linkedin.com/posts/willrichardsonbqi_i-think-one-thing-were-going-to-have-to-activity-7124000141589565440-vLv2
[218] Rose Luckin. (2018). p. 21
[219] September 23, 2023. Interview of Sam Altman and Daniela Rus by Teddy Lee for the 2023 MIT AI Conference.
[220] Uludag, Kadir. "Testing creativity of ChatGPT in psychology: Interview with ChatGPT." Available at SSRN 4390872 (2023); University of Montana. July 5, 2023. AI tests into top 1% for original creative thinking. ScienceDaily. https://www.sciencedaily.com/releases/2023/07/230705154051.htm
[221] Hayden Field. October 27, 2023. Google commits to invest $2 billion in OpenAI competitor Anthropic. CNBC. https://www.cnbc.com/2023/10/27/google-commits-to-invest-2-billion-in-openai-competitor-anthropic.html?utm_source=tldrai
[222] Chris Dede. August 7, 2023. If AI is the Answer, What is the Question: Thinking about Learning and Vice Versa.
[223] Gottweis, Juraj, Wei-Hung Weng, Alexander Daryin, Tao Tu, Anil Palepu, Petar Sirkovic, Artiom Myaskovsky et al. "Towards an AI co-scientist." arXiv preprint arXiv:2502.18864 (2025). https://arxiv.org/abs/2502.18864
[224] Machine Learning Street Talk. March 1, 2025. Can AI Improve Itself?
[225] Wang, J., Ye, W., He, J., Zhang, L., Huang, G., Yu, Z., & Liang, Z. (2025). Integrating Biological and Machine Intelligence: Attention Mechanisms in Brain-Computer Interfaces. arXiv preprint arXiv:2502.19281. https://arxiv.org/abs/2502.19281