Keeping up with everything that is going on with AI is challenging in two ways.
One, although the advances are predictable (all heading toward replicating the domains in which humans are intelligent), the pace of change, which is now exponential, is difficult for most people to absorb. I think if you see each development within this frame of AI racing toward human-level intelligence, the developments are less surprising. If I had one piece of advice, it would be to plan for a world where machines become more and more intelligent, even if they never achieve general human-level intelligence in every domain.
Two, as a general-purpose technology, like electricity, it can be and is being used in nearly everything people do, from advertising to customer service to infrastructure development, paper writing, lesson design, and assessment. It’s easy to be drawn to all of the use cases, as they are interesting, but I think it’s important for individuals to focus on their own areas (in my case, education, and then instructional design and debate).
To understand these developments, I think a few things are helpful.
Understanding how base generative models work. AI is more than generative AI, but much of the current focus on AI usage is on generative AI. This incredible 18-minute video will help you understand how these technologies work and how they are evolving.
Understanding the significance of “application layers.” A lot is written about how different base models (ChatGPT, Gemini, etc.) compare to one another on different benchmarks, and people have different opinions as to their strengths, but the real future (the present, in some cases) is in the application layer, which involves fine-tuning on specific domains. For example, in the legal arena, Harvey.ai “incorporated hundreds of millions of words of legal text and feedback from licensed expert attorneys.” It is interesting but not especially relevant how great a particular model is; what matters is how well AI works for you.
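To make the application-layer idea concrete, here is a minimal sketch of how a domain-specific tool can be built on top of a base model, assuming the OpenAI Python SDK and its fine-tuning endpoint; the file name, example data, and model name are placeholders, not a description of how Harvey.ai or any other product actually works.

```python
from openai import OpenAI

client = OpenAI()

# Domain examples in the chat fine-tuning format: each line of the JSONL file
# pairs a user prompt with the expert-approved answer the model should learn.
# ("legal_examples.jsonl" is a placeholder for whatever domain data you have.)
training_file = client.files.create(
    file=open("legal_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on top of a base model; the result is a new,
# domain-tuned model ID you can call like any other model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # placeholder; use any fine-tunable base model
)
print(job.id, job.status)
```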
In education, we see that GPT-4 is already competitive with human essay graders (not quite there, but close), and that is without any fine-tuning. Once these models are trained on the specific content of a course and on how to evaluate its essays, they will likely exceed human graders in some areas, especially reliability.
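Even without fine-tuning, the grading use case is easy to picture. Below is a minimal sketch, assuming the OpenAI Python SDK; the rubric, model name, and prompts are placeholders. The point is simply that holding the rubric and instructions constant across every essay is where the reliability advantage comes from.

```python
from openai import OpenAI

client = OpenAI()

# Placeholder rubric; in practice this would come from the course itself.
RUBRIC = (
    "Score the essay 1-5 on each criterion: thesis clarity, use of evidence, "
    "organization, and mechanics. Give one line of justification per criterion "
    "and a total score."
)

def grade_essay(essay_text: str, model: str = "gpt-4o") -> str:
    """Grade one essay against a fixed rubric with a single model call."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # keep output as consistent as possible across essays
        messages=[
            {"role": "system", "content": "You are a careful essay grader. " + RUBRIC},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

# Example: print(grade_essay(open("student_essay.txt").read()))
```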
In the context of instructional design, Dr. Philippa Hardman explains her new research:
This video explains the significance of the application layer.
Adding agents. The strength of the models will be turbocharged by agents that can plan, reason, research, write, and make autonomous decisions while also checking their own work to significantly reduce hallucinations (a toy sketch of such a plan-and-check loop appears below).
If #2 above included autonomous agents, Epiphany’s score would likely have been closer to a 5.
I covered agents here, and I have substantially updated that section of our paper.
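For readers who want a feel for what it means for an agent to plan, act, and check its own work, here is a toy sketch. It assumes the OpenAI Python SDK; the model name and prompts are placeholders, and real agent frameworks add tool use, memory, and far more careful verification than this.

```python
from openai import OpenAI

client = OpenAI()

def llm(prompt: str, model: str = "gpt-4o") -> str:
    """One model call; the agent simply chains several of these together."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def run_agent(task: str, max_rounds: int = 3) -> str:
    """Plan -> act -> self-check loop; the check pass is what trims hallucinations."""
    plan = llm(f"Break this task into concrete, ordered steps:\n{task}")
    draft = llm(f"Carry out these steps and produce the final result:\n{plan}")
    for _ in range(max_rounds):
        critique = llm(
            "List any factual errors or unsupported claims in the text below, "
            f"or reply NONE if it is sound:\n{draft}"
        )
        if critique.strip().upper().startswith("NONE"):
            break  # the model found nothing left to fix
        draft = llm(f"Revise the text to fix these issues:\n{critique}\n\nText:\n{draft}")
    return draft

# Example: print(run_agent("Summarize the main arguments for and against AI essay grading."))
```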
Understanding the significance of the challenge to education. AI tools that can produce the very products students are assigned obviously create challenges, and those challenges will grow, especially as these technologies essentially become digital twins that can mimic individual humans. Marc Watkins outlines this in detail in his great blog. AI writing detectors are not a solution.
Understanding the impacts across society in more depth. Ethan Mollick’s new book, Co-Intelligence, provides an excellent overview of generative AI and its implications. As he frequently says, the AI you are using today will be the worst AI you will ever use.
Understanding the implications for education. We updated our paper. It contextualizes change in education in terms of the different industrial revolutions, explains the technology and its advances (Chapter 2), and makes the case for human deep learning approaches (debate, portfolios, entrepreneurship programs) as a way to help educators prepare students for the AI World.
In our book, Chat (GPT): Navigating the Impact of Generative AI Technologies on Educational Theory and Practice: Educators Discuss ChatGPT and other Artificial Intelligence Tools, 35 experts examine the implications of generative AI in education and how the system can respond.
Developing school guidance policies. I updated our school guidance report — From Insight to Implementation: How to Create Your AI School Guidance (Canva link; PDF download at SSRN).
This report provides a comprehensive review and analysis of guidance documents issued by several U.S. states (California, Kentucky, North Carolina, Ohio, Oregon, West Virginia, Virginia, Washington), international organizations (OECD, UNESCO), and foreign governments (Australia, U.K.) concerning the integration of AI into K-12 educational settings. It examines the common themes addressed across these guidelines, including fostering AI literacy, preparing students for an AI-driven workforce, incorporating AI instruction, ensuring equitable access, safeguarding data privacy, and promoting the ethical use of AI.
The report highlights the strengths of the existing guidance, such as raising crucial issues for educators to consider and emphasizing the primacy of human decision-making. However, it also identifies several limitations, including a lack of specific implementation directives, an overly narrow concentration on generative AI models, inadequate guidance on emerging AI technologies, and an underestimation of the rapid advances in AI capabilities and what they mean for the world.
Finally, the report offers specific recommendations for developing comprehensive, values-driven AI policies and guidelines tailored to individual school contexts, accounting for the accelerating pace of AI progress across all types of AI. It underscores the importance of continuous professional development for educators, curriculum integration to prepare students for an AI-influenced future, and an approach grounded in educational goals and institutional values.
If you’d like any assistance, feel free to reach out! We are proud of our work!
Thank You!!