TLDR
*Developing people who can think critically is very important.
*We need to equitably integrate AI into schools and provide AI training and support for students and teachers.
*Humans should remain at the center of our efforts, which means (to me) that we have to focus on developing really awesome humans, and we are going to need to develop them in a way that enables human recursive self-improvement.
*We need to focus on developing students who can succeed in an AI world, not helping students use AI to succeed in the current world (me).
[Related: Virginia Guidance; North Carolina Guidance; WV Guidance]
The state of Washington has issued its “Human-Centered AI Guidance for K-12 Public Schools.”
The guidance focuses on how quickly AI is advancing, the importance of integrating AI into schools, and the need to train students and staff to work with AI.
What I find most intriguing about the guidance is that it emphasizes the importance of teaching students to develop “critical thinking” skills, something that is referenced eight times in the document. It is incredible to see this in the report, as it has arguably become the number one job skill in an AI World.
IBM’s CEO, Arvind Krishna, has identified critical thinking as the number one skill needed to future-proof a job in an AI World. Connor Grennan, Dean of Students at the NYU Stern School of Business, explained that generative AI now means you are hiring “critical thinkers.” All of the recent “future of jobs” reports issued by organizations such as the World Economic Forum identify critical thinking as one of the most essential skills for future jobs in an unknown world.
The first time the report references critical thinking, it asks, “How will we use it (AI) in a way that empowers critical thinking?”
This is one of the central questions on educators’ minds, as many fear that students will simply use AI to do their work and not think critically. And the reality is that if educators do not adjust their assignments, students will use AI in a way that undermines critical thinking, as AI can complete many current assignments.
The critical challenge educators face will be to create new forms of assessment, such as debate, that enable students to use AI without sacrificing critical thinking.
This is a place where the guidance report could be stronger. In an AI world, how will educators adapt instruction and assessment to build critical thinking while students use AI tools? The report makes a strong case for AI integration and the training needed to accomplish that goal (we need to provide “educators with the necessary resources, training, and support to incorporate these technologies in ways that enhance their instruction and, more importantly, nurture our students’ critical thinking”) but there isn’t a practical roadmap provided for how to integrate AI into the classroom and promote critical thinking at the same time. We do suggest some ideas in our free report (debate, entrepreneurship programs, STEM programs, gaming programs, portfolios), but there isn’t any guidance in this report as to how districts should accomplish the goal of integrating AI and critical thinking into the curriculum.
This emphasis on developing critical thinking skills in humans is essential because a central theme of the report is that humans should bookend the interaction with AI (Human-AI-Human). If that is going to hold (and it will be challenged whether we like it or not), we are going to have to create some awesome humans. If it is humans who are going to have the judgment, wisdom, and critical thinking skills needed to remain in control, schools are going to have to prioritize developing those in students. As a recent article in The Information points out, we need to train ourselves to be better humans, which is a central theme of our report.
One approach is to help students recursively improve. Recursive self-improvement in AI refers to a process where an artificial intelligence system is capable of improving its own algorithms and performance without human intervention. Given how quickly the world will change, we need to develop students who are capable of their own recursive self-improvement as they grow. This should become a central goal of any educational system focused on prioritizing human development.
___
The report contains many ideas that I hope will be implemented in many states.
AI-integration and usage. The Washington Guidance is another report that articulates the importance of integrating AI usage into the classroom: “Schools across Washington are already pioneering efforts to integrate AI into classrooms. With a full embrace of AI, Washington’s public education system will be at the forefront of innovation and excellence.” While bans remain in some places, it is great to see more and more guidance embracing using AI in the classroom.
Student AI literacy. The report stresses the importance of AI literacy, arguing for “Developing students’ AI literacy by helping them understand the concepts, applications, and implications of AI in various domains, and empowering them to use AI as a tool for learning and problem-solving.” Learning how to use the tools is an important part of students’ AI literacy, but it is also important that students learn how AI is and will continue to fundamentally change the world they are living in.
Another strike against AI writing detectors: “Software companies that claim products can detect content developed by another AI tool or its own AI tool are currently not reliable and should not be used as the sole way to determine whether cheating and plagiarism have occurred.”
AI is advancing quickly. As the report acknowledges, “AI is evolving at lightning speed.” This means that all of the above is a priority, and it’s why I keep stressing that we need to focus primarily on preparing students for the AI world, not on preparing students to use AI in this existing world.
The report hits the usual main points about personalization and saving teachers time, data security, privacy, and trying to reduce bias.
___
There are a few areas where this report could be clearer.
AI and generative AI. As with other reports, this report conflates generative AI (GAI) with all AI (at least in its descriptions). And it suggests that all generative AI comes from large language models (LLMs).
There is more to AI than generative AI (even the new Google math tool combines generative AI with older symbolic approaches), and there is more to generative AI than LLMs.
Moreover, as Yann LeCun frequently notes, within a few years we may not be using LLMs, or even generative AI, at all.
There is also a lot of space between a foundational LLM that was originally just a text predictor and a sentient, superhuman (in all ways) AI. Technologically, we are beyond the original simple text predictor and likely on to AIs with nascent but growing reasoning abilities, even though we are nowhere near a conscious, superhuman AI (though AIs can already do some things better than humans).
I bring this up because, as I’ve noted elsewhere, many of the concerns related to AI (privacy, data security) are more connected to other types of AI than generative AI, and many future AI advances are not tied to LLMs, though most expect them to continue to play significant roles.
Also, I think a more detailed understanding of AI will help people understand how it will develop in the immediate future.
Curriculum redesign. Like many other reports, I don’t think this one confronts the challenge of how much curriculum redesign may be needed, at least beyond K–6. When we educate a child today, we do not do so operating under the assumption that they’ll have multiple, hyper-intelligent AIs working with them as assistants all day long. As we get closer and closer to that world, and we are already in it to a degree, how will instruction need to change? Yes, we need to teach students how to use AIs, but students need to learn how to use AIs for the world of tomorrow, not the world of today. Imagine if today’s curriculum focused on teaching students how to use AI to grow crops because most people used to work in agriculture. Just as education fundamentally changed in response to the end of the agricultural era, it is going to need to fundamentally change to prepare students for the AI World.
The challenge to human control. There are a couple of ways AIs will present a significant challenge to these ideas in the near future. AIs will start to present, for better or worse, challenges to assessment and the role of humans remaining in control (which is often how “human in the loop” is defined). While humans currently exceed the abilities of AIs in assessment related to fact-checking (at least using common LLMs), AIs that are capable of analyzing larger sets of data will arguably be able to offer more consistent scoring of written work across buildings, districts, and states. They may even end up being less biased than human scorers. When human assessors are challenged by AI results that are more consistent across districts and potentially less biased, interesting conversations will emerge. The report does acknowledge this.
Of course, we, as humans, can decide to always be in control (debates about AI “taking over” aside), but once AI gets better than us at doing certain things, we may want to default to the AIs. Individuals will certainly show up at our doors demanding that we default to the AI’s evaluation and judgment regarding a given student.
Citing AIs. The report references a common suggestion made in the spring that “any use (of AI) must be referenced.”
The guidance takes a strong position related to documenting any use of GAI output in schoolwork. Conceptually, this makes sense, but it is worth noting that GAI tools are now embedded in most products students and teachers use, including Google and Microsoft suites. In fact, when you start any new Google Doc, it asks you if you want help writing.
Given how integrated GAI is into the workflow, citing every instance becomes a practical challenge; it wouldn’t make sense to highlight sentences or parts of sentences within a document and cite all the different tools that may have aided in the development of those individual sentences. For example, I regularly use Bard, ChatGPT, Grammarly, and Perplexity when writing, and isolated sentences, parts of sentences, ideas, explanations, rewrites, etc., are often written with the help of different AIs.
The “How to Cite” generative AI documents from MLA, APA, and Chicago came out in the spring of 2023, when GAI was more novel, accessible in only a few places (ChatGPT, for example), and being used to produce entire articles and school papers. In that case, I think it makes sense to cite it (“I wrote this paper (almost) entirely with ChatGPT,” though I’m not sure a student would do that), but citing every instance of generative AI usage when it is simply integrated into a workflow is probably not practical.
That said, I think it makes sense to cite GAI in a few instances.
(1) If a teacher is working with a student to teach them how to use GAI, having the student articulate exactly how they used it is important.
(2) If a student is relying on GAI output for a factual claim (I would never recommend this, as these systems cannot be relied on for factual information, though they are getting much better in this regard).
(3) If a substantial portion of work or text is generated from it (perhaps a paragraph or more).
Conclusion
This report highlights that a pivotal goal of our educational endeavors should be the development of individuals who excel in critical thinking in an AI World. To achieve this in an increasingly AI-driven world, it is crucial to integrate AI into our schools equitably, ensuring both students and teachers have the necessary training and support. At the heart of our efforts must always be the human element: our aim is not merely to adapt humans to existing technology but to foster a generation of remarkable individuals capable of recursive self-improvement, who can adapt to an AI World and keep humans at the center of it.