“As we integrate these powerful tools into our schools and workplaces, we must urgently equip students and workers with the skills, knowledge, and competencies to harness AI responsibly and effectively. Our education system must adapt to prepare a workforce that can leverage AI to its full potential while safeguarding against its risk.”
Virginia introduced its “Guidelines for AI Integration Throughout Education in the Commonwealth of Virginia.”
I love this guidance.
Why?
It’s short and sweet – 4.5 pages. Those of you who know me know that I tend to be verbose, but even in our most recent report I produced an Executive Summary that highlights all of the major points, followed by a chapter on each for more detail. States issuing longer reports may want to consider a similar structure.
It rises to the occasion, conveying what is at stake. We need to equip students with the skills they need to succeed and the skills our industries need to compete. Virginia highlights the critical national security and military intelligence institutions in the United States that are headquartered in the state.
It isn’t limited to generative AI. All of the other guidance reports I’ve seen focus largely or exclusively on generative AI, but there are many other forms of AI. AI systems used for surveillance rather than content generation in educational settings can intrusively monitor students’ online activities and communications, potentially breaching their privacy by collecting and analyzing personal and sensitive data. These systems might also create detailed behavioral and academic profiles of students. I highlight privacy because many of the other reports also highlight privacy but then discuss only generative AI.
It identifies practical possibilities for upskilling, naming Virginia colleges and universities (including community colleges) and employers as “key partners and guides around building the skills and knowledge required to be successful in the new economy.”
It stresses the importance of providing practical upskilling opportunities for teachers. These include hands-on experience, professional development, micro-credentialing, and badging.
What’s the goal? The goal can’t just be to “learn about AI.” We need to hear from “business leaders, educators, governing members, leaders, and families” about what skills we want our graduates to have. The US Department of Education’s recently released “A Call to Action for Closing the Digital Access, Design, and Use Divides” articulates the importance of developing learner profiles to which we tie course selection, learning objectives, and assessment. In our report, for example, we reference San Ramon Valley’s learner profile (p. 163 approx.) and discuss how human deep learning approaches (Jal Mehta) amplified by AI, together with AI skill development, can best prepare students for an AI World.
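The Call to Action doesn’t prescribe a data model, so here is a minimal sketch, in Python, of what tying a learner profile to courses and assessments might look like. Every name in it is invented for illustration, not drawn from San Ramon Valley’s actual profile or either report:

```python
from dataclasses import dataclass, field

@dataclass
class Competency:
    """One graduate-profile skill, e.g. critical thinking or AI literacy."""
    name: str
    courses: list[str] = field(default_factory=list)       # where it is taught
    assessments: list[str] = field(default_factory=list)   # how it is measured

@dataclass
class LearnerProfile:
    """A district's graduate profile: the skills every student should leave with."""
    district: str
    competencies: list[Competency]

    def unassessed(self) -> list[str]:
        """Flag competencies the district claims but never measures."""
        return [c.name for c in self.competencies if not c.assessments]

# Hypothetical example -- not any district's actual profile.
profile = LearnerProfile(
    district="Example USD",
    competencies=[
        Competency("Critical thinking", ["English 9", "Biology"], ["Debate rubric"]),
        Competency("AI literacy", ["Intro to AI"], []),  # taught but not yet assessed
    ],
)
print(profile.unassessed())  # ['AI literacy']
```

Even a toy structure like this makes the accountability question visible: a profile is only meaningful if every competency is connected to both instruction and assessment.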
Integrating across courses. Yesterday, I wrote about how I started my first teaching job in 1997 as a computer science teacher. The school required all freshmen to take “computer literacy,” covering the basics of how computers work and how they might impact society. I doubt the school still offers the class. In six years, when the FDA licenses the first AI doctor, we may no longer need an AI equivalent either, as I assume AI will be woven into all courses. Right now, though, we need AI literacy or an “Intro to AI” course for all students.
I do like, however, how the Virginia report suggests including it across subject areas. I mean, can you teach economics today without discussing how AI will transform jobs and likely deflate the economy? Can you teach social studies without discussing the impact on the global balance of power? Can you teach any of the humanities without talking about how AI will challenge our assumptions about what it means to be human?
Can you teach politics without noting that fake Joe Bidens are making robocalls?
Yes, design assignments and assessments that encourage critical thinking, original thought, and human judgment. AI will be more difficult to regulate than this report appears to anticipate (see below), so the trick for educators will be to design work that promotes critical thinking even when students use AI. We suggest debates and oral presentations with questioning.
It hits the important notes about doing everything possible to protect the privacy, security, and confidentiality of personally identifiable information and reduce bias.
It hits on the highlights of adaptive and personalized learning.
What are some additional considerations?
Preparing students for the AI world is about more than learning how to use AI. Teaching people what AI can do and how to use it is important because they can use it as a copilot to significantly augment their work. But if the focus of education becomes teaching people how to use AI, then people won’t have any skills when AI becomes an autopilot and can go about its merry way without us. To be prepared for the future, students need to learn not only about AI but also how to think critically and collaborate with others. Critical thinking and other durable skills are in the highest demand both now and in an AI world, with AI skills a close second. To prepare students for an AI World, schools need to 10X all of these programs and curricular opportunities. We have many ideas for how to do that here and here. An emphasis on broader skills is also one of the things I really like about North Carolina’s Guidance.
The guidance notes that “The true art of teaching involves wisdom, judgment and interpersonal skills that machines cannot replicate…it will never replace teachers who provide wisdom, context, feedback, empathy, nurturing and humanity in ways that a machine cannot.” Whether machines can provide these is an open question (see the next point). Even if machines can, we must still develop these skills in humans; and if only humans can provide them, developing them in humans is even more important.
Challenges to the idea of unique human attributes. AIs can already excel at knowledge work, and will only get better at it. Meanwhile, companies such as Inflection.ai (Pi) are working to develop feedback, empathy, nurturing, and humanity in machines. The more an AI knows about an individual, the more “context” it will be able to provide for feedback. Geoffrey Hinton, the AI scientist who was perhaps the main figure behind the development of these technologies, believes that computers are already empathetic. His view is not widely shared, but many believe computers can at least simulate empathy, which isn’t surprising given that many people are starting to prefer AI therapists over human therapists.
While I do not want teachers to be replaced, some “tutoring” companies are developing AIs that aim to “nurture students’ critical thinking, values, and character development.”
Challenges to humans as the final decision-makers. AIs will present significant challenges to these ideas in the (near) future, for better or worse, in assessment and in the role of humans remaining in control (which is often how “human in the loop” is defined). While humans currently exceed AIs at assessment tasks such as fact-checking (at least with common LLMs), AIs capable of analyzing larger sets of data will arguably be able to offer more consistent scoring of written work across buildings, districts, and states. They may even end up being less biased than human scorers. When human assessors are challenged by AI results that are more consistent across districts and potentially less biased, interesting conversations will emerge. The report does acknowledge this.
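To make the consistency point concrete, here is a minimal sketch of rubric-based scoring with an LLM, assuming the openai Python SDK (v1+) and an API key; the model name and rubric are placeholders, not anything the report specifies. The consistency comes from every essay, in every building, receiving the identical rubric, prompt, and near-deterministic decoding:

```python
# Sketch: score every essay against one fixed rubric with temperature-0 decoding,
# so the scoring procedure itself is identical across classrooms and districts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """Score the essay 1-4 on each dimension:
- Thesis: clear, arguable claim
- Evidence: specific, relevant support
- Reasoning: logical connection of evidence to claim
Return one line per dimension, e.g. 'Thesis: 3'."""

def score_essay(essay: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0,         # near-deterministic: same essay -> (almost) same score
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay},
        ],
    )
    return resp.choices[0].message.content

print(score_essay("Essay text goes here..."))
```

Whether such scores are valid or actually less biased is a separate empirical question; the point is that the scoring procedure can be held fixed statewide in a way human scoring cannot.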
Of course, we, as humans, can decide to always be in control (debates about AI “taking over” aside), but once AI gets better than us at doing certain things, we may want to default to the AIs. Individuals will certainly show up at our doors demanding that we default to the AI’s evaluation and judgment regarding a given student.
Challenges to any policy-making. Schools should certainly develop guidance and policies in specific instances. Policies, even if they are not enforceable, can limit liability and at least restrain inappropriate use. Guidelines and policies that attempt to limit student use can be important. [Example: Clearly outline the school or system’s policies and protocols around…honor code, student code of conduct, acceptable use, and ethical considerations when using AI, including those related to plagiarism and proper use of secondary sources.] Schools need to be aware, however, that AIs can easily be trained to write in a student’s voice, and it is becoming very difficult to distinguish AI work from student work, especially if a student makes any effort to disguise the writing as his or her own. Policy-making is important, but schools should understand that it will have a limited impact on student use of this ubiquitous and adaptable technology. Ultimately, instruction and assessment in many areas should probably start with the assumption that students are using these tools.
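To see why detection-oriented policies are fragile, here is a minimal sketch of the “student voice” problem, with the same SDK and placeholder-model assumptions as above; a few writing samples pasted into a prompt are all it takes:

```python
# Sketch: few-shot style mimicry -- the reason AI-written work is hard to
# distinguish from a student's own writing. Illustrative only.
from openai import OpenAI

client = OpenAI()

student_samples = [
    "A paragraph the student actually wrote, with their usual phrasing...",
    "Another paragraph in the same voice...",
]

prompt = (
    "Here are writing samples from one student:\n\n"
    + "\n---\n".join(student_samples)
    + "\n\nAnswer the essay question below in the same voice, vocabulary "
    "level, and sentence rhythm.\n\nQuestion: ..."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```

A detector has no fixed signature to look for in output like this, which is why designing instruction and assessment on the assumption that students are using these tools is the more robust posture.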