AI Literacy: The Immediate Need and What it Includes
An AI Literacy plan for your school should be one of your highest priorities
In April 2023, Bill Gates wrote that we are now living in an AI world. Leading thinkers such as Max Tegmark and Stephen Hawking have said that AI will introduce the biggest changes in the history of human civilization (describing it as an “alien civilization” and “the biggest event in the history of our civilization,” respectively).
While there is some argument over exactly where this technology will take us and how quickly we will get there, there is little doubt that it will have a dramatic impact on the world; indeed, it already has. To understand this new world, it is important to develop basic literacy related to these technologies.
The idea that students need to develop literacy in AI is not a new one. Even before the radical explosion in AI tools, education theorists were stressing the importance of students developing AI literacy skills.
The term “AI literacy” was first coined by Burgsteiner et al. (2016) and Kandlhofer et al. (2016), who described the competencies needed to understand the basic knowledge and concepts of AI. Building on this, Long and Magerko (2020) defined it as a set of competencies that enables individuals to critically evaluate, communicate, and collaborate effectively with AI and to use AI as a tool online, at home, and in the workplace. Further, Ng et al. (2021a, b) framed AI literacy as part of every student's twenty-first-century digital literacy in work settings and everyday life and proposed it as a fundamental skill for everyone, not just computer scientists, incorporating Bloom's Taxonomy, Technological Pedagogical Content Knowledge (TPACK), and AI concepts, practices, and perspectives into the instructional design of AI literacy education.
The new reality of AI playing a role in our everyday lives makes this an even more essential and pressing concern.
In this essay, I will cover the key literacies I think students need to develop. If you wish to learn more, I’ve developed a new AI Literacy Course with Dr. Anand Rao that is aimed at developing AI literacy in middle school and high school students. Dr. Sabba Quidwai will join us for live instruction and is also contributing key content.
We believe it is important for students and adults to develop this literacy before the next academic year begins, not only so they can use AI tools properly but also so they can live well and make responsible decisions in this AI world, and, most practically, so they use the tools properly and ethically in school. With parents already using ChatGPT as a tutor for their children, it is critical that students learn to use it and similar tools properly and in a way that maximizes their contribution to their education.
What are the AI literacies I believe students need?
The Absolute Basics
Students and educators should develop a basic understanding of what AI is, how we got to where we are today, and where things are headed in the near future.
AI works in different ways and performs different roles, so understanding the basic difference between surveillance AI, predictive AI, and generative AI is important.
While it is not important to know the technical details of how large language models work, it is useful to understand the basics of how they function so their strengths and weaknesses can be understood. They are useful for many things, but absent a significant change in their architecture, there is no way for them to reliably produce facts. One lawyer who lacked AI literacy learned this the hard way.
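To make that point concrete, the short sketch below uses the small, open-source GPT-2 model through the Hugging Face transformers library (an assumption chosen for illustration; ChatGPT itself works on the same next-token principle at a much larger scale). Each run produces fluent but unverified continuations, which is exactly why "hallucinated" facts appear.

```python
# A minimal illustration of how a language model generates text:
# it repeatedly predicts a statistically likely next token, with no
# built-in step that checks the output against facts.
# Assumes: pip install transformers torch
from transformers import pipeline

# GPT-2 is a small, older model; the same principle applies to larger LLMs.
generator = pipeline("text-generation", model="gpt2")

results = generator(
    "The James Webb Space Telescope discovered",
    max_new_tokens=25,       # keep each continuation short
    num_return_sequences=3,  # ask for three different continuations
    do_sample=True,          # sample rather than always taking the top token
)

for r in results:
    print(r["generated_text"])
# The three continuations will differ and none of them is verified,
# even though each one reads like a confident statement of fact.
```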
It is also important to understand the differences in capabilities between the different tools, including the differences between ChatGPT 3.5 and 4 and what is available in Anthropic's Claude, Bing, Bard, Perplexity.ai, Stability.ai (AKA Stable Diffusion), You.com, and new releases by Meta, especially the latter's incredible language translation tools. Students should also understand the different text-to-voice (e.g., Synthesia), text-to-image (e.g., DALL-E 2, Midjourney), and text-to-video generators (e.g., Runway.ml, Stable Diffusion), the differences between them, and what to use them for and not use them for. Students need to know which ones have added internet-based search and plug-ins and which ones do not.
It is also useful to understand that AI is being developed in ways that will eventually push machines toward human-level intelligence (artificial general intelligence); this will likely be achieved in the next 5–10 years, though some say it could take as long as 20 years (much of the timeframe depends on what is included in various definitions of intelligence). Students should also understand that new models are being developed that could push machines beyond human-level intelligence (superintelligence) (LeCun, 5/24/23). Imagine what it will be like, when you are a bit older, to have five AI assistants that are smarter than you working for you at all times.
There are also some additional basic terminology and acronyms that are important to understand.
Practical Applications
Practical applications include the basics of prompting, the prompt lengths allowed in different applications (e.g., ChatGPT 3.5 vs. ChatGPT 4 vs. Anthropic's Claude vs. Bing), managing hallucinations and factual inaccuracies, proper use of plug-ins, and the co-pilot tools available in specific applications.
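For the prompt-length point in particular, it helps to show students that prompt size is measured in tokens, not words. Below is a minimal sketch, assuming Python and OpenAI's open-source tiktoken tokenizer; the context limits shown are illustrative assumptions, since vendors change them as models are updated.

```python
# Count tokens in a prompt so it fits within a model's context window.
# Assumes: pip install tiktoken
import tiktoken

prompt = "Summarize the causes of World War I in three bullet points."

# Use the tokenizer associated with a given OpenAI model.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
num_tokens = len(enc.encode(prompt))

# Approximate context limits (tokens shared by prompt + response);
# these numbers are illustrative and vary by model version.
context_limits = {"gpt-3.5-turbo": 4096, "gpt-4": 8192}

for model, limit in context_limits.items():
    print(f"{model}: prompt uses {num_tokens} of ~{limit} tokens")
```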
Students should also learn how to use the tools to support academic research, generate outlines, brainstorm ideas, write better emails, and organize information.
Students should also understand the importance of continuing to develop foundational knowledge, and that they need to follow the policies of their school and related organizations (AP, IB) that determine how single-artifact assignments (essays, papers) are judged based on how they were assigned. If students are going to learn the foundational academic knowledge and skills they will need in life, they still need to do the work in a way that is consistent with school and classroom policy. Currently, students are often left to figure out AI tools without any guidance, are often using them improperly, and are sometimes even punished for their use even though they were never taught how to use them properly.
Societal Impacts
AI works to replicate one of humanity's core abilities: intelligence. This will have radical impacts on society, and we cannot hide these changes from students; they need to understand them. Education systems now need to train students at the K–12 level to live in a society where they must interact with AI (Casal-Otero et al., 2023).
There are many potential impacts, but the immediate ones students need to understand are probably concentrated in a few areas.
Employment disruption. It is a reality that many jobs will be lost to AIs and that many jobs that exist now will not exist in the future. We hope, and many believe, that many new jobs will be created as a result of AI, but educators need to understand that the world we are entering will radically disrupt employment and radically change the knowledge and skills employers require. Employers are already starting to replace workers with AI, and more than 90% of employers want students with ChatGPT skills (The Hill, 4/18/23).
Universal basic income. A commonly proposed solution to the unemployment that will likely result is to guarantee everyone a basic monthly income. As AI-driven unemployment rises (at least in the short term), this will become an important political issue. Students should develop an understanding of the issue and of the broader structural economic and technological forces that shape wealth and economic distribution.
Deep fakes and societal disruption. One of the biggest immediate concerns the world faces is that images and videos can be generated that are so “real” that people cannot distinguish them from genuine ones; a recent study shows that 39% of the time people can't distinguish real photos from fake ones. This has the potential for tremendous social and political disruption. To give a concrete example, so many people believed one AI-generated photo was real that the stock market temporarily dropped.
It’s even starting to make it difficult for people to believe what is actually real, which is known as the liar’s dividend.
Manipulation through language and bot connections. Language plays a significant role in human communication, and generative AI makes it possible to present information that has been tailored through language in a dynamic, interactive way, which makes it more compelling and increases the potential for manipulation. Historian Yuval Noah Harari argues that the 2024 presidential election will be the last democratic election because, going forward, we will be thoroughly manipulated before we vote.
AI has gained some remarkable abilities to manipulate and generate language. … AI has thereby hacked the operating system of our civilization. On the eve of a presidential election, politicians are using it in the short term to control what you think, to end your independent judgment, and to end democracy. (Quoted in Boon, 5/17/23; full lecture.)
Emotional connections to bots. The ability of AIs to communicate like humans and to be represented as humans (anthropomorphized) makes it possible for individuals to develop relationships with these AIs, with all the emotional baggage those relationships can carry. This problem plagues adults, but it is becoming more and more of a problem for students on social media applications.
More on Snapchat risks from Amber Mac
Cyberbullying. Generative AI increases the risk and impact of cyberbullying.
Generative AI allows for both the automatic creation of harassing or threatening messages, emails, posts, or comments on a wide variety of platforms and interfaces and their rapid dissemination. Its impact historically may have been limited since it takes at least some time, creativity, and effort to do this manually, with one attack after another occurring incrementally. That is no longer a limitation since the entire process can be automated. Depending on the severity, these generated messages can then lead to significant harm to those targeted, who are left with little to no recourse to stem the voluminous tide of abuse or identify the person(s) behind it. Reports indicate that harassment via automated troll bots is a significant problem, and it seems only a matter of time before real-time, autonomous conversational agents take over as the primary vehicle for, or driver of, harassment. At best, it is incredibly annoying and at worst, it can overwhelm victims, greatly amplify the impact of the harassment and cyberbullying, create a hostile online environment, and lead to substantial psychological and emotional harm (Cyberbullying Research Center)
Superintelligence and existential risks. The issues mentioned above are not speculative; they are creating real problems in the real world. The most brilliant minds in AI are split about the near-term chances of the development of superintelligence and the risks associated with it, but those who are concerned (Yudkowsky, Hinton) make the basic argument, based on historical precedent, that beings who are more intelligent than us are likely to kill us if they find it convenient to do so. Others reach similar conclusions.
These catastrophic risk claims are all over social media, and students are aware of them. During the Cold War, students had similar fears of nuclear annihilation; today, approximately half of Americans report this fear about AI. Educators need to accept the reality of these concerns and discuss them directly with students.
Students need to be aware of these issues not only to avoid the harms of these technologies but also to be informed citizens who can participate meaningfully in a democracy. Our democratic world is only about 200 years old, and without active effort, we don't know whether it will continue in an AI world, as individuals with anti-democratic designs could use these technologies to take control of populations (Helbing et al.).
Broader Ethical Issues
The societal disruptions previously discussed obviously raise ethical issues, but there are a number of additional ethical issues to cover.
Discrimination. Since LLMs are trained on the corpus of human history, they can recreate historical patterns of discrimination. For example, since there have been more male CEOs than female CEOs throughout history, both text-based descriptions and images of CEOs are likely to be male. Generative models are likewise likely to represent teachers as women and Black and Hispanic individuals as people living in poverty. Predictive AI models may also contribute to discrimination by, for example, predicting that a Black or Hispanic individual is more likely to commit a crime.
Intellectual Property. LLMs are trained on the work of hundreds of millions of people throughout history. Revenue is generated from the output of the training, but no compensation is given to those whose original works were used. Moreover, art rendered through generative AI tools enjoys only limited protection under copyright law.
Energy. Training LLMs and serving their queries requires large amounts of electricity-intensive computing, contributing to climate change.
Labor. OpenAI has been criticized for employing thousands of people in the developing world and paying them only a few dollars an hour to review horrific images and text passages and filter them out (Harrison). This filtering is part of what prevents pornographic images and text from appearing in the output.
Privacy. LLMs create privacy concerns related to data collection and retention when users interact with the systems.
Education disrupted. New AI tools have significant potential to disrupt education because students are currently assessed largely through single-artifact assessments (research papers, essays) that AI tools can increasingly produce. The quality of AI output will only increase over time, and teachers' ability to determine whether a piece was written by an AI is rapidly declining, as AIs can be trained to write in a student's own voice.
AI denial. Generative AI bans in schools are arguably widening the gap between private schools that allow access and public schools that deny it. In practice, such bans exclude from the AI world only the least well-off students, those who cannot access ChatGPT on any other device or network. Are school bans on ChatGPT structural exclusions (Duffy, 2023)? If every job in the future will involve the use of generative AI, are schools obligated to teach students how to use it? If they don't, are they still relevant?
Ethical obligations to support the benefits. AI has the potential to deliver tremendous benefits in education, poverty reduction, and workload reduction. What ethical obligations exist to both distribute these benefits and train students to use them?
To act ethically in an AI world, students need to be aware of the ethical issues related to AI development; they should also engage with and discuss the more nuanced arguments and ideas surrounding these technologies.
Questions Related to What Makes Us Human
As machines develop capabilities that approximate human-level intelligence, there will be more and more discussion about what makes us human (a status traditionally defined by our extraordinary intellectual abilities). How are we different from AI, and why are we uniquely special?
And these questions extend beyond intelligence. If machines develop sentience (Al-Sabai) and consciousness (BBC), the question will become even more complicated, especially since there are no clear definitions of these terms.
Potential Benefits of AI
While there are many potential downsides to AI, there are also many upsides.
Economic growth. Integration of AI is likely to result in massive gains in productivity that will radically increase economic growth, creating the potential for a global reduction in poverty.
Health care. AI is driving medical innovations that will radically increase life spans through interventions such as mRNA vaccines (Constantino) and non-invasive language-brain interfaces, another subject in which students will need to develop literacy. Thanks to an AI-driven breakthrough, a paralyzed man was able to walk again.
Environment. AI has the potential to lead to advances in fusion energy that will radically reduce SO2 and CO2 emissions. It could lead to great advances in energy efficiency.
Education. Generative AI bots have the potential to make education available to hundreds of millions of students worldwide who lack access to it. These tools can also function as tutors to radically expand individual tutoring opportunities for hundreds of millions of students in the developed world. Beyond providing individual opportunities, the global impacts of developing the minds of hundreds of millions more individuals should not be underestimated (Gershenfeld).
When weighing AI's overall impact, students need to be aware not only of the concerns but also of the benefits.
AI-Human Collaboration
Communication. AI is not simply another piece of technology. It is a technology that we will interact with, not simply something that produces output as a search engine does. Just as we need to learn to interact with other people, we are going to need to learn to interact with AIs. Perhaps we should think of AI as our children (Kellis) or our interns (given their current level of intelligence, Mollick), but they are not search engines. Students, at a minimum, are going to think of these bots as their tutors, and without AI literacy, we risk them thinking of them as their close friends.
Legal rights. If AIs become sentient and/or conscious, should they be afforded legal rights? That may sound outrageous, but machine consciousness will force us to think about why we afford rights and to whom we afford them. It may also be in our interest to give them rights, as it will be easier to compete with them for jobs if they can’t work 24 hours a day. This will become an issue in democratic societies as these technologies advance.
Computer skills
While not everyone will become a programmer, understanding the basics of coding can help students understand how AI works. Knowledge of Python, for example, can help with ChatGPT API integration.
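As a hedged illustration, here is a minimal sketch of what that integration can look like, assuming the openai Python package's 2023-era (pre-1.0) interface and an API key stored in an environment variable; the ask_tutor helper is a hypothetical name for this example, not an official API.

```python
# A minimal sketch of calling the ChatGPT API from Python,
# using the openai package's 2023-era (pre-1.0) interface.
# Assumes: pip install openai, and an API key in the OPENAI_API_KEY env var.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_tutor(question: str) -> str:
    """Send one question to the model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a patient study tutor."},
            {"role": "user", "content": question},
        ],
        temperature=0.7,  # some variety, but not too random
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_tutor("Explain photosynthesis in two sentences."))
```

Even a short exercise like this helps students see that a chatbot is just a program exchanging structured messages with a model, which demystifies the tool considerably.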
Conclusion
There is a lot to learn, and I hope your school is on board. Since we are now all living in an AI world, it is essential for students and faculty to learn how to use these technologies properly. There is no longer any time to wait or plan; it's time to act.
As Peter Stone, a computer science professor at the University of Texas and chair of the One Hundred Year Study on Artificial Intelligence, wrote:
In terms of new courses, I think it would be very valuable for all K-12 students to have a course in artificial intelligence, both from a technical perspective—meaning what are the tools that are out there, how do they work, what can they do and what they can’t do—and also from the philosophical and ethical perspective. And maybe one course can mix the two of those. It would be a fantastic development if we could make sure that all students coming up through our education system were AI literate when they enter the workforce.
Bibliography
Casal-Otero, L., Catala, A., Fernández-Morante, C., et al. (2023, April 19). AI literacy in K-12: A systematic literature review. International Journal of STEM Education, 10, 29. https://doi.org/10.1186/s40594-023-00418-7
Heikkilä, M. (2023, April 12). AI literacy might be ChatGPT's biggest lesson for schools. MIT Technology Review. https://www.technologyreview.com/2023/04/12/1071397/ai-literacy-might-be-chatgpts-biggest-lesson-for-schools/
Kennedy, B. (2023, February 15). Public Awareness of Artificial Intelligence in Everyday Activities. Pew Research Center. https://www.pewresearch.org/science/2023/02/15/public-awareness-of-artificial-intelligence-in-everyday-activities/
Klein, A. (2023, May 10). AI Literacy, Explained. Education Week. https://www.edweek.org/technology/ai-literacy-explained/2023/05
Kong, S. C., Cheung, W. M. Y., & Zhang, G. (2023). Evaluating an artificial intelligence literacy programme for developing university students' conceptual understanding, literacy, empowerment and ethical awareness. Educational Technology & Society, 26(1), 16–30.
Lanze, J. (2023, April 28). Former OpenAI Researcher: There's a 50% Chance AI Ends in 'Catastrophe'. Decrypt.
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58(1), 504–509.
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers & Education: Artificial Intelligence. https://www.sciencedirect.com/science/article/pii/S2666920X21000357
Prothero, A. (2023, March 23). ChatGPT Is All the Rage. But Teens Have Qualms About AI. Education Week. https://www.edweek.org/technology/chatgpt-is-all-the-rage-but-teens-have-qualms-about-ai/2023/03
Su, J. (2023). Artificial Intelligence (AI) Literacy in Early Childhood Education: The Challenges and Opportunities. Computers and Education: Artificial Intelligence, 4.
Touretzky, D., Gardner-McCune, C., Martin, F., & Seehorn, D. (2019). Envisioning AI for K-12: What should every child know about AI? Proceedings of the AAAI Conference on Artificial Intelligence, 33(1), 9795–9799.
Stefan Bauschard has been actively involved in issues related to generative artificial intelligence since January 2023, when he hosted one of the first webinars on AI and education, drawing more than 100 participants to the two-hour event. In March, he published a co-edited 1,000-page volume on AI and education. He has spoken at conferences in the US (NDCA) and in the U.K. (Cottesmore) about understanding AI, trends in AI development, and impending educational disruptions. He will soon speak on AI at AIXeducation and the National Communication Association Convention. Staying abreast of current developments, he is a regular contributor to podcasts (EdUp AI; MyEdTechLife; Coffee for the Brain; D.E.E.P. Teaching; and Coconut Thinking (Bangkok, forthcoming)). He is acknowledged as a “Top Contributor” to the 3,000+ member Higher Education Discussions of AI Writing Group. He blogs about developments in AI related to education at stefanbauschard.substack.com, which has received more than 17,000 views in 30 days. He co-taught an AI course to debate coaches in February 2023 and has co-designed and co-taught an AI Bootcamp for education leaders with Dr. Sabba Quidwai. With Dr. Anand Rao, he also co-developed and taught an AI Literacy course for students in grades 6-12. He is currently working on a publication related to the use of debate as an instructional method in the world of AI. He is familiar with the debates on the social issues related to AI and the work of the major thinkers and doers in the field, including Sam Altman, Mo Gawdat, Geoff Hinton, George Hotz, Andrej Karpathy, Ray Kurzweil, Manolis Kellis, Yann LeCun, Emad Mostaque, Max Tegmark, Stuart Russell, and Eliezer Yudkowsky. He's very proud of the children's book he wrote with AI in 6 minutes and has enjoyed his time this summer working with debate coaches, teachers, and school technology officers on integrating AI into their classrooms and schools.