May Department of Education Report on AI in the Classroom
Below is a link to the report, along with quotes that highlight its main points.
https://www2.ed.gov/documents/ai-report/ai-report.pdf
Yesterday, the US Department of Education’s Office of Educational Technology released its report on AI and the future of teaching and learning. Quotes that represent the main ideas are below.
*Expanding the Concept of “EdTech”
-AI is far more than traditional EdTech
“AI can be defined as “automation based on associations.” When computers automate reasoning based on associations in data (or associations deduced from expert knowledge), two shifts fundamental to AI occur and shift computing beyond conventional edtech: (1) from capturing data to detecting patterns in data and (2) from providing access to instructional resources to automating decisions about instruction and other educational processes. Detecting patterns and automating decisions are leaps in the level of responsibilities that can be delegated to a computer system.”
-AI is interactive; traditional EdTech is not
“AI enabled educational systems will be desirable in part due to their ability to support more natural interactions during teaching and learning. In classic edtech platforms, the ways in which teachers and students interact with edtech are limited. Teachers and students may choose items from a menu or in a multiple-choice question. They may type short answers. They may drag objects on the screen or use touch gestures. The computer provides outputs to students and teachers through text, graphics, and multimedia. Although these forms of inputs and outputs are versatile, no one would mistake this style of interaction with the way two people interact with one another; it is specific to human-computer interaction. With AI, interactions with computers are likely to become more like human-to-human interactions (see Figure 4). A teacher may speak to an AI assistant, and it may speak back. A student may make a drawing, and the computer may highlight a portion of the drawing. A teacher or student may start to write something, and the computer may finish their sentence—as when today’s email programs can complete thoughts faster than we can type them.”
*General Benefits of AI for Students
-AI can strengthen individualized instruction
“AI may improve the adaptivity of learning resources to students’ strengths and needs. Improving teaching jobs is a priority, and via automated assistants or other tools, AI may provide teachers greater support. AI may also enable teachers to extend the support they offer to individual students when they run out of time.”
-Current personalization tools may automatically adjust the sequence, pace, hints, or trajectory through learning experiences.
“It is time consuming to customize curricular resources, and teachers are already exploring how AI chatbots can help them design additional resources for their students. An elementary school teacher could gain powerful supports for changing the visuals in a storybook to engage their students or for adapting language that poorly fits local manners of speaking or even for modifying plots to incorporate other dimensions of a teacher’s lesson.”
-AI supports tutoring
“Right now, many school systems are looking at high-intensity human tutoring to help students with unfinished learning. Human tutoring is very expensive, and it is hard to find enough high-quality human tutors. With regard to large-scale needs, if it is possible for an ITS (Intelligent Tutoring System) to supplement what human tutors do, it might be possible to extend beyond the amount of tutoring that people can provide to students.”
-AI can strengthen community curricular adaptation
“Developing resources that are responsive to the knowledge and experiences students bring to their learning—their community and cultural assets—is a priority, and AI may enable greater customizability of curricular resources to meet local needs.”
-AI will find patterns teachers miss
“AI may help teachers make better decisions because computers notice patterns that teachers can miss. For example, when a teacher and student agree that the student needs reminders, an AI system may provide reminders in whatever form a student likes without adding to the teacher’s workload.”
*Specific Benefits of AI
-AI benefits neurodiverse learners
“AI models could help in including neurodiverse learners (students who access, process, and interact with the world in less common ways than “neurotypical” students) who could benefit from different learning paths and from forms of display and input that fit their strengths.”
-AI benefits students with special needs
-“AI-based tutoring for students as they solve math problems (based on cognitive learning theories), adapting to learners with special needs (based on the Universal Design for Learning framework and related theories).”
-AI benefits those with IEPs
“By nature, teaching requires significant time in planning as well as to account for the breadth of needs across their rosters—especially for inclusive learning environments and students with IEPs and 504 plans. AI could help teachers with recommendations that are tuned to their situation and their ways of practicing teaching and support with adapting found materials to fit their exact classroom needs. For students with an IEP, AI could help with finding components to add to lesson plans to fully address standards and expectations and to meet each student’s unique requirements. Even beyond finding components, AI might help adapt standardized resources to better fit specific needs—for example, providing a voice assistant that allows a student with a visual difficulty to hear material and respond to it or permitting a group of students to present their project using American Sign Language (ASL) which could be audibly voiced for other students using an AI ASL-to-Spoken-English translation capability. Indeed, coordinating IEPs is time-consuming work that might benefit from supportive automation and customized interactivity that can be provided by AI.”
*Benefits for Teachers
-Teacher development and training
“Making teacher professional development more productive and fruitful. Emerging products already enable a teacher to record her classroom and allow an AI algorithm to suggest highlights of the classroom discussion worth reviewing with a professional development coach. AI can compute metrics, such as whether students have been talking more or less, which are difficult for a teacher to calculate during a lesson.”
“Toward automated feedback on teacher discourse to enhance teacher learning... Classroom simulation tools are also emerging and can enable teachers to practice their skills in realistic situations. Simulators can include examples of teaching from a real classroom while changing the faces and voices of the participants so that teaching situations can be shared and discussed among teachers without revealing identities.”
*Assessments
-Formative Assessments
“AI models and AI-enabled systems may have potential to strengthen formative assessments. In one example, a question type that invites students to draw a graph or create a model can be analyzed with AI algorithms, and similar student models might be grouped for the teacher to interpret. Enhanced formative assessment may enable teachers to better respond to students’ understanding of a concept like “rate of change” in a complex, real-world situation. AI can also give learners feedback on complex skills, such as learning American Sign Language or speaking a foreign language, and in other practice situations where no person is available to provide immediate feedback….”
Researchers have also embedded formative assessments in games so that students can show how well they understand Newtonian physics as they play increasingly difficult levels of a game. If a student can more easily ask for and receive help when they feel frustrated or confused, reducing those feelings can feel encouraging. Student feelings of safety, confidence, and trust in the feedback generated by these AI-enabled systems and tools are essential to showcase their learning.
-Automated Essay Scoring
“One instructive example is Automated Essay Scoring (AES). To become strong writers, which is a valuable life skill, students need regular and specific feedback. However, reviewing and providing feedback on essays is very time consuming for humans. Hence, Ellis Page provided a first vision for computer programs that could review and provide feedback on student essays in 1966, and much effort has gone into AES technologies in the intervening 56 years. Many research review articles are available to summarize the progress, which has been impressive. Further, some of today’s applications of AES technologies will be familiar to readers, such as Grammarly, Turnitin, and the various essay analysis engines used by publishers and assessment companies. Also note that while the traditional AES functionality emphasizes scoring or rating essays, newer AI-enabled products focus more on providing students with constructive criticism and developing their skills as writers. Writing is a life skill that is important to the pursuit of college and career ambitions, and developing writers require comprehensive feedback. If developers could inexpensively augment human feedback to developing writers with AI feedback, it’s possible that support for learning to write could become more equitable.”
*Concerns Related to AI
-AI Threatens Privacy
“A central safety argument in the Department’s policies is the need for data privacy and security in the systems used by teachers, students, and others in educational institutions. The development and deployment of AI requires access to detailed data. As AI models are not generally developed in consideration of educational usage or student privacy, the educational application of these models may not be aligned with the educational institution’s efforts to comply with federal student privacy laws, such as FERPA, or state privacy laws.”
-AI Produces Discrimination
“Issues related to racial equity and unfair bias were at the heart of every listening session we held. In particular, we heard a conversation that was increasingly attuned to issues of data quality and the consequences of using poor or inappropriate data in AI systems for education. Datasets are used to develop AI, and when they are non-representative or contain undesired associations or patterns, resulting AI models may act unfairly in how they detect patterns or automate decisions. Systematic, unwanted unfairness in how a computer detects patterns or automates decisions is called “algorithmic bias.” Algorithmic bias could diminish equity at scale with unintended discrimination. As this document discussed in the Formative Assessment section, this is not a new conversation. For decades, constituents have rightly probed whether assessments are unbiased and fair. Just as with assessments, whether an AI model exhibits algorithmic bias or is judged to be fair and trustworthy is critical as local school leaders make adoption decisions about using AI to achieve their equity goals.”
“In a simple example, if AI adapts by speeding curricular pace for some students and by slowing the pace for other students (based on incomplete data, poor theories, or biased assumptions about learning), achievement gaps could widen.”
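The pacing example above can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: the scores, the groups, and the "model" (a bare mean threshold) are invented solely to show how a rule learned from pooled, non-representative data can systematically slow one group of students.

```python
# Purely illustrative: a trivial "pacing model" trained on pooled data.
# Group B is under-sampled, so the learned threshold mostly reflects
# Group A, and every Group B student gets slowed down -- the systematic,
# unwanted unfairness the report calls "algorithmic bias."

def learn_pacing_threshold(scores):
    """'Train' the simplest possible model: pace up students above the mean."""
    return sum(scores) / len(scores)

group_a = [70, 75, 80] * 30        # 90 sampled scores, centered near 75
group_b = [55, 60, 65] * 3 + [60]  # only 10 sampled scores, centered near 60

threshold = learn_pacing_threshold(group_a + group_b)  # 73.5

# Every Group B score falls below the pooled threshold, so the "model"
# slows the pace for all of them.
slowed_b = sum(1 for score in group_b if score < threshold)
assert slowed_b == len(group_b)
```

A real adaptive system is vastly more complex, but the failure mode is the same: if the training data under-represents some students, the automated decision rule is fit to everyone else.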
-AI Threatens Evidence-Based Practices
“For example, the requirement to base decisions on evidence also arises in the Elementary and Secondary Education Act (ESEA), as amended, which introduced four tiers of evidence (see Figure 2). Our nation’s research agencies, including the Institute of Education Sciences, are essential to producing the needed evidence. The Blueprint calls for evidence of effectiveness, but the education sector is ahead of that game: we need to insist that AI-enhanced edtech rises to meet ESEA standards.”
-AI Creates Machine-Human Conflicts
Every guardian is familiar with the problem: A person or computer may say, “Our data suggests your student should be placed in this class,” and the guardian may well argue, “No, you are using the wrong data. I know my child better, and they should instead be placed in another class.” This problem is not limited exclusively to AI systems and tools, but the use of AI models can amplify the problem when a computer uses data to make a recommendation because it may appear to be more objective and authoritative, even if it is not.
*Guiding Principles
-Human in the Loop
“To this end, a first recommendation in this document (in the next section) is an emphasis on AI with humans in the loop. Teachers, learners, and others need to retain their agency to decide what patterns mean and to choose courses of action. The idea of humans in the loop builds on the concept of “Human Alternatives, Consideration, and Fallback” in the Blueprint and ethical concepts used more broadly in evaluating AI, such as preserving human dignity. A top policy priority must be establishing human in the loop as a requirement in educational applications, despite contrary pressures to use AI as an alternative to human decision making.”
“These and other choices need to be debated openly. For example, we may want to define instructional decisions that have different kinds of consequences for a student and be very careful about delegating control over highly consequential decisions (for example, placement in a next course of study or disciplinary referrals). For human in the loop to become more fully realized, AI technologies must allow teacher monitoring, have protocols to signal a teacher when their judgment is needed, and allow for classroom, school, or district overrides when they disagree with an instructional choice for their students.”
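As a thought experiment, the protocol the report asks for (teacher monitoring, signals when human judgment is needed, and overrides) can be sketched as a simple decision gate. The decision categories and function names below are hypothetical, not taken from the report:

```python
# Illustrative "human in the loop" gate: the AI may act on low-consequence
# choices (e.g., which hint to show), but high-consequence decisions are
# routed to a teacher, and a teacher override always wins.

HIGH_CONSEQUENCE = {"course_placement", "disciplinary_referral"}  # hypothetical categories

def decide(decision_type, ai_recommendation, teacher_override=None):
    """Return (final_decision, decided_by) for one instructional choice."""
    if teacher_override is not None:
        return teacher_override, "teacher_override"  # humans can always overrule
    if decision_type in HIGH_CONSEQUENCE:
        return None, "pending_teacher_review"        # signal: human judgment needed
    return ai_recommendation, "ai"                   # low stakes: AI may proceed

# The AI acts alone only on the low-consequence choice:
assert decide("next_hint", "show a worked example") == ("show a worked example", "ai")
assert decide("course_placement", "Algebra II") == (None, "pending_teacher_review")
assert decide("next_hint", "skip ahead", "review basics") == ("review basics", "teacher_override")
```

The essential design choice is that the review and override paths exist at all, so delegating a task to the AI never removes the human's ability to intervene.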
-Health, Safety, and Equity
AI does not have the broad qualities of contextual judgment that people do. Therefore, people must remain responsible for the health and safety of our children, for all students’ educational success and preparation for their futures, and for creating a more equitable and just society.
*Unique Role of the Teacher
“Today, for example, if an ITS (Intelligent Tutoring System) specializes in feedback as a student practices, a human teacher could still be responsible for motivating student engagement and self-regulation along with other aspects of instruction. In other contemporary examples, the computer ITS might focus on problem solving practice, while teachers work with students in small groups.”
“And yet, any teacher knows there is more to supporting learning than adjusting the difficulty and sequence of materials. For example, a good teacher can find ways to engage a student by connecting to their own past experiences and can shape explanations until they really connect in an “aha!” moment for that student. When we say, “meet the learner where they are,” human teachers bring a much more complete picture of each learner than most available edtech. The teacher is also not likely to “over personalize” (by performing like an algorithm that only presents material for which the learner has expressed interest), thereby limiting the student’s exposure to new topics. The nature of “teachable moments” that a human teacher can grasp is broader than the teachable moments today’s AI models grasp. From fixed tasks to active, open, and creative tasks. As mentioned above, AI models are historically better at closed tasks like solving a math problem or logical tasks like playing a game. In terms of life-wide and lifelong opportunities, we value learning how to succeed at open-ended and creative tasks that require extended engagement from the learner, and these are often not purely mathematical or logical. We want students to learn to invent and create innovative approaches.”
“To succeed with AI as an enhancement to learning and teaching, we need to always center educators (ACE). Practically speaking, practicing “ACE in AI” means keeping a humanistic view of teaching front and center. ACE leads the Department to confidently respond “no” when asked “will AI replace teachers?” ACE is not just about making teachers’ jobs easier but also making it possible to do what most teachers want to do. That includes, for example, understanding their students more deeply and having more time to respond in creative ways to teachable moments.”
*How AI Can Help Teachers
-AI voice assistant
“A voice assistant or other forms of an AI assistant could make it easier to stay organized by categorizing simple voice notes for teachers to follow up on after a classroom session ends. We are beginning to see AI-enabled voice assistants in the market, and they could do many simple tasks so that the teachers can stay focused on students.”
-Delegate homework help
“If the teacher can sit with the student for only three problems, perhaps they could delegate to an AI-enabled learning system to help with the rest. Teachers cannot be at their best if on call at all hours to help with homework, but perhaps they can indicate what types of supports, hints, and feedback they want students to receive while studying after school hours. An AI assistant can ensure that students have that support wherever and whenever they do homework or practice skills on their own.”
-IEP recommendations
By nature, teaching requires significant time in planning as well as to account for the breadth of needs across their rosters—especially for inclusive learning environments and students with IEPs and 504 plans. AI could help teachers with recommendations that are tuned to their situation and their ways of practicing teaching and support with adapting found materials to fit their exact classroom needs. For students with an IEP, AI could help with finding components to add to lesson plans to fully address standards and expectations and to meet each student’s unique requirements.
*AI Literacy – Administrators and Teachers
“In education, decision makers will need more than notice—they will need to understand how AI models work in a range of general educational use cases, so they can better anticipate limitations, problems, and risks.”
*AI Literacy – Students
(It) is also important that students learn about AI, critically examine its presence in education and society, and determine its role and value in their own lives and careers. We discuss risks across each section in this report. Here, it is important for students to become more aware of and savvy to the risks of AI—including risks of bias and surveillance—as they appear in all elements of their lives. In the recent past, schools have supported students’ understanding…
One clear implication in our listening sessions was that efforts to improve AI literacy in education could be important and helpful to society more generally. For example, one panelist said that improving AI literacy is an overarching goal, necessary if people are to contribute to how these technologies are designed. Another researcher was interested in how edtech can provide environments where students can experience having difficult discussions across perspectives, an issue endemic to present society.
*Obligations of Educators
“Everyone in education has a responsibility to harness the good to serve educational priorities while also protecting against the dangers that may arise as a result of AI being integrated in edtech.”
Recommendations
1-“We start with a central recommendation throughout this report. This recommendation was a clear constituent favorite. Indeed, across more than 700 attendees in our listening sessions, the predominant discussion tackled how constituents can achieve a consensus vision for AI-enabled edtech where humans are firmly at the center.”
2-“Here we call upon educational policy and decision makers at the local, state, and federal level to use their power to align priorities, educational strategies, and technology adoption decisions to place the educational needs of students ahead of the excitement about emerging AI capabilities. We want to strengthen their attention to existing state, district, and school-level policies that guide edtech adoption and use, such as the four levels of evidence in ESSA, the privacy requirements of FERPA, and enhanced policies to come.”
3-“We call for the R&D sector to ensure that product designs are based on best and most current principles of teaching and learning. The first decade of adaptivity in edtech drew upon many important principles, for example, around how to sequence learning experiences and how to give students feedback.”
4-“Technology can only help us to achieve educational objectives when we trust it. Yet, our listening sessions revealed the ways in which distrust of edtech and AI is commonplace. Constituents distrust emerging technologies for multiple reasons.”
5-“Our listening sessions also asked for more specific direction on the question of what education leaders should do (see Figure 15). The most frequent responses fit three clusters: the need for guidelines and guardrails, strengthening the role of teachers, and re-focusing research and development. These are activities that constituents are asking for and that could expand trust. The recommendations that follow respond to these requests.”
6-“Research that focuses on how AI-enabled systems can adapt to context (including variability among learners) in instructional approaches and across educational settings is essential to answering the question of, ‘Do specific applications of AI work in education, and if so, for whom and under what conditions?’”
Key Questions
1. What is our collective vision of a desirable and achievable educational system that leverages automation to advance learning while protecting and centering human agency?
2. How and on what timeline will we be ready with necessary guidelines and guardrails, as well as convincing evidence of positive impacts, so that constituents can ethically and equitably implement this vision widely?
WANT TO LEARN MORE?
Register for my Intro course with Dr. Sabba Quidwai. Our first live class is tomorrow! Join now and instantly access all of our training content!
Check out the book I co-edited: Chat(GPT): Navigating the Impact of Generative AI Technologies on Educational Theory and Practice.
Additional References
Holmes, W. & Porayska-Pomsta, K. (Eds.) (2022). The ethics of artificial intelligence in education. Routledge. ISBN 978-0367349721
White House Office of Science and Technology Policy (October 2022), Blueprint for an AI bill of rights: Making automated systems work for the American people. The White House Office of Science and Technology Policy. https://www.whitehouse.gov/ostp/ai-bill-of-rights/
European Commission, Directorate-General for Education, Youth, Sport and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators, Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756
Akgun, S., Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI Ethics, 2, 431–440. https://doi.org/10.1007/s43681-021-00096-7
Writing
Sharples, M. & Pérez y Pérez, R. (2022). Story machines: How computers have become creative writers. Routledge. ISBN 9780367751951
Tutoring
Dieterle, E., Dede, C., & Walker, M. (2022). The cyclical ethical effects of using artificial intelligence in education. AI & Society. https://link.springer.com/article/10.1007/s00146-022-01497-w
Mousavinasab, E., Zarifsanaiey, N., Niakan Kalhori, S. R., Rakhshan, M., Keikha, L., & Ghazi Saeedi, M. (2021). Intelligent tutoring systems: A systematic review of characteristics, applications, and evaluation methods. Interactive Learning Environments, 29(1), 142–163. https://psycnet.apa.org/doi/10.1080/10494820.2018.1558257
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. https://doi.org/10.1080/00461520.2011.611369
Literacy
Forsyth, S., Dalton, B., Foster, E.H., Walsh, B., Smilack, J., & Yeh, T. (2021, May). Imagine a more ethical AI: Using stories to develop teens' awareness and understanding of artificial intelligence and its societal impacts. In 2021 Conference on Research in Equitable and Sustained Participation in Engineering, Computing, and Technology (RESPECT). IEEE. https://doi.org/10.1109/RESPECT51740.2021.9620549; Zhang, H., Lee, I., Ali, S., DiPaola, D., Cheng, Y., & Breazeal, C. (2022). Integrating ethics and career futures with technical learning to promote AI literacy for middle school students: An exploratory