Dr. Chris Dede and the Necessity of Training Students and Faculty to Improve Their Human Judgment and Work Properly with AIs
We need to stop using test-driven curricula that train students to listen and to compete against machines, a competition they cannot win. Instead, we need to help them augment their Judgment.
Designing Schools AI Bootcamp; Educating4ai.com; Co-Editor Navigating the Impact of Generative AI Technologies on Educational Theory and Practice.
TLDR
*Even weak LLMs/narrow AI will radically change the way people work; we are already seeing it.
*Even weak LLMs/narrow AI will require changes in what and how students are taught in school (and outside of school).
*Even weak LLMs/narrow AI will require teachers and professors to upskill so they can learn to work with AI assistants. The alternative is for AIs to take the place of humans.
*LLMs that reach human-level intelligence (if they can be built) and/or new object-driven models may require more radical changes in education, but we will be prepared if we start training students now to improve their human Judgment.
There were many interesting talks at the AIx Conference, but the one that had the greatest impact on me was the presentation by Chris Dede, a Senior Research Fellow at Harvard GSE. This talk changed how I’ve been thinking about AI in education.
It changed my thinking because there is often a big debate about what AI will and will not be able to do and how those abilities will impact jobs and education. Skeptics of the power of AI models, and of their ability to develop toward human-level intelligence, often dismiss the abilities of AIs altogether, arguing that we should simply carry on as we are now in society and education because AIs will never be as good as intelligent humans.
But as Dede points out, even though he is skeptical of the LLMs that largely drive current AI technology, AIs, even with weaknesses in tow, will likely be able to do a lot of what we are teaching students to do in school. Dede calls this kind of calculative, fact-retrieval ability “Reckoning,” a distinction he draws from the philosopher Brian Cantwell Smith, in contrast to “Judgment,” the deliberative, context-sensitive thinking humans excel at. Teaching Reckoning knowledge is pragmatically irrelevant because it sets students up to compete with machines rather than to use the machines to augment their own abilities.
As Dede points out, and as we highlight in our AI Bootcamp, this is already starting to happen.
Let’s review the limitations Dede believes LLMs currently have. These include many of the common ones you are likely familiar with, such as hallucinations (factual errors) and a tendency toward discriminatory output.
He also pointed out that the models are more like parrots: they cannot understand what they are saying, they lack common sense, and they lack the abilities to learn, reason, engage in abstract thinking, be creative, plan, experience pain, demonstrate empathy, and love.
I think there is near-universal consensus that these models cannot do most of what he says they cannot, though many people now believe they can engage in basic reasoning and have some creativity (some argue they are already more creative than humans). I outline these arguments in this essay.
Regardless of these larger weaknesses relative to humans, however, the hallucination problem can be (largely) overcome through better training and basic advances.
Dede explains why these models are so important:
If we keep training students to be like Data, Star Trek's all-Reckoning android, they will, as noted above, lose:
This means working with students to develop more Judgment skills.
Functioning this way, LLMs will use their superior Reckoning abilities to augment our Judgment abilities.
His concern is that schools are still focusing on developing Reckoning abilities and knowledge in students, teaching them the very knowledge and skills that AI already has or soon will have, even with only the current models (or the slightly better ones that are coming), weaknesses in tow.
He even worries (it keeps him up at night) that schools will use current LLM technologies to further develop students’ Reckoning abilities.
We need to move education away from Reckoning and toward Judgment: “We are preparing students to lose to Reckoning machines.” Listen for yourself:
This, he explains in the lecture, means having students do more things in school rather than just listen.
This concern doesn’t just apply to what we teach students, but also how we teach them. Dede argues that we need to work on a feedback loop between AIs, students, and teachers.
He explains how the teachers, students, and AIs can work together:
In our AI Bootcamp, we focus on upskilling educational leaders and teachers to work well with AI assistants. In our AI Literacy course, we work with students starting in grade 6 to develop the same skills.
Judgment Abilities in Machines
As mentioned, Dede focuses on Judgment skills because he believes AIs cannot understand what they are saying, that they lack common sense, and that they lack the abilities to learn, reason, engage in abstract thinking, be creative, plan, experience pain, and demonstrate empathy. I think all or nearly all qualified AI scientists believe that.
As Dede acknowledges, however, there is a debate as to whether LLMs can develop these Judgment abilities. If they cannot, then it makes sense for humans to work with these AIs, and for schools to emphasize developing students' Judgment so they can use AIs as collaborative tools.
That is a good path based on where we are, and where we will likely be, with the technologies in the near future. But schools also need to start thinking ahead to a world where machines may develop these abilities and become smarter than us, perhaps by the time today's first graders graduate from high school.
How might this happen? Researchers who believe LLMs will never overcome the weaknesses that prevent them from developing Judgment abilities are working on object-driven/“World” models (instead of exclusively language-driven ones) that would have these abilities. One of them is Yann LeCun, a Turing Award winner who is a professor at NYU and the Chief AI Scientist at Meta. This slide is from a talk he gave just before the AIx Conference.
In the lecture, he outlines how the system they are developing works. You may want to listen to the whole talk, but for now I'll just pull parts of the conclusion about how it will be able to develop objectives, reason, and plan.
And in a recent Munk Debate he participated in, LeCun argued that these abilities will eventually allow AIs to experience empathy.
LeCun has argued in other talks that this will enable them to develop consciousness.
So it is possible that in 10-20 years (though some say sooner), machines will have all human-level intelligence abilities and even more, allowing them to surpass human-level intelligence. But that is not something we need to fear.
If/when these types of systems develop, AI will no longer be like the character Data; there will be an artificial Captain Picard. Even though the artificial Captain Picard will be smarter than humans in every way, we don't need to feel threatened by this. As LeCun points out (above), society (and schools) will simply adjust, as they always do, training students to work with machines that are smarter than they are.
Conclusion
For now, we are uncertain as to if and when computers will surpass human-level intelligence, but we do need to acknowledge that they can already do some things better than us. And, regardless of their total intelligence, they will be able to do more and more things as well as or better than us, which is why they will impact the job market. They can already do many of the things in the Reckoning category that we are training students to do, and that is why we need to adjust education to focus more on the Judgment category. This will also serve us well if machines become smarter than us, because interacting with such machines will demand strong Judgment. We may even need AI to help us augment our own Judgment.