"Our job is to teach the students we have. Not the ones we would like to have. Not the ones we used to have. Those we have right now. All of them." -Dr. Kevin Maxwell
This adage from Dr. Maxwell is frequently repeated in educational circles.
I think it’s a good one.
As a debate coach and occasional computer science (and now “AI”) instructor, I haven’t always practiced it. Sometimes it has been more tempting to cover content in a way that interests me. Other times, I’ve pushed my students further than they were ready or willing to go. Sometimes I started in a place they weren’t.
Despite my weaknesses, I’ve always recognized that the adage is the North Star. It’s hard to argue against it. If we don’t follow it, what makes what we teach relevant, and who does it benefit?
What does it have to do with AI?
Well, today’s students are AI-enabled. By now, almost everyone realizes that blocking AI tools on school devices only excludes the lowest-SES students and that AI writing detectors are generally useless. In fact, they are so useless that I hope there is a class action lawsuit against these companies someday so that schools can get back the money they were robbed of. Imagine if the money wasted on these detectors had been invested in preparing students and faculty for the AI World.
It’s just a reality that our cyborg students are here. Two brief articles I read this weekend inspired me to write this post about them.
The first is an interview with Sam Altman, where he points out the obvious.
One of the things that I think happens in general with technology and has been amazing to watch is how fluent young people, students, are with this technology and how they're adapting it into their lives.
And so, you know, I think it'd be great for the schools to do more, but young people are already doing so much with technology.
So, like us, students are using the technology, and they are going to continue to do so. Many think that learning how to integrate it into their work is essential to having any chance of being hired or building a competitive business in the future. Both are true, and many of their parents believe them as well.
These students are developing what Ethan Mollick and others have called “co-intelligence.”
Science fiction writers refer to individuals with co-intelligence as intellectual “cyborgs.”
In our paper, we explore how this may even lead to a new conceptualization of life. Advances in synthetic biology make this real.
Reality can be scary.
The second is a blog post by the always-insightful Marc Watkins that explores the tension on campus between those who want to allow generative AI into the classroom and those who don’t.
My reaction to his post, informed by Altman’s insight that all the students are using the tools and they aren’t going to stop, is that we have to teach the students in a way that is relevant to them and the world we live in if we want formal education to remain an important part of their lives.
To me, it is both quite simple and quite difficult: figuring out how to educate student cyborgs will be education’s greatest challenge. Until recently, no one had ever taught a student who uses active co-intelligence. Hyper-intelligent AI isn’t a calculator or a smart board.
But, as we will see, it’s not that hard.
And we need to start by accepting reality: intellectual cyborgs are here. These are our students. How education responds to the challenge of educating cyborgs will shape how cyborgs develop, just as how we choose to educate humans shapes how humans develop.
If we only teach human students and not cyborg students, we aren’t teaching the students we have.
And the students know that. And they are already starting to call our bluff. Many are writing (and otherwise working) with AI, and they know we can’t stop them.
And if we don’t teach them how to work with their bots, they’ll just learn more from the bots. The emerging teaching and tutoring systems are incredible: way beyond anything you’ve seen in ChatGPT-4.
Does this mean we don’t have to teach them? No. Left on their own, the lowest-SES students will be excluded, students will use the technology in inappropriate ways, and students who receive instruction in proper use will leapfrog those who try to figure it out on their own.
And, well, educators will be irrelevant.
How much time do we have to prepare for this? Not a lot, especially given how slowly education moves (idea generation, planning committees, infighting, decisions, resource acquisition and allocation, implementation). Practically speaking, it may be the fall of 2025 before most schools adapt instruction to the AI World.
In his post, Marc Watkins notes: “But let's hope AGI isn't anywhere near on the horizon.”
It’s hard to say if AGI (human-level intelligence) is on the horizon. There is a big debate as to how to define AGI and how quickly we will achieve it (which is largely related to how it’s defined), but we don’t need to be anywhere close to AGI for AI to transform education and society.
AI has already disrupted (to put it nicely) written assessment. It can already write better than most people. It has already read more than any person. It already “knows,” at a broad level (hallucinations in tow), more than anyone. It is already competitive with doctors at tumor detection and will likely soon exceed them. It’s already better than humans at hedging the equity markets. It’s already competitive with human forecasters.
AI will soon be able to produce assessments that are more valid and reliable than humans can produce.
AI agents, which can already do a significant amount of academic and administrative work when given nothing more than a goal, are very close to widespread use. By this time next year, most of us will be using them. Some kids will have them direct the development and production of their science projects.
We don’t need AGI for any of those things.
Our students and employees will work with AIs like these every day. Now imagine what they will work with in 5 years.
As educators, we aren’t going to win a battle against AI. Nor will our students.
I strongly believe that if it's a forced choice between AI and teachers, AI will win in most instances over time, especially in grades 9–16. Playing the AI vs. human game sets us up to fail.
The model that is needed is Student-AI-Teacher. Students and teachers, working together with “co-intelligence,” can thrive. The results for everyone will be spectacular, and I don’t really see any alternative if we want to teach the students we now have.
This morning, Michelle Kassorla posted about how she’s doing that with writing.
Beyond preparing students for AI collaboration in specific areas, schools are actually well-prepared for instruction in an AI World. As Altman also noted:
The main thing that I learned in school was not how to use any particular thing or any particular piece of knowledge, but it was like the meta ability to learn new things, to be curious, to come up with new ideas, new ways to look at problems.
And I think fundamentally that's what schools should teach. Of course, it's helpful to make people more proficient with the current tools and what they're gonna need to work with as they enter adult life. But the lessons that stuck with me most from school were not about any specific tool or anything like that.
These are all fundamental to “AI literacy.”
Teachers and professors, working with students and AIs, can help students develop the meta ability to learn new things, to be curious, to come up with new ideas, and to think of new ways to look at problems. Hopefully, they are already doing that, though maybe they need to do more of it.
And they can help them work with their AIs to search, discover patterns, and establish priorities in a way that will help them solve problems and make better judgments.
There is no better time than in their youth for students to start envisioning new solutions to old problems that we adults have never been able to solve.
They aren’t blinded by our old-fashioned filters. Like AIs, they can see patterns we cannot see and make connections among ideas in ways we cannot.
They aren’t limited by workplace politics.
They believe in the change we’ve given up on.
They apply the tools we don’t want them to use.
They can learn anything they want. They aren’t bound to the information in an old 400-page textbook that’s tied to a test that lets adults cash in on what they tell the students they need to know.
Their “reality” isn’t shaped exclusively by textbooks and the sanitized evening news. They see the world in the raw, on TikTok and Twitter. Many probably learn more from the former than from any school textbook.
They know we no longer control the flow of information. They know that the ideas we inscribed in their textbooks have accelerated climate change, growing inequality, government centralization, warfare, discrimination, and genocide.
Our cyborg students want something new.
We can work with them if we want; we are, after all, in the schools with them. Or we can ignore them, teach the different students we wish we had, and leave them to figure things out for themselves.