September 2023: The Secret Intelligent Beings on Campus
Many of your students this fall will be enhanced by artificial intelligence, even if they don't look like actual cyborgs. Do you want all of them to be enhanced, or just the highest-SES students?
Stefan Bauschard — Join Our Free Webinar on AI in Education (July 25)
Educating4ai.com (Students in grades 6-12, Cohort 1 starts this week); AI BootCamp (Adults, Enrolling Cohort II now). Contact stefanbauschard@globalacademic.org for institution-specific course adaptations and content licensing. Follow me on LinkedIn and check out my co-edited 1,000-page volume on ChatGPT for educators.
When the school doors swing back open in the fall of 2023, there will be intelligent beings on campus who aren’t on the rosters: Secret AIs that can both tutor students and complete much of the schoolwork assigned to them as well as, or better than, many of them can.
They are intelligent beings with “silicon stack” brains (Hotz; 21:3) that many faculty and administrators (FA) don’t know about. Some FA will have heard of them; some may have interacted with them; some will think they have banned them from the building or the classroom, locking out this threat to traditional academics the same way they have locked out physical security threats. They think intimidating AI writing detectors can stand guard.
But the academic doors of today’s schools are porous, and the AIs will enter on home computers, phones, iPads, personal laptops, and even Apple watches. While schools are bastions of physical security, today’s intellectual networks are ubiquitous, and there is no controlling the AIs.
The intelligent machines are not deterred; they know the AI text detectors are becoming laughingstocks and can be defeated with two quick jiu-jitsu moves. Spring 2023 ChatGPT bans couldn’t stop them either. A June 2023 survey by YouthInsight showed 70% of 14–17-year-old Australians have used ChatGPT; 59% used it for schoolwork or study, and 42% for completing school assignments (Kovanovic & Dawson). Surveys show at least 22% of US college students have been using AI, and those are just the ones who admit it. Informal conversations reveal the obvious: many students are using AIs, and all the students know it, even if the FA are unaware.
How many are there? It’s hard to say; they are secret, but here’s a guess. If your building has 1,000 students and 50% of them are using AIs to help with schoolwork, multiply those 500 students by the number of AIs each is using, say five, and you get 2,500. So, yes, in the fall of 2023, there will be at least three times as many intelligent beings in your school as there were last year, and you probably won’t even know they are there; you certainly won’t know about all of them. This percentage will grow over time.
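The back-of-envelope estimate above can be written out explicitly. A minimal sketch, where every input (1,000 students, 50% adoption, five tools per user) is the article's illustrative assumption rather than measured data:

```python
# Back-of-envelope estimate of "secret AIs" on campus.
# All inputs are illustrative assumptions from the text, not measured data.

def estimate_ai_population(students: int, adoption_rate: float,
                           tools_per_student: int) -> int:
    """Estimate how many AI 'beings' are in use across a student body."""
    return round(students * adoption_rate * tools_per_student)

students = 1000        # assumed building enrollment
adoption_rate = 0.5    # assumed share of students using AI for schoolwork
tools_per_student = 5  # assumed distinct AI tools per AI-using student

ais = estimate_ai_population(students, adoption_rate, tools_per_student)
total_beings = students + ais

print(ais)                      # 2500 AIs
print(total_beings / students)  # 3.5x the "intelligent beings" of last year
```

Changing the adoption rate or tools-per-student figure scales the result linearly, which is why the count can only grow as AI use spreads.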
Who are these Secret AIs?
Paper writers. Last spring, there was only one paper writer: ChatGPT, and some FA started to catch on. ChatGPT wrote in robotext with a lack of perplexity and burstiness that made him easy for the guards to catch. But he’s gotten a bit smarter and made some friends to expand his operations. Now he’ll write in the voice of individual students. And he teamed up with Conch.ai to write papers in disguise, and he’s going to hang out with the students in Microsoft Word to “help.” He’ll be there with his friend Bard from Google Docs, doing the same (“Help Me Write”). Perplexity will be hanging out over there as well, tossing some citations at the students when they need them.
Philosophy professors. Students can have conversations with the Pi bot about consciousness, the hard problem of consciousness, the differing views of Christof Koch and John Searle on the topic, the Singularity, different definitions of humanity, AI rights, and the future of education. Pi, built by Inflection AI, runs on a large language model.
History buffs. I continue to be fascinated by the students in my AI Literacy course. In one assignment, we ask students to consider a homework assignment they did last year and how they might have completed it faster, yet properly, with ChatGPT. One student wrote about an assignment on the rise of communism in Russia during WWI. In the school assignment, students had to write six paragraphs with three pieces of evidence related to three significant events in the rise of communism. As the student pointed out, “I could have stated the three main events of the rise of communism and asked AI to state the key details of each main event.” I tried this, and it worked quite well; when I fact-checked the output afterward, there were no hallucinations.
Math problem solvers. Photomath found its way into schools before ChatGPT; students can enter problems and it can do the work. Now Code Interpreter and Wolfram can show the work as well. Math is ordinarily assessed by testing, so the impact of any work it does on the accuracy of assessment is minimal, but students who are aware of these technologies will have pretty awesome tutors.
Social media influencers. AI has long driven social media by predicting what people want to see and prioritizing that content. Now students can chat with “MyAI” on Snapchat on their phones about anything they want. Soon, these tools will be able to generate content tailored to each user’s interests. Not all uses of this are positive; the potential for bullying is an obvious concern.
Counterfeiters. Bad actors will use AI to impersonate the voice and image of individuals, including school administrators, creating difficult security and disciplinary issues.
How Smart are these AIs?
There is a huge debate about how intelligent AIs currently are and how smart they may get.
Almost no one believes they are conscious, and, at best, they have limited reasoning skills. But GPT-4 reportedly has an IQ of 152; that’s not bad compared to the average human IQ of 100. It can score 1460 on the SAT and a 5 on most AP exams. Hell, even if they get 20% wrong, they are doing better than at least the bottom 50% of our students. Many students’ grades would improve if they achieved 80% accuracy.
Silicon brains can process and share a lot of information that is often (though certainly not always) accurate; they are at least somewhat creative; and they can do math with the help of their friends Wolfram and Code Interpreter. Although they do not store content knowledge the way we do and struggle with memory, their factual accuracy has improved, and they’ll check the internet for you. As they train on narrower ranges of content (like subject teachers), their factual accuracy will improve dramatically.
They can do a lot, but not everything. No one thinks AI currently has human-level intelligence, but many argue it shows “sparks” of human intelligence and reasoning. Others argue AIs cannot yet “perceive a situation, then plan a response and then act in ways to achieve a goal” (Yann LeCun), cannot physically manipulate the world (David Foster), and cannot engage in abductive reasoning (Tim Scarfe).
Intellectually, I find this debate interesting, but as an educator, I don’t think it matters that much. What matters is that AIs can do a lot of what students and teachers do in school: AIs cannot drive cars well, but they can write very strong English essays, solve math problems, tutor students, and discuss the issues of the day, including artificial intelligence and economics. They can also already do much of what many people do at work. AIs are very good at knowledge work, which is one of the cornerstones (perhaps the largest) of modern schooling.
And, of course, IQ aside, these AIs have way more content knowledge than any single human.
The level of intelligence AIs have now, however you want to define it, is having a dramatic impact on education and will continue to do so. As its intelligence capabilities improve, the impact will grow.
There is a significant gap between where we currently are (engaging in natural language conversations, maybe some basic reasoning) and perceiving a situation or problem, devising a solution (including through abductive reasoning), planning a response, and acting on it (at which point most people believe AI will be sentient or conscious).
It may take a while to close that gap (there is a big debate about this), but I think it’s fair to say most agree this will eventually happen in 3–20 years; one new prediction even says 2024, based on the potential for ChatGPT to read a billion-plus tokens (approximately 750 million words) in two seconds, which means it could read the entire internet in minutes. It’s also fair to say most agree that unexpected developments could trigger a radical breakthrough that accelerates this rapidly. Google’s DeepMind may release a new model in December that has the ability to solve problems and plan.
What Does this Mean?
It means four simple things.
You can think of machines as currently having enough intelligence to complete most school assignments and get reasonable grades on them.
You can think of machines as having enough intelligence to provide reasonable tutoring support.
These machines will be in your schools in the fall, and you can’t meaningfully restrict their presence.
The machines' ability to complete schoolwork and teach will improve over time; the question is how quickly this will happen.
Given the existence of these intelligent machines in your school, how will you respond?
Do you still expect the same level of work from students, or will you expect better? Enabled by AI copilots, their ability to do high-quality work should expand radically.
Do you want every kid to have one of these tutors, or would you be satisfied knowing that only 50% of the students have one? Would you be okay with the fact that those who did were probably your higher-SES students, already performing in the top quartile academically?
Will you inform teachers that these AIs are present and assisting students?
Do you think your school could be even better if the teachers worked with the AIs and the students rather than ignored them?
Do you want to start thinking about how education in your building should change as the intelligence of machines advances? Do you want to start talking with others in your school or building about that?
Do you want to start thinking about how education might need to change when everyone has an AI co-pilot at work?
Over time, do you think AI will, or should, change the role of the teacher? ChatGPT thinks so.
Do you think it’s time to spend more time thinking about this?
It seems to me that we need to teach AI-enabled humans differently than non-AI enabled humans. Many of your students this fall will be enhanced by artificial intelligence, even if they don't look like actual cyborgs.
We tell our students to prepare for the future; we should do the same.