Podcast with Adam Pacton: The Higher Education Crisis & the Future of Written Assessment
The essay has stuck around... Should it stick around? I don't think so.
I'm not a K-12 expert, but one of the reasons the essay has stuck around for so long is that it has structural and semantic components that are easy to assess. And so we can, you know, say this person is here, this person is here. It's a great classificatory tool. Should it stick around? I don't think so. I think it's time that we think about what it means to compose, what it means to do analysis. We look at how things are put together, how things are taken apart, and then how we can, you know, to use Aristotle's definition of rhetoric, how we can look at all of our available resources to best accomplish our ends.
— Dr. Adam Pacton
Is your campus ready for the AI revolution—or waiting for someone else to figure it out?
In our latest AI x Higher Ed Podcast episode, Adam Pacton, PhD (inaugural Dean's Fellow for AI Literacy & Integration at ASU) explains why 2025–26 is a tipping point for higher education and what must happen now to avoid irrelevance.
Key insights you’ll hear:
- Why AI literacy is no longer optional for students and faculty
- How ASU is rolling out faculty AI upskilling and cross-campus integration
- The coming “AI digital canyon” and what it means for equity and access
- Why writing, assessment, and knowledge production are being redefined
- Practical steps to spark intrinsic motivation and protect student agency
Adam’s message is clear: if higher education doesn’t take intentional action now, students will learn AI elsewhere—and universities risk being left behind.
**[0:48] Stefan Bauschard:** Hi, welcome everybody. I'm Stefan Bauschard.
**[0:50] Anand Rao:** And I'm Anand Rao. Welcome back. We're really excited about the discussion we have today with Dr. Adam Pacton, who comes with a great deal of experience and from an institution many of us look up to for its policies on AI and education. We really can't wait to get into this discussion.
**[1:08] Stefan Bauschard:** Yeah, me neither. Adam, first, congratulations on your new role as the inaugural Dean's Fellow for AI Literacy and Integration in the College of Integrative Sciences and Arts. For our listeners, what is the core mission of this new role, and could you reflect on any aspects of it that are unique to CISA?
**[1:33] Adam Pacton:** Absolutely. So the core mission is, as it sounds, to lead and promote AI literacy in the College of Integrative Sciences and Arts, or CISA, at ASU. What's really key here is in the name of the college and in the mission: being integrative, pulling things together. And I think that captures my two key priorities for this year, and what I think a lot of our priorities are for this year.
First, I'm working on creating an AI upskilling program for faculty and staff within our college, which we're launching in just a few weeks. What really makes this stand out from similar programs I've seen, even within my own institution, is that we're moving away from the "here's lots of learning assets" model: you know, go learn, and now you do AI. We're moving much more toward getting people into Gen AI as fast as possible to find ways that are value-adds for their roles: start with their roles, start with what they're working on, and then move to Gen AI, rather than figuring out a way to shoehorn this technology into their roles and places. The other key priority is creating and facilitating a recurring newsletter that collects the disparate institutional information around AI. ASU is a massive institution, so we have some people talking and working over here, some working over there, and sometimes communication gaps can arise. Our newsletter is designed to bring those together and push out both to ASU and beyond. So we'll be featuring a lot of top voices in AI, recent publications, recent innovations. As you both know, LinkedIn has become this very interesting community of practice, so we're creating feedback loops with some of the work that's on there and elsewhere, and really focusing those channels of communication to make keeping up and catching up a lot more legible and easier for our faculty and staff.
**[3:52] Anand Rao:** I definitely want to sign up for that newsletter.
**[3:55] Adam Pacton:** It'll be publicly available, so I'll definitely share it with you.
**[3:58] Stefan Bauschard:** That's fantastic. Yeah, I love the way you outlined in that first priority the importance of not just shoehorning something in. For many of us, the first step is just trying to get a better understanding of how it fits within the roles we have. That's a more sophisticated way of looking at it than what most of us are doing at our own institutions. That's great.
**[4:20] Anand Rao:** And I think that speaks to the pivotal moment we're really in. We're starting to see such rapid growth of AI and more significant material action taken by universities. You've had a chance to view it not only from a behemoth like ASU but also to see what other schools are doing. Give us the 30,000-foot view. What's the state of AI in higher ed today? Are we in a state of panic? Is it productive innovation? Are we somewhere in between? How would you chart it out for somebody who hasn't followed it closely?
**[4:48] Adam Pacton:** Yeah, I think it's an all-of-the-above situation. Since last fall, I've been telling as many people as would listen that this was the critical year in AI, AY 25-26. And we're seeing that reflected in lots of different ways, from large government policy to investment, to K-12 policy linking up with higher ed, to very heterogeneous state and institutional policies. It is a very diverse and sometimes chaotic landscape. I think some schools have built a lot of infrastructure and made great cultural inroads. I think others, both schools and individuals, have continued to kick the AI can down the road with the assumption that some sort of consilient event, some sort of clarity, will emerge that will then be broadly applicable to everyone. And I think avoidance, at both individual and institutional levels, is no longer possible. We're in a situation where schools either have to focus their education around AI, or students are going to learn piecemeal, or students are going to learn through one of the many developer academies that are springing up. We see a large movement around OpenAI, Google, Anthropic: large, systematic learning ecosystems, and they're calling them academies, springing up. Which means that with the education around this paradigm shift, we're in a position where, if we do not have clear vision, it will go to others to guide it, and their on-ramp will be much easier and integrated with their other systems. So it is an exciting, critical, and really uneven moment for AI and education.
**[6:57] Anand Rao:** I was just going to ask a little bit of a follow-up there. I agree this seems like a very important year. We've matured in our understanding of generative AI a bit, and some faculty have had a chance to reflect on it. What about those schools that don't seize this moment? Do you see them falling by the wayside? Are they going to be able to catch up later? What would you say to schools, or faculty at schools, that are thinking, "Hey, we're not ready to do enough this upcoming year"?
**[7:29] Adam Pacton:** I would give the same advice that I give individuals. You know, I'm in a house with an award-winning author, and I have a lot of people in the writing community who have been in that more avoidant group when it comes to uptake. And the advice I give everyone is: start. Start. Just start. Finding those starting points that can kindle intrinsic motivation is how individuals and institutions are going to be able to catch up, because I think anyone who tells us that there is a point to be caught up to at this point is selling us a product. And that is the other warning I would give those who are still in the wait-and-see, let's-slow-down camp: if you cannot develop contextual, institutional, mission-driven approaches to AI adoption and integration, you will reach a point of critical mass where you have to take in somebody's pre-built tool. We're seeing that in some large integrations we may have seen news about today. We're seeing that in some ed-tech companies' approaches. And what is key to me is agency, institutionally and individually, and maintaining that. So again, and I'm sure we'll talk more about this, my advice is just start. Start small, start moving. It doesn't have to be perfect, but you do have to start.
**[8:58] Stefan Bauschard:** Yeah. One of the concerns I have is that some people approach it and say, okay, students will still learn their biology, their chemistry, their composition in the university, and then they can just go learn AI at one of these academies, or online, or in some special program. But the problem I see is that this is going to change all those subject areas, right? What you're going to want to learn in biology, developments in biology, that's all going to be changed by AI. How you write, and we can talk about this more later, is probably going to be changed by AI. So if universities, and really any educational institution, say, well, you can learn about the AI elsewhere, and students do learn about it elsewhere, that really doesn't address the core question. They're going to start learning about chemistry elsewhere too, in an integrated AI world. So I wonder if you might speak to that. From your headshakes, you seem to agree a little, but to me that presents an existential question, at least to current educational institutions. Not just whether they're going to provide the education about AI instead of one of the companies, but whether they're going to continue to provide education that's relevant in the AI world. Are we going to have that provided by the AI companies too? Just to give a simple example, DeepMind seems to be pretty clued in, so to speak, about biology, right? Are they just going to start crowding out traditional university infrastructures?
**[10:40] Adam Pacton:** I think those are spot-on questions. What it means to create, curate, and advance knowledge is changing across disciplines. In my own field, what it means to write has been cracked wide open. It's a totally different ballgame now. So there are institutional questions, there are practical questions, there are very real return-on-investment and student-selection questions. As a student, if I were able to learn an institution's approach to AI right now, and it was very, I don't want to say backwards, but very hesitant and non-adoptive, and by the way, I can do that in ten seconds via an AI-generated search, I'm not going to go to that university. Everyone is aware of the impact on jobs, on the prospects of jobs across all areas, including thought work, and especially thought work. So I think you're absolutely right. I was trying to keep it a little bit away from crisis language, but this is a serious moment of crisis: how we know what we know. This needs to inflect all of our disciplinary work. It doesn't imply full replacement. It doesn't imply throwing out everything we know. But it does imply different approaches to pedagogy, to assessment, to knowledge production via peer review. All of these things are now affected, and they are all very clearly and objectively affected. So yeah, I would be very concerned if my institution did not have some sort of vision, some sort of road mapping in place for broader AI integration and literacy training, or fluency training, depending on who you're talking to.
**[12:51] Anand Rao:** I think you're spot-on with this, and it's also because AI is not the only threat. It's not the only challenge higher ed is facing. Think about some of the other challenges that were coming up before generative AI presented itself as something so disruptive. Would you speak to that a little bit? Think about a parent considering sending their child to college. It's not just a question of whether the child will be prepared for AI, or whether the school will be ready for it, but the fundamental question of: does my child still need higher education?
**[13:22] Adam Pacton:** I think that's a fantastic question. To provide a little bit of context: when I originally came to ASU, I came as one of the first designers of MOOCs in the MOOC wave. I personally designed the first for-credit college writing MOOCs, way back in 2016. And back then, I could see there were some fracture lines we were facing in higher ed. These were around things like scalability and personalization, and the tension between the two. They were around the validity of our assessments across all areas of inquiry, and I mean validity in the strong sense. And there were very real questions around ROI for the student degree. That's not even touching on some of the financial cost-benefit relationships, especially with the radical rise in tuition since I was in undergrad, which we'll say wasn't very long ago, but which I think we all know was quite a while. So that very question of what the purpose of higher ed is in this context is, again, deeply contextual, but it is not to create and sustain inequities. And one of my great, great concerns is this: we've heard the term digital divide for a long time. Now AI represents the possibility of digital canyons opening up. If you have students who have the money for not even the top tier, but the entry paid tier of AI, and they're using it on a regular basis, the ability of those students to complete work, to secure jobs, to communicate effectively, to navigate our current society will be radically different from that of students who don't. So in some ways, higher ed is at an inflection point in terms of creating opportunities for equity with our students. And that's within individual disciplines too. People doing non-AI-augmented research versus AI-augmented research: it's night and day. I can put together a fully fleshed-out course, undergrad level, we'll say, every bit of it, all the programming for Canvas, all the outcomes, really aligned, high-level, QM rubric, top-shelf stuff, in a couple of hours now, on my own, outside my subject matter. That is my ability to do that versus someone else's ability to put together a class in their own area, in their own discipline. It's night and day. And, I know I'm ranging far here and I'll rein it in in just one second, I was talking with someone today about the situation with résumés and cover letters. Imagine a group of students who understand not only how to leverage AI to tailor résumés and cover letters effectively to different audiences, but who also understand that, technically, these aren't being read by humans anymore. They're being read by AI. So the old rules no longer apply. These are the kinds of critical data pieces that all of us need to have, and our students need to have. No matter what our reservations or objections are, our use and our behavior versus our knowledge and our integration are two different sets of considerations. And again, apologies, I know that ranged widely.
**[17:31] Anand Rao:** No, I think you're hitting on a really important point. At my institution, talking with colleagues, we're very concerned about that equity question. When I talk to other schools about this as well, unfortunately, that sometimes means making sure everyone's using a more entry-level model, because it's easier to ensure that everybody has access to it. But what I think you're pointing to is not just that we need to make sure everybody has access, but that they all need access to the best models and the best practices, because if not, they're going to be left behind.
**[18:00] Adam Pacton:** And I think you're pointing to something I have struggled with at times in my own institution and with people I've worked with at other institutions, which is differential access even when you have access to high-quality tools. A lot of institutions have invested hugely in building homegrown AI ecosystems; I'm hearing them called walled gardens or guardrail systems. That's fantastic. There's a lot of great work you can do there. But what do those students do when they leave that walled garden? What literacies do they have in place to keep themselves safe? How do they understand the ubiquitous AI interfaces they're encountering everywhere else? Similarly, if you have access to really high-level models, your ability to get things done versus your free-tier ability to get things done, as you were pointing out, is night and day. So there are all these needles to thread in terms of how we teach, what we teach, what the appropriate model to use is, how we come to know what these differences are, and then, and this is the question that keeps me awake at night, how we really think about the fact that the vast majority of our students who are coming in as freshmen or as transfer students are mobile-first. That layers a whole other level onto the interface, onto the UX and the LXD, that many of our institutions and instructors just aren't prepared for.
**[19:42] Stefan Bauschard:** Yeah. And we're also entering a world where people say the new primary interface is not going to be the desktop or the laptop, or maybe the iPad or the phone, right? It's going to be a whole new class of device these companies are developing; you're going to be able to create your home hologram reality, right? That's going to come challenge us next. And it obviously takes educational institutions time to adjust. But here's the way this first hit universities and K-12 schools: when ChatGPT 3.5 came out, kids were using it to write their papers. And I always say that technologies changed before, but nothing really forced schools to adapt. We always say we want to prepare kids for the next level of technology, we want to prepare them for the information age, but we didn't have to, right? Because there was nothing really about the information age other than, okay, it was easier to get the articles you were supposed to read online than to go get them at the library. But generative AI is different, since it can do the work that is assigned to students, right? And to the point you just made: the more sophisticated the model, and this isn't surprising, since the better models have more intelligence, the better it can do the assignments, right? But to go back to what you talked about, assessment: in universities, one of the primary means of assessment is the written paper. And that's your background, right? Your area of expertise. I'm wondering what thought you've given to how, generally speaking, or you can be specific, we're going to change how we teach students to write. If we're going to change it at all; maybe we're going to keep things the same. But how are those changes going to occur? When teachers ask me, K-12 teachers or writing professors who don't really know very much about AI, I find that the hardest question to answer, because it comes right at the core content they're supposed to be teaching. They teach students how to write, and the AIs can write. If they teach some other class, I have all kinds of suggestions for how you can change your assessment and classroom activities to move away from the paper. But in writing especially, there are still the standards they have to teach, particularly in K-12, and those haven't changed. So I'm just wondering if you could speak to this.
**[22:13] Adam Pacton:** Absolutely. And I don't think it's hyperbole to say that's the multi-billion-dollar question. My short answer: way back when I was in grad school, I did a lot of research on writing knowledge transfer, and on what we're assessing and how the assessment actually works in writing. And my somewhat radical position would be that our assessment of writing doesn't work now. We just think it does. We think we can go through a piece of writing and, in our minds, keep primary traits in place and separate as we read, and assess them in parallel with any degree of reliability. When you read the research on assessing writing, you see very clearly that inter-rater reliability is very, very low. In fact, there are decades of people spending time on things like norming, on things like trios, because we know that even on the same day, my ability to reliably assess is not great. Now, with that in mind, there's another question: when we assess writing, are we assessing writing as technology or writing as content? I would argue that we just smush the two together and do a bad job assessing both. Most of the time in undergraduate courses, especially outside first-year composition, writing is just the medium of assessment. It's just the technology by which we're assessing something else. But there are a lot of romantic notions wrapped up in writing as the conduit to the soul, or the conduit toward the epistemic. And when you start to ask what we are really assessing and whether it has to happen in writing, a lot more questions get implied. In composition, for example, we are assessing argumentation, we're assessing rhetoric, we're assessing citation practices, we're assessing information literacy; what are the other ones? There's so much that we cram in. And we don't want to assess grammar, but sometimes we want to assess grammar. So, to use the rhetorical language, it's an amazing moment of kairos, this critical opportunity where we can say: what is the most important thing we want our students to walk away with? What are the outcomes? I personally couldn't care less whether students can write an essay of any strength. Unless they're academics, they will never write an essay again. And if they are academics, they're going to write essays so shaped by the genre conventions of their individual discipline, or the journal, or whatever the medium is, that the ability to transfer any generalized essay writing across that is fictional. It's imaginary. So for me, this is an incredible moment to step back and say: rhetoric is incredibly important. Being able to think about differential audiences, about differential media, about interacting with non-human audiences around very clear goals, where we need to rapidly gather contextual information, which we can do through AI. It's incredibly exciting. And everyone I know, and I know a lot of compositionists, even in composition, even my colleagues in technical communication: once we start using AI to augment our writing, we don't go back, because it doesn't make sense not to.
There's too much more we can do using it. I don't think replacement is viable. But to back up to your question: for K-12, this has huge, huge implications. I don't see anybody getting rid of first-year composition as a writing class anytime soon in higher ed, except for a few outliers, especially those I have influence over. But in K-12, we need metrics that are accessible, that allow us to chart learning progression. Some of those goals have a backwards influence and drive our curriculum. I'm not a K-12 expert, but one of the reasons the essay has stuck around for so long is that it has structural and semantic components that are easy to assess. And so we can say this person is here, this person is here. It's a great classificatory tool. Should it stick around? I don't think so. I think it's time that we think about what it means to compose, what it means to do analysis. We look at how things are put together, how things are taken apart, and then how we can, to use Aristotle's definition of rhetoric, look at all of our available resources to best accomplish our ends.
**[27:43] Anand Rao:** That's not controversial at all, Adam.
**[27:49] Anand Rao:** I think you're really on the right track, in terms of my experience teaching writing-intensive courses and talking with colleagues. But there's another aspect of this I wanted to tease out, if you would. Earlier in the discussion, you mentioned the need to appeal to students' intrinsic motivation, to help them find their agency, or at least to underscore their agency within the process. When I talk to colleagues, one of the concerns is that students need to understand it's good for them to learn how to write. They still need to learn the process, in part because of the development of arguments, the thought process, the interrogating of ideas. I think that's a fair point, and a number of people are concerned about it. But the worry is: what's the motivation for students to do that when they can cut corners? So it gets back to that idea of motivation. This is something Stefan and I have talked quite a bit about. I'm really concerned about it, in part because we don't really have an educational system that helps build that intrinsic motivation. It seems to sap it a bit. You're told to jump through hoops. You're told to play a certain game instead of pursuing your own education. I'll leave this wide open; there's a lot there to play with. But how do you see appealing to that intrinsic motivation? How do you see affirming a student's agency, in particular with AI?
**[29:09] Adam Pacton:** This is a fantastic question, and I'm so happy you asked it, because this is my soapbox right now. I'll say a couple of things. One thing you said that sticks out to me is: we want to teach writing because it's good for them. I always see a red flag pop up there, and my mind says, "Why? Why is it good for them?" And I know there are reasons: externalized cognition, argumentation. But the other part of me says, well, informal logic is good for that too. So is formal logic. In any event, there's a place where play with AI works like Alexander's sword through the Gordian knot, and I don't know why it's all ancient metaphors today, but it just happens to be that way. This is one of those spots where there's low-hanging fruit that people are missing. They're thinking, "I have to revise the entire curriculum. I have to make massive systemic changes." But there's fruit hanging so low that, if you've played with AI, it's going to sound immediately familiar. OpenAI listed the most popular prompts at the end of the spring semester, and this is a variant of one of them. It's basically, "I don't care about this assignment. Make it interesting." What I've done is build on that in my college composition courses, and I now give my students a very structured prompt that does a couple of things. One, it utilizes a flipped interaction pattern. For our listeners, that just means the AI asks you the questions to gather information from you, instead of you just inputting. It's great for breaking up the "Googlification" of AI, where I give you a thing and you give me back what I want, and for building that contextual prompt. In any event, the prompt I encourage my students to use continuously essentially says: "I'm not naturally interested in this topic. I want you to ask me questions about what I am interested in. Include hobbies, include books I like, include movies I like, include games, and any other questions you think will help you understand what I'm interested in, what motivates me, what excites me. Once you have that to a 95% confidence rating, I want you to re-present this assignment and link it to my interests." There's one approach where we say: yes, you have to do this, we know you have to do this, this is just the onerous part of this particular point in your educational process, like dividing the writing process into five different stages, for example. But when you can put the creation of the intrinsic-motivation feedback loop for any assignment onto an AI, and it's not really offloading, you have the chance of making that assignment a value-add for the student, which can short-circuit academic integrity issues. It can short-circuit offloading by the student. Because, for me, I almost failed out of high school because I have that particular type of mind: I need to know why I'm doing this. If I think the reason is stupid, I'm not going to do it.
And so that kind of feedback loop, which can link any of the work you're doing to your interests in structured ways, is the really low-hanging, immediately accessible way to get students to continue doing the work while we think through some of the larger structural, disciplinary, epistemic, and administrative vectors that are a lot hairier.
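For readers who want to experiment with the flipped-interaction pattern Adam describes, here is a minimal sketch written against the OpenAI Python SDK. The model name and the exact prompt wording are illustrative assumptions, not Adam's actual classroom prompt.

```python
# A minimal sketch of the flipped-interaction prompt pattern described above.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in
# the environment; the model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()

FLIPPED_INTERACTION_PROMPT = """\
I am not naturally interested in the assignment below. Before doing anything
else, interview me one question at a time about my hobbies, the books, movies,
and games I like, and anything else that helps you understand what motivates
and excites me. Do not start the assignment yet. Once you are 95% confident
you understand my interests, re-present the assignment and explicitly link it
to those interests.

Assignment: {assignment}
"""

def start_flipped_interview(assignment: str) -> str:
    """Send the flipped-interaction prompt; the model should reply with its first interview question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever your campus license provides
        messages=[{"role": "user",
                   "content": FLIPPED_INTERACTION_PROMPT.format(assignment=assignment)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(start_flipped_interview("Write a rhetorical analysis of a recent public speech."))
```

In a chat interface, you would simply paste the prompt text directly; the point is the structure, with the interview coming first and the assignment re-presented only after the model has gathered the student's interests.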
**[33:18] Anand Rao:** Boy, that seems like a great use of the memory feature in most of these platforms. If a student can set it up so the platform understands their interests, their hobbies, their passions, then it can take the work and find ways to tie it in. I like that a lot. I'm going to try that in my own classes.
**[33:34] Adam Pacton:** Yeah, that's the supercharged way of doing it. You can do it with a single prompt, but yes, using that memory feature. I recently posted on LinkedIn, sharing a prompt that OpenAI originally shared on their TikTok: "Tell me something that you, the AI, have been trying to tell me, but I'm just not hearing. I'm just not listening." And it blew up, with people saying, "I feel like I've gone through a year of therapy. It's so clear." So that leveraging of the memory feature goes even further than I had thought as a pathway to intrinsic motivation. And if you go further with custom instructions and say, "Okay, now you have this, here are your data points. As we're working on other work, remember this and help me," you can scaffold metacognition into intrinsic motivation. I love it.
**[34:16] Anand Rao:** Maybe the next step, then, as a faculty member: if my students have their memory features set up that way, to understand their interests, I don't give everybody the same assignment. It's automatically tailored for each student.
**[34:25] Adam Pacton:** Yes, that's good. I'm going to steal that. Let's build that platform. That'd be fun.
**[34:49] Anand Rao:** Absolutely. Oh, Stefan, you're muted. It's interesting, too, just a quick reflection connecting that to a little bit of the discussion earlier: we want to make sure students have tools, right, tools they can use. Well, in order to integrate that memory instruction into an assignment, they need to be working in a system that has that memory feature, right? A lot of times we're just thinking about equity in terms of access, or whether they can all use the same thing. But as more instructors integrate AI tools into the work, the tools need to be able to help and empower the students in the way you described. But we've been on some technical notes here. On a more personal note, I noticed that you post on LinkedIn about once a week mentioning discussions with your teen about AI. I think that's great, because I actually try to talk to my teen about AI, but he got a little tired of it, so I wait for him to raise the issue, and then I just go. I think it's really awesome that you're having these weekly discussions. What have they said that has surprised you the most, or given you a fresh perspective on the technology?
**[36:21] Adam Pacton:** Yeah. First, a caveat: we have both parents in the house, we're both teachers at the college level, my wife is a best-selling commercial author, we have a small echo chamber, and we homeschool. So there's a lot of high-level discourse just all the time. But one of the things we are continuously doing in those weekly lessons, by the teen's request, is combining rhetoric and AI. That is baked in from the foundation, and it also starts with their interests. One theme I see over and over again, which is a clarion call for educators, is the desire to maintain agency, and voice as a reflection of maintained agency. I think even Sam Altman recently, in an interview, was telling these horror stories about teens who won't make any decision unless they're asking AI, absolute decision offloading. I don't know which teens he's talking to. Most teens that I know, and I know I was this way as a teen, need self-determination, developmentally. And most teens are canny enough to see that AI can get in the way of that. So showing my teen, and showing others, here are the ways we increase, expand, and elaborate agency: that creates the buy-in immediately. For my teen, one of the things we do in any given lesson is talk about what they're interested in. We start there and we'll do a deep research run, get a deep dive, and then I'll ask, "How do you want to approach this? Do you want to play a game? Do you want a traditional report? Do you want a mystery? Do you want this information spun up as a Dungeons and Dragons game, so the story spills out of it?" The fears around agency and efficacy are real, but AI as replacement? I don't see it. I think that's a rhetorical position people use to forward a product or a very particular agenda. We can use AI to expand agency, or we can use it for full cognitive offloading. Which one happens is on us, in terms of how we teach people to use this, how we acculturate it, and the general culture of use around it.
**[39:31] Anand Rao:** I totally agree with you. But I guess my one caveat might be that I am concerned that not all educators have the desire, or necessarily the means, to offer that agency or recognize that agency. I think about an overloaded public school teacher who isn't able to think about personalizing assignments, and unfortunately that becomes a situation where students feel like it's thrust upon them. They don't really have any say, and they're more likely to look to offload it. So that might be a separate point, just as an aside; it's something I have a little concern with. But I think I'm with you on the idea that there are ways to reach that agency and to affirm that agency, and students really want it and need it. I did want to ask a bit of a follow-up, thinking about the way teens are using the technology. We've always heard the phrase "digital natives," and I think that's always been a bit of a misnomer, and I'm concerned that some of our colleagues might assume, "Well, they're digital natives. They know how all of this works." Speak a little bit to that. When you see students in the college setting, can we assume they know how AI works and how to use it effectively? Where do we need to meet them? Obviously, we talk about AI literacy, but it seems like it's a little more than that for some of these students.
**[40:47] Adam Pacton:** Absolutely. I'm not going to quote the person, because I'm not sure if they said this or not, but a very famous person in the space said, you know, "We need to think less about teaching people how to use AI in these really structured ways. You just need to go chat and have a discussion with it." Well, if you have a PhD that touches on communication, how you chat is going to be very different from someone who doesn't have that level of education. And I think there is a danger in AI moving like water as you interact with it. A lot of teens who haven't had direct education in it, and a lot of adults, have discussions with it where, and I forget the term for this, when one technology sufficiently resembles another, we transfer over some of our rules, like between Google Sheets and Microsoft Word. What we've seen behaviorally is that the chat is either direct input-output, input-output, input-output in a very simplistic way, or there's over-anthropomorphization. And I'm shocked I got that out in one try this late in the day. So what I've done with my teen, and this has changed at various points, is talk about prompting frameworks as ways to get started. People are talking about contextual prompting now, but I use a very simple one, which is RATIO. What's the Role? Who's the Audience? What's the Task? What are the Instructions? What's the Output? Giving just that little framework, that little hat, for lack of a better term, radically adjusts the behavior (see the sketch below). And as we interact with users, it's the same thing for teens as it is for adults. This is core to understanding: this is not analogous to technology we've experienced in our lifetime. For perhaps the first time since the term was coined and became a theoretical principle, we're living through, and are aware of, a paradigm shift, fundamentally, in terms of technology, epistemology, aesthetics. If we can give people frameworks that speed up on-ramping in structured, safe ways, we're going to see a huge raising of all boats. If we expect old-school Fordist manufacturing approaches to education to work with AI, we're in the weeds. We're lost. And users who are deeply in the tech context from a very young age are going to move through it without the same reflective apparatus that we have. So we have to give them the apparatus that can make visible what's happening, and in that, give them greater agency, but sell them on it too. Like, hey, and I'm not going to out myself in what games I play, but this is like playing very complicated video games where you have to learn progressions. You can play through and have fun, but if you want to engage, have better control, and move deeper, you have to know a few structural things.
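As a concrete illustration of the RATIO framework Adam describes, here is a minimal sketch of it as a reusable prompt template. The field values are invented examples, not taken from his lessons.

```python
# A minimal sketch of the RATIO prompting framework (Role, Audience, Task,
# Instructions, Output) as a reusable template. The example values below are
# illustrative assumptions, not Adam's actual prompts.
from dataclasses import dataclass

@dataclass
class RatioPrompt:
    role: str          # who the AI should act as
    audience: str      # who the output is for
    task: str          # what to do
    instructions: str  # constraints and process to follow
    output: str        # the required format of the answer

    def render(self) -> str:
        """Assemble the five RATIO fields into a single structured prompt."""
        return (
            f"Role: {self.role}\n"
            f"Audience: {self.audience}\n"
            f"Task: {self.task}\n"
            f"Instructions: {self.instructions}\n"
            f"Output: {self.output}"
        )

prompt = RatioPrompt(
    role="a patient writing tutor",
    audience="a first-year college student new to rhetoric",
    task="explain the rhetorical appeals used in the op-ed I will paste next",
    instructions="ask me one clarifying question before you begin, and keep jargon minimal",
    output="a 200-word summary followed by a table of appeals with one example each",
)
print(prompt.render())  # paste the rendered text into any chat interface
```

Even without code, the same five labeled lines typed directly into a chat window give the model the "little hat" Adam mentions.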
**[44:28] Stefan Bauschard:** All right. I'm going to ask you one question, then we're going to pause for a second, because I have to get up and plug in. So I'll give you a minute to answer. I want to follow up on two ideas: one is agency, and the other is the paradigm shift. Intersecting both of those, we now have something that works. There used to be something called Operator, which could go out and do these things for you, and it didn't really work that well and nobody really used it. But a few days ago, OpenAI released Agent, which actually can go out and do things. The first time I tried it, it worked for 30 minutes and produced a pretty decent result, which shocked me. I had assumed it was going to time out and not finish, because it just kept thinking. And then it did all three things I requested and gave me, I'd say, pretty reasonable output. I taught a class with it that night. I teach an entrepreneurship class with a business school graduate, and that's what I asked it to prepare. He said, "Yeah, that's basically the type of material you would need for a pitch," and he thought it was really reasonably strong for a sample. So when we talk about risks of agency going away, and about this paradigm shift, I say Agent intersects both issues, because you can basically ask it to go out and do all these things for you. It's obviously going to be a paradigm shift in how people work, whatever work they continue to do, and even in the way you work, right? This is a brand new product that worked for 30 minutes; in a year it's going to work for a day or two. So I'm wondering if you might reflect on that, coming into this fall. It went into the $20 account, I think today, right? That's affordable for a lot of students. How do you think it might impact agency generally, the paradigm-shift work being done on campus, and maybe even faculty awareness of this new capability? But I'll give you a minute to think and answer.
**[46:42] (Brief pause)**
**[47:47] Adam Pacton:** I think the questions about Agent and Operator and agentic AI in general are critical, great questions. And this is the one place where I will not seem as much like an AI evangelist as I may in some of my other remarks. I started playing with Agent last night; I think it went live pretty late for Plus users. My big reflection on it is that it pulls deep research in pretty effectively for people who are not yet familiar with using deep research. In terms of it making decisions, one of the things I keep coming back to is: what does it mean for the AI to make those decisions if oversight is necessarily constant? A lot of people are using fairly innocuous use cases around Operator, but for anything that requires even a modicum of human oversight, there's a big disconnect between what people see at a very high level organizationally and what individuals are seeing within their roles. I think Stanford HAI just released a few things around that disconnect. I personally am not at the point where I trust the AI to make these decisions yet. I've used it long enough to know that those choices are not always going to align very well. Generally, I am agnostic about worries that AI is a stochastic parrot; I made my peace with Searle's Chinese room experiment decades ago. But with agents, that's a different question entirely. Intentionality, meaning: this is where things start getting hazier for me, and that's just personally. Institutionally, that is an even stronger place to hit the brakes. I work for a multi-billion-dollar, massive institution, and given the diversity of AI use, the layers of security, and the complexities of SSO interoperability, I think any institutions that rush to be first movers on agentic AI are going to have huge heartbreak. There are going to be a lot of problems, especially as we see stories like yesterday's, I think, about an agent that deleted its company's entire database. So I'm a firm believer in humans in the loop. I don't think we're at a place now where we can identify with clarity the places where we can safely remove humans from the loop. I don't even think we're there for agents doing basic research, not primary research, just basic secondary research. So I'm going to wait and see on that. I don't feel the pressure to be a mover on agentic AI. As I said, those who really rush out of the gate on it are leaving themselves open to a lot of vulnerability.
**[51:41] Stefan Bauschard:** Yeah, it'll be an interesting question, beyond whether you want to lead on it: what kind of use of these tools happens anyway, even kids using them in their own systems on your network, right? University networks have obviously become quite secure, but the multi-billion-dollar institutions probably have more layers of security, and uses beyond, what some others have. And we have cyber issues generally in society. So I think this is really going to open the door. I will say, when I showed a couple of people this, it really raised their awareness of the capabilities of AI. Okay, maybe the agent wasn't fully autonomous, but it really shocked people. And Anand told me that just the agent in the Comet browser was able to complete an assignment for a student and type it slowly into their Google Doc, for the teachers who are still relying on the history of the Google Doc to see whether the student was writing. That's obviously a pretty simple use, and probably not much of a huge security issue. But it gets at the question that, regardless of whether institutions want to be forward-looking in helping students learn how to use these tools and integrating them into the curriculum, students are just going to show up on campus with a Comet browser, which will probably be in the $20 subscription, and there are plenty of invitations floating around, and put their assignment, or a screenshot of their assignment, into the browser and have it completed. So just as a follow-up: do you see these as possibly being very disruptive academically, where you're still prompting it, but not in the same way?
**[53:49] Adam Pacton:** Yeah. I think this is a similar species of disruption to what we've been seeing since 2022. It's going to be more ubiquitous, and the bar to entry is much lower. But again, I'm going to circle back and say: it is intrinsic motivation and ROI. These are things that we in education have not, well, some people have dealt with this and tried to show it and make it legible for students for a long time, but for many, many people that hasn't been the primary focus. The primary focus has been, "Here's your content. This is what you have to know. These are your outcomes. You have to achieve this." But now, if we want to be serious about securing our assessments, if we want to be serious about co-creating education with students, which I think we should be, then we have to find those intrinsic-motivation entryways and we have to work to show what the ROI is. We have to show students: "Look, you actually have to know how to put together an argument, rather than just telling ChatGPT, 'Make me an argument.' You have to know to ask for the different parts of an argument. You have to know that it's going to satisfice you." When we can show, "You need to know X, and here's why, here's what you're going to get back," and we meet students where they are, whether through a basic intrinsic-motivation exercise like I described earlier, or through on-the-fly personalization, leveling, and all these other affordances AI gives, that's what we have to confront to meet that disruption. Because I work with faculty who are not as immersed in AI but who are still teaching writing. They say, "Hey, I know this student used AI. It has all these tells." Fantastic. You can't accuse them; you will be sued. We can't prove it, because of humanizing AI tools. So work with what's in front of you, and let's get that curriculum changed so students actually care enough to do what's most important. And then we'll assess that.
**[56:14] Stefan Bauschard:** We've taken so much of your time, and I know we need to let you go soon, so we'll get to our final two questions. Adam, for the educator or administrator listening who feels both inspired and completely overwhelmed, what advice would you give them? What's the single most important thing they can start with this week to move in the right direction?
**[56:40] Adam Pacton:** My advice, and this is my advice as an educator, and my advice for life in general, is play. Use play as your entryway. Get onto an AI and get it to understand you in context. Use that exercise we talked about: "Ask me questions to learn what I'm interested in." As simply as that, it asks questions and gets to know you a little. Then: "Tell me something you think is going to blow my mind," and go down the rabbit hole. In our family, one of the things we do a lot is go on country drives, and before we go, we'll get on Perplexity and say, "Hey, give me a deep research on [some really wild, overspecific thing]." I think one of ours was, "Give us a deep research on the Dresden Files book series' conceptions of the courts of Fae; give us background, give us quotes, and introduce us to all the characters." And then we'll go. Play is the natural way to flow states, and we want more flow, so when we experience that, we're going to want to come back. As we do that, it becomes easier to look around and say, "Oh, this is a pain point over here. I'm going to work with AI on this a little bit." But if we start from "add more things, do more things," no one wants to get started with that. That's been the promise of technology for hundreds of years now: it should make our lives easier. So if we're just adding on as a bureaucratic or checkbox thing, forget it. Play. That's the best advice I can give.
**[58:31] Stefan Bauschard:** Excellent.
**[58:31] Anand Rao:** So why don't I take the last one? Look, your comments here are incredibly insightful, and I really love the way you've articulated some of the key concepts and ideas. Where else can people go to follow your work? We can share your LinkedIn, and maybe some other resources on how to access the newsletter I know you're working on. Would you give us a quick preview of some concrete things people can read or access that you're working on, and where they can find you?
**[59:20] Adam Pacton:** Yeah, absolutely. And I'll even share one of my Wizard of Oz tricks with the listeners. I make use of Perplexity and ChatGPT scheduled tasks to get very focused news stories every day, on very specific topics, delivered right to my inbox at very specific times. And then LinkedIn. I was telling a colleague at a different institution today: LinkedIn is not what people think it is anymore. It is a different creature entirely. I tend not to publish traditionally anymore, because it's so slow, and on LinkedIn, as you both know, the speed at which information, resources, and community can be shared is incredible. So come find me on LinkedIn. I'm on there daily, putting out and sharing articles and resources. That's where I'll be sharing our upcoming newsletter from CISA, "Humans in the Loop." That'll be coming out, I think, every two weeks, starting in August. We'll be featuring lots of voices to follow, lots of new articles, lots of prompts for educators and non-educators alike to try. Drop me a comment or a DM and come join the conversations.
**[1:00:47] Stefan Bauschard:** Adam, thank you so much for taking the time to talk with us. We've learned a lot, and I really look forward to following your work and that newsletter as well. Hopefully we'll have you back to talk more about how you've implemented some of these ideas.
**[1:00:59] Adam Pacton:** Yeah, thank you so much for having me. It was a pleasure chatting with you both today.
**[1:01:03] Anand Rao:** You're welcome. Thanks.
**[1:01:04] Stefan Bauschard:** Thanks. Take care.
**[1:01:05] (Outro Music)**

