This morning, I sat down to do what I often do: produce a basic debate topic analysis.
Today’s topic: whether the US should enter into a free trade agreement (FTA) with the European Union (EU).
After almost 40 years in debate, I can write up a review of the major arguments on a topic pretty quickly, and current AI tools have only turbo-charged my ability in this area. I usually know a lot about the topics already, so I can easily direct the writing and research tools to help me, and I need only minimal fact-checking and reference-checking.
Anyhow, today, after reviewing some articles on the pros and cons of the deal, I came across an argument that an EU-US FTA would likely result in fewer regulations on genetically modified (GM) foods in Europe. I’ve been through the GM debate before on other topics, but my “backfile” on this one was old, so I asked Perplexity (using Claude Opus as my model of choice as a Pro subscriber) for some references, using a simple prompt.
Seriously? It can’t give me sources that say GMOs are dangerous and need to be regulated? While I agree with its bottom-line conclusion that most people think they are safe, and that even the EU has moved to soften its regulations, I hardly agree that providing sources for the claim that GMOs can be considered dangerous and need to be regulated constitutes “spreading misinformation not supported by scientific evidence.”
Let’s try Google Scholar.
It just gives me the articles. It’s not so judgmental.
The second one is even from a good source with highly qualified authors.
I certainly wouldn’t have been so annoyed, or taken the time to write this, if it had happened just once. But it did it again!
I simply asked it for sources that support the argument that disrupting US-EU relations would be good because it would support European “strategic autonomy,” something some people advocate.
Again, it couldn’t hold back, thoroughly presenting a contrary opinion I didn’t ask for, and even telling me that I was probably wrong on the net — “this risks damaging transatlantic unity if taken too far. The key is finding a balance where a more strategically autonomous EU complements rather than competes with NATO and the US alliance.”
Again, Google doesn’t judge me.
So, I thought I’d try the ultimate test.
Huh? While I certainly agree that there is at least a consensus that the earth is warming, and something close to a consensus that humans are contributing to it (with the size of the human contribution being subject to debate), I could not disagree more with the claim that arguing the problem may be exaggerated constitutes “misinformation.” There are enormous debates on the extent and impact of the problem. It’s Perplexity’s (or Opus’s) conclusion that there is a consensus on this that is misinformation.
Anyhow, a few observations:
Traditionally, our search engines have helped lead us to information from which we can draw our own conclusions and judgments. With paid ads occupying the top rankings on the main results page, they certainly influence our judgments, but they don’t tell us what to think about the topic we are researching.
I’m generally a big fan of these new tools. I like that they, like the internet (as a high school and college student, I had to go to the library to find information), make it much easier to find what we are looking for, and that they even help us write. But if they are now going to tell us what to think, we are offloading judgment, an essential skill for both students and adults to develop and use. AI can instantly provide information and answers (and the accuracy of what it provides will grow), but we still have to reach our own conclusions and judgments. Otherwise, we are just letting the AIs produce the information, analyze it, render judgments, and then (eventually) act on those judgments. I guess if we are comfortable with that…
How are these conclusions being drawn, and what is the source? I’ve never been a fan of citing AI the way APA and MLA suggest. Why? Because I take responsibility for any of the writing it produces that I publish, and because I would never cite it for a factual claim (I verify those first and then cite the original source). But if AIs are going to make judgments, and students, at least, are going to treat those judgments as real and rely on them, then they should certainly cite them!
I don’t think this is good. AIs, at least for now, can share a lot of knowledge, and that knowledge is getting more accurate thanks to several advances, but as far as we know, AI is probably not yet sentient or conscious. It appears to be making a judgment here, but it doesn’t yet seem to have the capacity to genuinely do so.
AIs are still homogeneous. I have been fortunate to serve as an advisor on a project, alongside people with more technical skill than I have, that is working to individualize AIs, including helping them develop their own belief systems and values. This is an important project for several reasons, but we aren’t there yet (and no one is). Until that happens, I’m not keen on being told what the consensus is, especially when I didn’t ask.
I do understand where this comes from. During the pandemic, we didn’t want our search engines to provide directions for people to drink bleach if they became infected with COVID-19, but if today’s AIs can’t distinguish between that and GMOs, European strategic autonomy, and the impacts of climate change, then I don’t think they are intelligent enough to make judgments and/or share their opinions without me at least asking first.
Have you tried Arc Search? It takes this one step further and reads websites for you, outputting a new custom-made site just for you.