A Few Good AI "Debate Cards"
When I started debating in the mid-1980s, we would arm ourselves with a series of quotations to support our arguments.
To organize the quotes, we would write or type up the supporting quote on an index card, “tag” the card — put a summary statement at the top of the quote — and then organize the index cards in a file drawer or briefcase system (yes, we were/still are nerds).
The quotes we used became known as “cards.”
As technology advanced, these were typed up on mimeograph paper so multiple copies could be made and more easily shared with teammates. On the mimeograph paper, we organized our “blocks” — sets of what you can think of as organized responses to a major argument, supported by quotes/“cards.”
As photocopying became ubiquitous, we would copy library articles and books, physically “cut” the quotes from the photocopied articles, and tape them to papers we would then photocopy to share with our teammates.
Copying text directly from e-books and web articles (“copy & paste”) replaced the physical cutting of the quotes.
Debaters, however, still printed their word processing files to read their “blocks” in debates, until systems built around Word macros enabled the easy organization and manipulation of the documents and “cards” used in debates.
Despite this progress, including the complete elimination of printing, debaters still “cut cards” (just electronic ones) and organize blocks. They also “tag” their cards.
These are some samples of “cards” I “cut” and “tagged” about AI.
I hope you enjoy the cards!
__
Technology will emerge that enables machine intelligence to exceed human brain power
Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card
The 2030s and 2040s: Developing and Perfecting Nanotechnology It is remarkable that biology has created a creature as elaborate as a human being, one with both the intellectual dexterity and the physical coordination (e.g., opposable thumbs) to enable technology. However, we are far from optimal, especially with regard to thinking. As Hans Moravec argued back in 1988, when contemplating the implications of technological progress, no matter how much we fine-tune our DNA-based biology, our flesh-and-blood systems will be at a disadvantage relative to our purpose-engineered creations.[45] As writer Peter Weibel put it, Moravec understood that in this regard humans can only be “second-class robots.”[46] This means that even if we work at optimizing and perfecting what our biological brains are capable of, they will be billions of times slower and far less capable of what a fully engineered body will be able to achieve. A combination of AI and the nanotechnology revolution will enable us to redesign and rebuild—molecule by molecule—our bodies and brains and the worlds with which we interact. Human neurons fire around two hundred times per second at most (with one thousand as an absolute theoretical maximum), and in reality most probably sustain averages of less than one fire per second. By contrast, transistors can now cycle over one trillion times per second, and retail computer chips exceed five billion cycles per second.[48] This disparity is so great because the cellular computing in our brains uses a much slower, clunkier architecture than what precision engineering makes possible in digital computing. And as nanotechnology advances, the digital realm will be able to pull even further ahead. Also, the size of the human brain limits its total processing power to, at most, about 10^14 operations per second, according to my estimate in The Singularity Is Near—which is within an order of magnitude of Hans Moravec’s estimate based on a different analysis.[49] The US supercomputer Frontier can already top 10^18 operations per second in an AI-relevant performance benchmark.[50] Because computers can pack transistors more densely and efficiently than the brain’s neurons, and because they can both be physically larger than the brain and network together remotely, they will leave unaugmented biological brains in the dust. The future is clear: minds based only on the organic substrates of biological brains can’t hope to keep up with minds augmented by nonbiological precision nanoengineering. Kurzweil, Ray. The Singularity Is Nearer (pp. 245-246). Penguin Publishing Group. Kindle Edition.
AI could lead to multiple catastrophes
Kai-Fu Lee & Chen Qiufan, August 2023, KAI-FU LEE is the CEO of Sinovation Ventures and the New York Times bestselling author of AI Superpowers. Lee was formerly the president of Google China and a senior executive at Microsoft, SGI, and Apple. Co-chair of the Artificial Intelligence Council at the World Economic Forum, he has a bachelor’s degree from Columbia and a PhD from Carnegie Mellon. CHEN QIUFAN (aka Stanley Chan) is an award-winning author, translator, creative producer, and curator. He is the president of the World Chinese Science Fiction Association. His works include Waste Tide, Future Disease, and The Algorithms for Life. The founder of Thema Mundi, a content development studio, he lives in Beijing and Shanghai. Lee, Kai-Fu; Qiufan, Chen. AI 2041
As in 1848, after a period of explosive growth and momentum, there came a period of counterrevolution. But this time, some of the loudest protesters are, paradoxically, also at the vanguard of AI development. In March 2023, a group of over 33,000 scientists, start-up founders, and corporate leaders signed an open letter calling for a six-month moratorium on training powerful AI systems due to their risks to humanity. This was followed by an even louder tocsin: a precise twenty-two-word statement from the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This stark statement was signed by some of the biggest names in AI, the likes of Sam Altman, Bill Gates, Geoffrey Hinton, Demis Hassabis, and Stuart Russell. Even those at the cutting edge of this technology are accelerating forward with their hands itching over the brake. Beyond the existential risk, reconstituting the world for AI 2.0 brings myriad new challenges. We explored some of the ones that give us white knuckles, like autonomous weapons, in chapter 7, “Quantum Genocide.” Two years later, the kind of harrowing scenario we imagined—with a swarm of autonomous drones—is not looking so far-fetched at all. Governments, companies, civil society, and individuals will have to work to mitigate these possible harms. The reality is that regulation will always lag behind innovation, and innovation is moving at light speed. On a technical level, there are still kinks to iron out. As I point out in chapter 3, generative AI is prone to so-called “hallucinations,” providing confident yet inaccurate answers to questions. Accuracy is vital or else public trust and acceptance will be impossible. Even when working correctly, AI 2.0 could be a Cambridge Analytica on steroids: a disinformation machine, personalizing and adjusting its message to persuade and influence swathes of the population. Previous election interference will be dwarfed as governments struggle to compete with bad actors, as I show in chapter 2, “Gods Behind the Masks.” The scalability that makes AI so promising also makes it dangerous. In the labor market, AI 2.0 will displace workers, resulting in inequality, loss of purpose, and possibly even social unrest—likely unleashing a justified rage against the machine. To avoid backlash, citizens must be supported as they future-proof their careers against the ever-lengthening reach of AI. Even environmentally there are issues. These models are trained with a vast amount of computing power, which uses electricity, water, and e-waste. In a warming world where water and energy are becoming more competitive, this usage could be hard to justify, stunting the industry. On a more spiritual level, some people are using generative AI to reincarnate their dead relatives, so-called “griefbots,” a topic I explored with celebrities in “My Haunting Idol.” Last year, Pope Francis, Rabbi Eliezer Simcha Weiss (a member of the Chief Rabbinate Council of Israel), and Sheikh Abdallah bin Bayyah (an Islamic scholar) met in Rome to discuss AI and the theological consequences of this development.
AGI (Computers smarter than all humans) by 2029
Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card
Cassandra: So you anticipate a neural net with sufficient processing power to be able to exceed all capabilities by humans by 2029. Ray: Correct. They are already doing that with one capability after another. Cassandra: And when they do that, they will be far better than any human in every skill possessed by any human. Ray: Correct. In one area after another, they will be better than all humans by 2029. Kurzweil, Ray. The Singularity Is Nearer (p. 287). Penguin Publishing Group. Kindle Edition.
AI reduces the risk of nuclear terrorism and accidental nuclear war
Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card
Still, there is reason for measured optimism about the trajectory of nuclear risk. MAD has been successful for more than seventy years, and nuclear states’ arsenals continue to shrink. The risk of nuclear terrorism or a dirty bomb remains a major concern, but advances in AI are leading to more effective tools for detecting and countering such threats.[20] And while AI cannot eliminate the risk of nuclear war, smarter command-and-control systems can significantly reduce the risk of sensor malfunctions causing inadvertent use of these terrible weapons.[21] Kurzweil, Ray. The Singularity Is Nearer (p. 270). Penguin Publishing Group. Kindle Edition.
AI enables a fast response to a bioterror attack and prevents human extinction
Kurzweil, June 2024, Ray Kurzweil is a renowned inventor, futurist, and author who has made significant contributions to fields such as artificial intelligence, optical character recognition, and speech recognition technology. Known for his predictions about technological singularity, Kurzweil has received numerous awards, including the National Medal of Technology, and currently serves as a director of engineering at Google, where he focuses on machine learning and language processing, The Singularity Is Nearer, page number at end of card
Biotechnology We now have another technology that can threaten all of humanity. Consider that there are many naturally occurring pathogens that can make us sick but that most people survive. Conversely, there are a small number that are more likely to cause death but that do not spread very easily. Malevolent plagues like the Black Death arose from a combination of fast spread and severe mortality—killing about one third of Europe’s population[22] and reducing the world population from around 450 million to about 350 million by the end of the fourteenth century.[23] Yet thanks in part to variations in DNA, some people’s immune systems were better at fighting the plague. One benefit of sexual reproduction is that each of us has a different genetic makeup.[24] But advances in genetic engineering[25] (which can edit viruses by manipulating their genes) could allow the creation—either intentionally or accidentally—of a supervirus that would have both extreme lethality and high transmissibility. Perhaps it would even be a stealth infection that people would catch and spread long before they realized they had contracted it. No one would have preexisting immunity, and the result would be a pandemic capable of ravaging the human population.[26] The 2019–2023 coronavirus pandemic offers us a pale glimpse of what such a catastrophe could be like. The specter of this possibility was the impetus for the original Asilomar Conference on Recombinant DNA in 1975, fifteen years before the Human Genome Project was initiated.[27] It drew up a set of standards to prevent accidental problems and to guard against intentional ones. These “Asilomar guidelines” have been continually updated, and some of their principles are now baked into legal regulations governing the biotechnology industry.[28] There have also been efforts to create a rapid response system to counteract a suddenly emerging biological virus, whether released accidentally or intentionally.[29] Before COVID-19, perhaps the most notable effort to improve epidemic reaction times was the US government’s June 2015 establishment of the Global Rapid Response Team at the Centers for Disease Control. The GRRT, as it is known, was formed in response to the 2014–2016 Ebola virus outbreak in West Africa. The team is able to rapidly deploy anywhere in the world and provide high-level expertise to assist local authorities in the identification, containment, and treatment of threatening disease outbreaks. As for deliberately released viruses, the overall federal bioterrorism defense efforts of the United States are coordinated through the National Interagency Confederation for Biological Research (NICBR). One of the most important institutions in this work is the United States Army Medical Research Institute of Infectious Diseases (USAMRIID). I have worked with them (via the Army Science Board) to provide advice on developing better capabilities to quickly respond in the event of such an outbreak.[30] When such an outbreak occurs, millions of lives depend on how quickly authorities can analyze the virus and form a strategy for containment and treatment. Fortunately, the speed of virus sequencing is following a long-term trend of acceleration. 
It took thirteen years after its discovery to sequence the full-length genome of HIV in 1996, but only thirty-one days to sequence the SARS virus in 2003, and we can now sequence many biological viruses in a single day.[31] A rapid response system would entail capturing a new virus, sequencing it in about a day, and then quickly designing medical countermeasures. One strategy for treatment is to use RNA interference, which consists of small pieces of RNA that can destroy the messenger RNA expressing a gene (based on the observation that viruses are analogous to disease-causing genes).[32] Another approach is an antigen-based vaccine that targets distinctive protein structures on the surface of a virus.[33] As discussed in the previous chapter, AI-augmented drug discovery can already enable potential vaccines or therapies for a newly emerging viral outbreak to be identified in a matter of days or weeks—hastening the start of the much longer process of clinical trials. Later in the 2020s, though, we will have the technology to accelerate an increasing proportion of the clinical trial pipeline via simulated biology. In May 2020 I wrote an article for Wired arguing that we should leverage artificial intelligence in order to create vaccines—for example, against the SARS-CoV-2 virus that causes COVID-19.[34] As it turned out, that is exactly how successful vaccines like Moderna’s were created in record time. The company used a wide range of advanced AI tools to design and optimize mRNA sequences, as well as to speed up the manufacturing and testing process.[35] Thus, within sixty-five days of receiving the virus’s genetic sequence, Moderna dosed the first human subject with its vaccine—and received FDA emergency authorization just 277 days after that.[36] This is stunning progress, considering that before COVID-19 the fastest anyone had ever created a vaccine was about four years.[37] As this book is being written, there is ongoing scientific investigation into the possibility that the COVID-19 virus might have been accidentally released after genetic engineering research in a lab.[38] Because there has been a great deal of misinformation surrounding lab-leak theories, it is important to base our inferences on high-quality scientific sources. Yet the possibility itself underscores a real danger: it could have been far worse. The virus could have been extremely transmissible and at the same time very lethal, so it is not likely that it was created with malicious intentions. But because the technology to create something much deadlier than COVID-19 already exists, AI-driven countermeasures will be critical to mitigating the risk to our civilization. Kurzweil, Ray. The Singularity Is Nearer (p. 273). Penguin Publishing Group. Kindle Edition.
US leadership key to global control of artificial superintelligence
Leopold Aschenbrenner, June 2024, SITUATIONAL AWARENESS: The Decade Ahead, https://situational-awareness.ai/, https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf, Columbia University, Bachelor of Arts, Mathematics-Statistics and Economics, 2017–2021, Valedictorian. Graduated at age 19, Fired from OpenAI for leaking
Superintelligence will give a decisive economic and military advantage. China isn’t at all out of the game yet. In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers? And will we manage to avoid self-destruction along the way? Superintelligence will be the most powerful technology—and most powerful weapon—mankind has ever developed. It will give a decisive military advantage, perhaps comparable only with nuclear weapons. Authoritarians could use superintelligence for world conquest, and to enforce total control internally. Rogue states could use it to threaten annihilation. And though many count them out, once the CCP wakes up to AGI it has a clear path to being competitive (at least until and unless we drastically improve US AI lab security). Every month of lead will matter for safety too.
[There are more “cards” below the fold, and subscribers will enjoy access to a weekly email full of “AI Debate Cards”]