AGI: Should More Than a Few People Decide How We Redefine Humanity and Our Relevance in the Next Few Years?
Should we have more time to adapt?
Sunday’s claim by Sam Altman that OpenAI is close to AGI has touched off a debate about imminent AGI. Many are willing to pass it off as “hype,” claiming he’s a “tech bro.” OK, but ad hominem attacks don’t win debates. And he’s far from the only one who believes in a fast time frame, even if most believe it’s later than 2025 or that his definition of AGI is too limited.
Back to AGI —
On Monday, the influential Allie Miller shared
This is not something we can just ignore because some people may not like Sam Altman.
The reality is that true Artificial General Intelligence (AGI) has been defined as systems capable of performing almost any human task as well as or better than all or nearly all humans (Kurzweil), and as “a shorthand for any intelligence ... that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence” (Ben Goertzel and Shane Legg).

Altman’s definition of AGI is more limited, focusing more on human-level job abilities.

The development of AGI under a broader definition — say, everything that all or nearly all humans can do, plus the ability to generalize — would represent the most profound transformation in human history, likely exceeding the combined impact of the agricultural and industrial revolutions. This transformation would ripple through every aspect of human society, fundamentally altering our economic systems, social structures, and even our understanding of ourselves as a species.
Under more comprehensive definitions of AGI (the full ability to reason, plan, and exercise common sense, and perhaps, though not necessarily, to act with intention), we could see:
The end of human labor as we know it, and a potential existential crisis of purpose. Imagine a world where all forms of labor—from complex scientific research to artistic creation to caregiving—can be performed more effectively, efficiently, and cheaply by AGI systems. The industrial revolution mechanized physical labor; AGI would mechanize cognitive labor. This isn't just about job displacement in specific sectors; it's about the potential irrelevance of human labor on a fundamental level. What will a society look like where the very notion of "earning a living" becomes obsolete? How will humans find meaning and purpose in a world where their skills and talents are potentially surpassed by machines in every conceivable domain? This could lead to unprecedented societal upheaval and an existential crisis for humanity, forcing us to redefine what it means to be human.
Unprecedented wealth creation, potentially coupled with unfathomable inequality. AGI could unlock levels of productivity and innovation far beyond our current imagination. Imagine new materials, cures for diseases, and solutions to climate change being discovered and implemented at an exponential rate. This could lead to an era of unprecedented abundance, potentially eradicating poverty and material scarcity. However, the question remains: who will control these AGI systems and the wealth they generate? If concentrated in the hands of a few, we could witness the emergence of unfathomable wealth inequality, potentially creating a new form of techno-feudalism where a small elite wields unimaginable power over the rest of humanity, who may be relegated to a life of dependence and irrelevance. This would exacerbate many societal issues that could become exponentially more difficult to solve.
The transformation of governance and warfare into a battle of algorithms. Imagine nation-states wielding AGI systems capable of strategizing, outmaneuvering, and outperforming human leaders in every aspect of governance and conflict. Diplomacy, resource management, and military strategy could become dominated by algorithmic calculations, potentially leading to a new era of hyper-efficient, but potentially dehumanized, global politics. Warfare could become a terrifying game of competing AGIs, capable of launching autonomous cyberattacks, deploying swarms of drones, and developing new weapons beyond human comprehension. The very concept of human agency in political decision-making could be eroded, replaced by a cold, calculating logic that may not prioritize human values or ethical considerations.
The redefinition of consciousness, intelligence, and life itself. If we create AGI systems that truly possess consciousness and self-awareness, it forces us to confront profound philosophical questions. What constitutes "life"? Do these machines deserve rights? Would they be our equals, our superiors, or something else entirely? The emergence of AGI could shatter our anthropocentric worldview, forcing us to acknowledge that we may no longer be the sole intelligent species on this planet—or even the most intelligent. This could trigger a profound identity crisis for humanity, challenging our long-held beliefs about our place in the universe and potentially leading to a complete re-evaluation of our values, ethics, and future as a species.
Given the profound and rapid societal transformation at stake, this essay will explore a simple case for a "forced pause" on AGI development, to allow responsible progress and to pace societal restructuring. The sheer scale of adaptations required for AGI—rethinking our economic structures, educational systems, legal frameworks, and even our fundamental understanding of human purpose—is too vast to be implemented at the current rate of technological advancement, especially when coupled with current political and geopolitical changes.
The reality is that if moving to AGI as quickly as possible breaks society, we will never capture its benefits.
It is true that such a pause has been called for before, and by people far more influential than I am, so what’s the point?
(1) I hope readers will meticulously examine each facet of potential societal upheaval detailed herein, from the obsolescence of traditional jobs and intellectual property to the very real existential threats, and honestly assess whether such a breakneck pace of change is genuinely sustainable, or whether a more deliberate approach is essential to ensure that AGI ultimately serves, rather than subjugates, humanity. My hope is that thinking about what this means for society will give people pause.
(2) Practical proposals are being made by influential people.
Ethereum’s Vitalik Buterin has issued a call to reduce worldwide compute by 99 percent.
Max Tegmark has made a persuasive case for advancing narrow AI development while avoiding AGI. In his presentation, he answers common objections to an international agreement to stop AGI. He points out that we have agreed to ban other advanced technologies, such as bioweapons and human cloning. As for China, he points out that it also wouldn’t want to lose power to AGI.
(3) I’m spreading the word. If you agree, share.
(4) When the original pause letter was published, AI was far less advanced, and very few people were aware that GPT-3.5 and GPT-4 were just part of a trajectory. Also, I think it’s fair to say that even the most knowledgeable computer scientists, including AI godfather Geoff Hinton, are surprised by the rate of progress.
(5) I think it’s important to raise awareness, and this is a good way to do it.
Anyhow, let’s consider how disruptive AGI would be.
AGI, Economic Disruption, and the End of/Change of "Work" as We Know It
In an AGI world, the economy would undergo radical restructuring. While initial concerns often focus on transitional unemployment, the reality would likely be far more profound—a permanent structural change in how human labor and value creation function. Traditional professions once considered immune to automation, including doctors, lawyers, and engineers, would face fundamental disruption, even if the jobs themselves are not eliminated.
The key here is that disruption doesn't just mean jobs disappearing. It means a profound change in how a profession is practiced, the skills needed, the value proposition of human professionals, and the overall structure of the industry. Even if doctors, lawyers, and engineers still exist in an AGI world, their roles and how they work will be fundamentally altered.
These professions were traditionally seen as safe from automation because they required:
Complex problem-solving. Diagnosing a rare disease, crafting a novel legal argument, or designing a complex bridge require intricate reasoning and the ability to deal with unique, nuanced situations.
Specialized knowledge. Years of education and experience build up a vast reservoir of specialized knowledge that was thought to be uniquely human.
Judgment and intuition. These fields often involve making decisions based on incomplete information, relying on experience-based intuition and professional judgment.
Human interaction. Doctors need bedside manner, lawyers need to build rapport with clients, and engineers need to collaborate with teams. These were thought to be inherently human skills.
AGI's ability to process, learn, and apply knowledge across diverse domains would render many traditional skillsets redundant.
Superior Knowledge Processing
Vast databases. AGI can access and process the entirety of medical literature, legal precedents, or engineering principles in a way no human ever could. This includes journals, textbooks, case studies, research papers, and more.
Pattern recognition. AGI can identify patterns and connections within this data that humans might miss, leading to more accurate diagnoses, more effective legal strategies, or more innovative engineering designs.
Real-time updates. AGI can constantly update its knowledge base with the latest research and findings, staying ahead of human professionals who struggle to keep up with the exponential growth of information.
Enhanced Problem-Solving
Cross-domain Application. AGI can apply knowledge from one domain to another. For example, an AGI could leverage knowledge from materials science to inform a medical diagnosis or use insights from biology to design a more efficient engineering system. This cross-disciplinary thinking is difficult for humans who tend to specialize.
Scenario simulation: AGI could run thousands of simulations to test different diagnoses, legal strategies, or engineering designs, identifying potential risks and optimizing solutions in a way that would take humans years.
Logic and accuracy: AGI is not prone to the same cognitive biases or emotional reasoning that can cloud human judgment, leading to more objective and potentially more accurate decisions.
Skillset Redundancy
Diagnosis and treatment planning: In medicine, AGI could potentially diagnose diseases with higher accuracy than humans by analyzing patient data (symptoms, medical history, genetic information) against vast medical databases. It could even suggest personalized treatment plans based on the latest research.
Legal research and analysis. In law, AGI could sift through mountains of legal documents, identify relevant precedents, and even draft legal briefs or contracts, significantly reducing the time and effort required by human lawyers.
Design and optimization: In engineering, AGI could generate multiple design options for a structure or product, optimize them for various parameters (cost, efficiency, durability), and even identify potential flaws before they are built.
Even if human professionals are still needed for oversight, ethical considerations, and human interaction, their roles will be dramatically transformed.
Shift in value proposition. The value of human professionals will shift from being primarily knowledge repositories and problem-solvers to being:
Interpreters and communicators. Explaining complex AGI-generated insights to patients, clients, or stakeholders.
Ethical decision-makers. Guiding the use of AGI within ethical boundaries and making decisions in complex cases where human values are paramount.
Human connection points. Providing empathy, understanding, and personalized interaction that AGI might not be able to fully replicate.
Increased efficiency and productivity. AGI will handle many of the routine and time-consuming tasks, allowing human professionals to focus on the most complex and challenging aspects of their work, potentially increasing overall efficiency and productivity.
New specializations. New roles will emerge that focus on working alongside AGI, such as AGI trainers, auditors, and ethicists. These roles will require a different set of skills that combine technical expertise with an understanding of human values and societal needs.
Democratization of expertise. AGI could make expert-level knowledge more accessible to a wider range of people, potentially reducing the need for highly specialized professionals in some areas.
In essence, AGI doesn't just automate tasks; it restructures professions. It challenges the traditional foundations of expertise, skill, and the value proposition of human professionals in these fields, even if it doesn't eliminate the jobs entirely. This is why it's a fundamental disruption.
*For those of you who follow the details of AI capabilities, you may be thinking that some of what is above can already be done, or will soon be possible, with AI, regardless of AGI. This is true, and it’s a reason it may make more sense to radically slow computing power than to first define AGI and then enforce a cut-off.*
AGI, The Loss of Intellectual Property and Innovation Compression
Intellectual property systems, long the backbone of current innovation incentives, would face obsolescence in an AGI world. AGI systems could rapidly generate alternative solutions to any problem, effectively circumventing patent protections while technically remaining within legal boundaries.
AGI's ability to rapidly generate alternative solutions throws a wrench into this carefully constructed system.
The Current Pace of Innovation (Pre-AGI)
Traditionally, innovation has been a relatively slow and iterative process:
Research and development. Years of research, experimentation, and development are typically required to bring a new product or technology to market.
Funding cycles. Securing funding for research projects often involves lengthy grant applications and review processes.
Prototyping and testing. Developing and testing prototypes is time-consuming, requiring physical construction, experimentation, and refinement.
Regulatory approval. Navigating regulatory hurdles and obtaining necessary approvals can take months or even years, especially in fields like medicine or transportation.
Market adoption. Even after a product is launched, it takes time for it to gain market acceptance and widespread adoption.
The AGI Revolution: Hyper-Accelerated Innovation
AGI has the potential to drastically shorten every stage of this process:
Instantaneous research. AGI can process all existing scientific literature and data in a given field almost instantly, identifying gaps in knowledge and potential avenues for research at speeds unimaginable to human researchers.
Rapid prototyping. AGI could design and simulate prototypes in virtual environments, eliminating the need for physical construction in many cases. This allows for thousands of iterations in the time it would take humans to build a single physical prototype. For instance, an AGI could design a new airplane wing, simulate its aerodynamic properties, refine it thousands of times, and have a near-perfect design in minutes.
Accelerated experimentation. AGI can design and analyze experiments at lightning speed, identifying optimal parameters and drawing conclusions from data far more quickly than humans. This includes things like drug discovery, materials science research, and climate modelling.
Automated problem-solving. Instead of incremental improvements, AGI could potentially leapfrog entire stages of development by coming up with entirely new and unforeseen solutions to problems.
Adaptive manufacturing. AGI combined with advanced manufacturing techniques like 3D printing could lead to on-demand production of customized products, further reducing the time from concept to market. For example, if an AGI develops a new material, it can immediately send the specifications to an automated factory for production.
For example, today it takes 10-15 years and billions of dollars to bring a new drug to market. AGI could potentially identify promising drug candidates, design clinical trials, and analyze results in a matter of hours, dramatically reducing the time and cost of drug development.
The AGI Innovation Cycle & Patents
Circumventing Patents with Ease
Understanding the invention's essence. AGI can analyze a patent, understand the underlying principle or problem the invention solves, and then generate alternative solutions that achieve the same result through different means.
"Inventing around." This process is known as "inventing around" a patent, and it's a legal way to avoid infringement. Currently, it's a time-consuming and costly process for humans. AGI can do it rapidly and efficiently.
Alternative designs in seconds. Imagine a patented drug with a specific molecular structure. An AGI could analyze the patent, understand how the drug interacts with the body, and then design thousands of alternative molecular structures that achieve the same therapeutic effect but are different enough to avoid patent infringement.
No reverse engineering needed. Unlike humans, AGI may not even need to meticulously reverse-engineer a patented product. It can use its vast knowledge base to develop alternative solutions from first principles, starting with the desired outcome and working backward.
The Speed of Innovation Outpaces Patent Protection
Rapid iteration. AGI can iterate through design possibilities at an astonishing speed, generating and testing new solutions in a fraction of the time it takes human researchers.
Patent filing lag. The patent application process is slow. It often takes years from filing to grant. By the time a patent is granted, an AGI could have already generated multiple superior alternatives, rendering the original patent practically useless.
"Flooding the zone." A company could use AGI to generate a vast number of patent applications for slight variations of a technology, creating a "patent thicket" that makes it difficult for competitors to operate, even if their technology is meaningfully different. This could stifle innovation rather than encourage it.
Undermining the Incentive Structure
Reduced exclusivity. If AGI can easily circumvent patents, the exclusivity they offer is diminished. This reduces the incentive for companies to invest heavily in R&D, as they may not be able to protect their investments.
First-mover advantage still exists but is shortened. While the company that first introduces a product to the market will still have an advantage, that advantage will be significantly shorter-lived if competitors can quickly deploy AGI to develop and market alternatives.
Shifting focus from invention to implementation. The emphasis might shift from inventing new things to being the fastest and most efficient at implementing and commercializing ideas, regardless of who originally conceived them.
AGI, the Concentration of Power and Wealth Inequality
The concentration of economic power would reach unprecedented levels. The limited number of companies or nations controlling AGI systems could achieve market dominance that dwarfs the influence of current tech giants. Network effects would be amplified to extreme degrees, creating winner-take-all dynamics that render traditional antitrust frameworks obsolete.
Wealth distribution could also become an acute challenge. As AGI automates most economic activity, traditional mechanisms of wealth distribution through labor would break down. Without proactive measures, this could lead to unprecedented levels of wealth concentration and societal destabilization. Policy innovations like Universal Basic Income, wealth redistribution mechanisms, or "AGI dividends" could become essential for maintaining societal cohesion. Nonetheless, there are incredible challenges to actualizing these.
Challenges in Conservative Political Environments
The emergence of more conservative governments does create challenges related to managing the transition to AI.
Skepticism of government intervention. Conservative political thought often favors limited government intervention and deregulation. Agile governance, which may require more active government involvement in shaping technological development, could be met with resistance based on the belief that it infringes on individual liberties or free markets. For example, conservatives might be more likely to oppose regulations on AI development, even if those regulations are intended to address safety or ethical concerns.
Focus on national security. While both conservative and liberal ideologies support national security, conservative viewpoints tend to prioritize it to a greater degree. However, they may be hesitant to embrace international cooperation on AGI governance, fearing it could weaken national competitiveness or sovereignty. For example, a conservative government might be reluctant to sign an international treaty on AI development if it is perceived as limiting the country's ability to develop AI for military purposes.
Inherent Slowness of Government Responses
Bureaucratic inertia. Government agencies are often large, complex bureaucracies with established procedures and hierarchies. This can make them slow to adapt to new situations and resistant to change. For example, it can take years to develop and implement new regulations, even in the absence of political opposition.
Legislative processes. Passing new laws or amending existing ones is a time-consuming process, involving multiple stages of debate, negotiation, and approval. This can make it difficult for governments to keep up with the rapid pace of technological change. For example, the legislative process in the US, with its checks and balances, can be particularly slow, making it difficult to pass comprehensive AI legislation in a timely manner.
Election cycles. Political priorities can shift with each election cycle, making it difficult to maintain a consistent long-term approach to issues like AGI governance. A new administration might overturn policies put in place by its predecessor, leading to instability and uncertainty. For instance, a change in administration could lead to a complete reversal of policy on AI regulation.
Lack of technical expertise. Policymakers often lack the technical expertise to fully understand the complexities of AGI and its implications. This can lead to poorly informed decisions or inaction due to uncertainty. For example, a government official without a background in computer science might struggle to grasp the technical nuances of AI safety research.
Influence of special interests. Lobbying by powerful industries can influence government policies, sometimes slowing down or blocking reforms that are not in their immediate interests. For example, tech companies might lobby against regulations that they perceive as stifling innovation, even if those regulations are necessary to address societal risks.
Judicial review. Government actions, including regulations, are subject to judicial review. This can lead to further delays and uncertainty, as courts may overturn or modify government policies. For example, a new regulation on AI could be challenged in court, leading to years of legal battles before its fate is decided.
Gridlock and partisanship. Especially in two-party or highly polarized systems, partisan gridlock can prevent meaningful action being taken even when there is urgency. The opposing party may block legislation simply to deny the governing party a win.
Lack of Support for Worker Adaptation
The emergence of more conservative governments can also create challenges related to managing the transition to AGI, particularly regarding worker retraining and social support programs, for several reasons:
Emphasis on individual responsibility and market solutions. Conservative ideology often emphasizes individual responsibility and the efficacy of free markets. This can translate to a belief that individuals should be primarily responsible for adapting to technological changes, including acquiring new skills. Consequently, there may be less support for large-scale, government-funded retraining programs, with a preference for market-driven solutions like private training initiatives or allowing market forces to naturally reallocate labor. This can lead to underinvestment in crucial retraining infrastructure, leaving workers vulnerable during the transition.
Concerns about the cost and efficiency of social programs. Conservative fiscal policy often prioritizes lower taxes and reduced government spending. Social support programs, especially large-scale ones designed to cushion the impact of job displacement due to AGI, can be perceived as expensive and potentially inefficient. This can lead to resistance towards implementing robust social safety nets, even if they are necessary to mitigate the negative social consequences of technological unemployment. Arguments may focus on potential disincentives to work or the creation of dependency on government assistance.
Distrust of bureaucracy and centralized planning: Conservative viewpoints often express skepticism towards large government bureaucracies and centralized planning. Worker retraining and social support programs often require significant administrative structures and coordination, which can be viewed with suspicion. This can lead to a preference for smaller-scale, localized initiatives or private sector solutions, even if these are less effective in addressing large-scale societal challenges. This can result in a fragmented and inadequate response to the widespread job displacement potentially caused by AGI.
Focus on economic growth through deregulation and tax cuts: Conservative economic policy often prioritizes economic growth through deregulation and tax cuts, with the belief that this will create jobs and opportunities. While this approach can be beneficial in some contexts, it may not adequately address the specific challenges posed by AGI-driven job displacement. The focus on general economic growth may overshadow the need for targeted interventions, such as retraining and social support, that are necessary to help workers transition to new roles in a rapidly changing economy. There is a risk that the benefits of growth may not be distributed equitably, leaving those displaced by automation behind.
These factors can contribute to a situation where conservative governments are less inclined to support the necessary worker retraining and social support programs needed to navigate the transition to AGI effectively, potentially exacerbating social and economic inequalities.
AGI and the Loss of Education
I believe the educational focus might need to shift toward special human qualities—such as empathy, ethical reasoning, critical thinking, judgment, and AGI collaboration skills. I’m offering summer programs for students to support that, but preparing the next generation for a future dominated by AGI requires a complete overhaul of curricula, with emphasis on adaptability, lifelong learning, and the development of wisdom over rote knowledge. Schools, many of which are stuck arguing over the best ways to catch students using AI in paper writing, are not prepared for this moment. At all.
They are not even talking to students about AI, let alone how to survive in an AI world.
AGI and Psychological and Philosophical Reckoning
The rise of AGI would force humanity into a collective psychological reckoning. As a species that has largely differentiated itself through intelligence, the emergence of superior artificial minds will challenge our sense of identity and worth. This could trigger widespread existential anxiety.
The advent of AGI would be unlike any other technological advancement humanity has faced. Throughout history, we've defined ourselves as the pinnacle of intelligence on Earth. Our complex societies, technologies, and art are all testaments to our cognitive abilities, setting us apart from other species. This perceived intellectual superiority has been central to our sense of self and our place in the world.
However, AGI surpassing human intelligence would shatter this foundational belief. Imagine encountering an entity that not only equals but exceeds our capacity for reasoning, creativity, and problem-solving. This encounter would force us to confront profound questions about the nature of consciousness, the value of human life, and our role in a world where we are no longer the most intelligent beings. This existential crisis could manifest in widespread anxiety, depression, and societal unrest as individuals grapple with a diminished sense of purpose and identity in the face of superior artificial minds. The very core of what it means to be human would be challenged, potentially leading to a period of profound psychological and societal upheaval.
AGI would require new frameworks for finding meaning and purpose in a world where humans are no longer the most capable intelligence. Philosophers, psychologists, and theologians would play crucial roles in helping society navigate this existential transition.
AGI, Safety Concerns and Existential Risks
The safety implications of AGI development are particularly concerning. The potential for recursive self-improvement means that once AGI reaches a certain threshold, human control might become impossible to maintain. Traditional safety mechanisms and scaling approaches, as discussed by researchers like Yann LeCun and Nick Bostrom, might prove inadequate against the complexity of truly general artificial intelligence, especially if the technology is developed and deployed faster than it can be regulated.
The challenge of aligning superintelligent systems with human values would become critical for the survival and prosperity of our species.
AGI, Global Power Dynamics and Scientific Advancement
Beyond societal concerns, AGI would transform global power dynamics, scientific progress, and environmental interaction. Nations with AGI capabilities could achieve overwhelming advantages, potentially reshaping international relations and creating new geopolitical risks.
The development of Artificial General Intelligence (AGI) could lead to one country attacking another due to a complex interplay of factors, including:
Preemptive strikes. A country that achieves AGI first might fear that other nations will use their own AGI development for hostile purposes. This could lead to a preemptive strike to neutralize the perceived threat, especially if the AGI is believed to be capable of rapidly accelerating technological and military advancements.
AGI arms race. The pursuit of AGI could trigger an arms race, with nations competing to develop and deploy the technology first. This could create an environment of mistrust and paranoia, increasing the likelihood of conflict.
Miscalculation or escalation. AGI could make complex decisions in military or strategic contexts. A miscalculation or misinterpretation of an AGI's actions by another country could lead to escalation and ultimately war.
Cyberwarfare and espionage. AGI could be used to conduct sophisticated cyberattacks against another nation's critical infrastructure or to steal sensitive information, potentially provoking a military response.
Economic disruption. AGI could give a country a significant economic advantage, potentially leading to trade wars or resource conflicts as other nations try to protect their interests.
AGI, Cultural Evolution and Ethical Paradigms
Cultural and social structures would undergo dramatic evolution. Traditional social hierarchies based on intelligence or capability might be upended, requiring new forms of governance and decision-making. Art, culture, and entertainment would transform as AGI becomes capable of generating and manipulating creative content at unprecedented scales and quality levels.
Moral and ethical frameworks would also require significant revision. Questions about the rights and responsibilities of artificial beings, the nature of consciousness and personhood, and the evolution of moral philosophy in light of superintelligent entities would become central to societal discourse. Humanity would need to consider whether AGI entities deserve moral consideration and what obligations we have toward them.
What if Sam Altman reads this and says we don’t have to worry?
Then OpenAI really isn’t close to AGI.
He’d also be lying. He recognizes there are existential risks.
What Can You Do to Slow Down the Race to AGI?
It's understandable to feel helpless in the face of such a powerful and transformative technology. However, individuals can take action to influence the development and trajectory of AGI.
1. Educate yourself
Understand the technology. Learn about the basics of AI, its potential benefits and risks, and the different approaches to its development. This knowledge empowers you to engage in informed discussions and make informed decisions.
Stay informed. Follow the latest developments in AI research and policy. Subscribe to newsletters, read articles, and attend talks and conferences.
2. Engage in Public Discourse
Voice your concerns. Share your thoughts and concerns about AGI with your friends, family, and elected officials. Write letters to the editor, participate in online forums, and attend public meetings.
Support responsible AI initiatives. Advocate for policies and regulations that promote the safe and ethical development of AGI.
3. Make Conscious Choices
Support ethical tech companies. Choose to do business with companies that prioritize ethical AI development and are transparent about their practices.
Be mindful of your data. Limit the amount of personal data you share online and be aware of how it might be used to train AI systems.
4. Develop Critical Thinking Skills
Question the information you receive. Develop the ability to critically evaluate information about AI, especially from sources with vested interests.
Promote media literacy. Encourage others to be critical consumers of information and to question the narratives presented by tech companies and the media.
While individual actions may seem insignificant, collective action can create meaningful change. By engaging in these activities, individuals can contribute to a more cautious and responsible approach to AGI development, ensuring that it serves humanity's best interests.
Engage in Demonstrations and Protests
Organize or join protests. Participate in rallies and marches to raise public awareness about the potential risks of uncontrolled AGI development and demand greater transparency and accountability from AI developers.
Target key institutions. Organize demonstrations outside the offices of tech companies, government agencies, and research institutions involved in AGI development.
Use creative tactics. Employ attention-grabbing tactics like street theater, art installations, and public debates to spark conversations and engage the public.
Political Advocacy
Lobby elected officials. Contact your representatives and senators to express your concerns about AGI and urge them to support legislation that promotes responsible AI development.
Support political campaigns. Donate to or volunteer for candidates who prioritize AI safety and regulation.
Participate in public consultations. When governments and organizations seek public input on AI policy, submit your comments and attend public hearings.
Form advocacy groups. Create or join organizations dedicated to influencing AI policy and promoting public awareness.
Engage with international organizations. Support efforts by international bodies like the United Nations to develop global guidelines for AI development and governance.
Examples
Organize a "Slow Down AGI" march. Gather concerned citizens to march on the headquarters of a leading AI research lab, demanding greater transparency and public engagement in their work.
Launch a petition. Create an online petition calling for a moratorium on certain types of AGI research until adequate safety measures are in place.
Stage a "die-in" protest. Lie down in a public space to symbolize the potential existential threat posed by uncontrolled AGI.
Lobby for a "National AI Safety Agency." Advocate for the creation of a government agency dedicated to overseeing AI development and ensuring its safety and alignment with human values.
By combining these strategies with the previous suggestions, individuals can amplify their voices and exert greater influence on the future of AGI. Remember, collective action can create powerful momentum for change.
Conclusion
AI will do a lot of good.
People who run AI companies are not "evil."
Will AGI be a net good? It’s hard to say.
I fail to see the harm in slowing down and giving more people a say.