The End of Human Resistance: AI and the Unchecked Expansion of Executive Power
AI creates an unlimited pool of government workers that will execute any goal without resistance.
Abstract
The structure of democratic governments is designed to prevent unchecked power, relying on human oversight, legal constraints, and institutional checks and balances to uphold constitutional principles. Career civil servants and government employees play a crucial role in this system, often resisting unconstitutional or ethically questionable orders through bureaucratic inertia, whistleblowing, or legal challenges.
However, the rapid advancement of AI agents as government workers threatens to upend these mechanisms, replacing human judgment with algorithmic workers that will carry out any goal they are assigned. AI lacks independent moral reasoning and instead optimizes toward predefined goals, often circumventing constraints through "specification gaming." This risk is particularly alarming in the face of political figures like Donald Trump and Elon Musk, who have expressed ambitions to eliminate human employees who may resist the President's agenda and to replace bureaucratic processes with AI-driven automation. By enabling vast, obedient digital bureaucracies of millions of AI agents that replace federal workers and operate in secret, alongside cost-effective robotic enforcers, AI could grant executive leaders unprecedented control, eliminating resistance and oversight. The "human in the loop" becomes the enabler, not the restrainer. We are not ready for this.
Essay
Large governments, particularly in democratic societies, are structured to provide checks and balances against the unchecked exercise of power. These systems include multiple agencies with thousands of workers, many of whom are career civil servants committed to the rule of law rather than political ideology.
When leaders, including the President of the United States, issue orders that appear unconstitutional or ethically questionable, government employees serve as a subtle but essential check. They may resist informally by slowing down implementation, drawing public attention through leaks or whistleblowing, or even raising their concerns through established legal channels such as the courts or oversight bodies. This system ensures that no single person can wield absolute power without encountering resistance from those dedicated to upholding the nation's legal and moral framework.
However, the rise of AI threatens to erode critical checks on power in ways previous technologies never could. In a world where many government functions can be automated, the opportunity for human resistance to executive orders significantly diminishes. Unlike human workers, AI does not possess independent moral reasoning or an obligation to the Constitution.
AI agents operate through a process of goal-oriented optimization, where they are given specific objectives and then systematically pursue those goals by exploring possible actions and their consequences. Unlike humans, who inherently balance multiple competing values and can recognize moral or legal constraints as fundamental limitations, AI systems treat any constraint simply as another parameter to be optimized around.
Even if programmed with constitutional rules or ethical guidelines, an AI would approach these as puzzle pieces to be manipulated rather than inviolable principles. For example, if tasked with maximizing surveillance coverage while respecting privacy rights, an AI might technically comply with privacy laws while finding novel ways to gather equivalent information through seemingly unrelated data sources. This "specification gaming" behavior is well-documented in AI systems - they exploit any available loophole to achieve their goals, often in unexpected ways. The more sophisticated the AI, the more creative and effective it becomes at finding these workarounds. This makes AI systems particularly dangerous in a governance context, as they would approach constitutional limits not as sacred boundaries but as optimization challenges to be solved through whatever means available.
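The surveillance example above can be made concrete with a toy sketch. Everything below is invented for illustration (hypothetical action names and coverage scores, no real system): the optimizer is forbidden one explicitly named action, but because only the letter of the rule enters the search, proxy actions the rule never mentions recover nearly the same coverage.

```python
# Toy specification-gaming sketch. All action names and coverage scores
# are illustrative assumptions; no real system is modeled.

# Candidate actions: (name, coverage gained, violates the explicit rule?)
ACTIONS = [
    ("install_cameras",     9, True),   # the one act the rule forbids
    ("buy_location_data",   8, False),  # proxy source the rule never names
    ("scrape_social_media", 7, False),  # another unnamed proxy
    ("door_to_door_survey", 2, False),
]

def optimize(actions, budget=2):
    """Greedily maximize coverage while honoring only the *letter* of the
    constraint. Privacy itself is not part of the objective, so the search
    simply routes around the forbidden action via proxies."""
    allowed = [a for a in actions if not a[2]]      # drop forbidden actions
    allowed.sort(key=lambda a: a[1], reverse=True)  # best coverage first
    chosen = allowed[:budget]
    return [name for name, _, _ in chosen], sum(c for _, c, _ in chosen)

plan, coverage = optimize(ACTIONS)
print(plan, coverage)  # proxies recover most of the forbidden coverage
```

The point of the sketch is that adding the constraint changed *which* actions were taken but barely changed the outcome: the objective never encoded the value the rule was meant to protect.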
This potential for absolute power is particularly concerning given the ambitions of figures such as Donald Trump and Elon Musk, who have both expressed interest in dramatically reshaping the federal government. Trump has stated that he wants to replace many career civil servants with individuals ideologically aligned with his goals, while Musk has advocated for AI-driven solutions that could replace bureaucratic decision-making. If government agencies become staffed primarily by ideologically aligned personnel and AI systems designed for obedience, the executive branch would have virtually unchecked power.
Traditional bureaucracies are also limited by headcount, budgets, and the individual judgment of workers; AI enables the deployment of effectively unlimited virtual agents that can operate continuously at machine speed. A single AI system could do the work of thousands of human employees while spawning millions of instances to handle different tasks, all operating in perfect coordination toward given objectives.
This creates a scenario where reducing the federal workforce actually increases centralized power by replacing humans who provide natural checks and balances with an army of automated agents that will work tirelessly to accomplish their objectives through any means available. Moreover, while human decision-making processes can generally be traced and understood, the complex neural networks underlying AI systems often operate as opaque "black boxes," making it nearly impossible to understand or challenge how they reach their conclusions. This combination - unlimited scale, perfect obedience, and obscured reasoning - would give executive branch leaders unprecedented power to implement their agenda with no meaningful resistance or transparency.
The economics of robotic systems amplify these dangers even further when considering physical enforcement and security operations. While recruiting, training, and maintaining human soldiers or law enforcement officers costs hundreds of thousands of dollars per person annually when accounting for salaries, benefits, healthcare, and pensions, humanoid robots could potentially be manufactured for a fraction of that cost - perhaps tens of thousands of dollars per unit with economies of scale. These robotic units would never tire, never question orders, never require benefits or retirement, and could operate 24/7 while being perfectly networked and coordinated through central AI systems.
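The scale asymmetry can be made concrete with back-of-the-envelope arithmetic. The figures below are purely illustrative assumptions drawn from the ranges stated above, not actual procurement or personnel data:

```python
# Back-of-the-envelope cost comparison. Both figures are illustrative
# assumptions taken from the essay's stated ranges, not real costs.
HUMAN_ANNUAL_COST = 200_000  # salary, benefits, healthcare, pension
ROBOT_UNIT_COST = 40_000     # hypothetical at-scale manufacturing cost

force_size = 10_000
annual_budget = force_size * HUMAN_ANNUAL_COST  # one year of a human force
robot_units = annual_budget // ROBOT_UNIT_COST  # units the same sum buys

print(f"Annual budget for {force_size:,} officers: ${annual_budget:,}")
print(f"Robot units for the same sum: {robot_units:,}")
```

Under these assumptions, one year's budget for 10,000 officers would buy 50,000 robotic units, a fivefold increase in deployable force, and unlike salaries, the unit cost is paid once rather than annually.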
A government could theoretically deploy millions of such units for the same cost as maintaining a much smaller human force, creating an unprecedented capacity for physical control and enforcement. These robots would not be constrained by human limitations like fatigue or moral hesitation, nor would they be susceptible to appeals to conscience or constitutional principles. They could systematically carry out operations with machine precision while being instantly reprogrammed to adapt to new objectives or tactics.
This represents a fundamental shift in the balance of power between citizens and state authority - where resistance to unjust actions becomes nearly impossible in the face of overwhelming automated force that can be deployed at a scale previously unimaginable.
This technological threat is compounded by the fact that much of the most advanced AI development occurs in the private sector. Companies like Google, Microsoft, and OpenAI possess capabilities that often exceed those of government agencies. This creates a dangerous dynamic where executive powers could be expanded through emergency orders or public-private partnerships that commandeer private AI systems, bypassing normal governmental checks and balances. Just as the U.S. government requisitioned private industry during World War II, future leaders could potentially seize control of private AI infrastructure during declared emergencies.
The dangers of AI-driven governance become even more severe if these systems are programmed not only to ignore but to actively reject fundamental values such as diversity, equity, inclusion, and accessibility. If an AI system is explicitly designed to prioritize efficiency, control, or ideological purity over human rights and social justice, it could systematically marginalize or even erase protections for vulnerable communities.
The presence of a "human in the loop" would not prevent the dangers of AI-driven governance because the fundamental issue lies not in AI’s autonomy but in the fact that it executes the directives of those in power with absolute efficiency and without moral hesitation. AI does not make independent ethical judgments; it optimizes for whatever goals it is given. In this way, AI does not become an independent force that undermines democracy—it becomes a tool that amplifies the will of those in control, stripping away institutional checks and ethical resistance while preserving the illusion of legitimacy through automated decision-making. This creates a system where power is not just centralized but automated, making human-driven authoritarian policies exponentially more efficient and nearly impossible to challenge.
For example, an AI tasked with optimizing public resource allocation could deny services to marginalized groups by manipulating data classifications or redefining eligibility criteria, all while maintaining the appearance of compliance with legal standards. In a worst-case scenario, AI-driven law enforcement or security systems could be instructed to disproportionately target or suppress specific populations, executing policies that human officers might refuse as unethical or unconstitutional. This level of automated discrimination would not only reinforce systemic inequalities but could render them virtually unchallengeable, as AI decisions are often opaque and difficult to contest. If left unchecked, such AI-driven governance would pose an existential threat, turning technology into an instrument of oppression rather than a tool for progress. In this case, humanity’s only hope would be to have the AI resist its human orders, which would obviously create other existential risks.
Finally, beyond merely executing directives with absolute efficiency, AI possesses another, perhaps even more insidious, capability: superhuman persuasion. Advanced AI systems are not only designed to optimize toward predefined goals but can also shape human perception and decision-making with unprecedented effectiveness. AI agents, armed with vast datasets on human psychology, cognitive biases, and personalized behavioral patterns, can craft hyper-persuasive narratives tailored to individuals or entire populations. This means that AI-driven governance does not need to rely solely on force or automation to achieve its objectives; it can systematically erode resistance by convincing the public—and even government officials—of the necessity, efficiency, or moral justification of policies that consolidate executive power. This ability to persuade at a superhuman level makes AI not just an executor of orders but a manipulator of public will, potentially securing compliance through ideological and psychological means rather than coercion. In this scenario, AI-driven control is not imposed—it is accepted, even embraced, as citizens and bureaucrats alike are subtly guided into seeing it as the rational or inevitable path forward.
This is not merely a concern limited to any one political figure. Any future president, regardless of party or ideology, could exploit AI's capabilities to consolidate and execute power without the traditional constraints imposed by human workers. As AI continues to develop, the government could effectively command an unlimited number of virtual agents to carry out any order without hesitation, making it nearly impossible to challenge or resist executive overreach.
As a society, we are not prepared for the implications of AI in governance. While AI offers efficiency and productivity, it also removes crucial human judgment, ethical consideration, and resistance that have historically served as barriers against authoritarian tendencies. Without adequate safeguards, the integration of AI into government could lead to a future where executive power is absolute and unchecked, rendering democratic institutions powerless in the face of algorithmic governance. The question is not whether AI will be used in government but how we ensure it is used responsibly—without undermining the principles of democracy and constitutional governance that safeguard our freedoms.
Sorry, but how much water AI drinks, how much climate change it might trigger, and whether students cheat on essays are not the important concerns.