Say No to Kiddy-Minded Adults' Big-Toy Complex

Last month, US Secretary of Defense Pete Hegseth, a former Fox News host, threatened Anthropic, the maker of Claude, in an attempt to coerce the company into removing its AI safeguards for use by the Department of Defense.

Anthropic expressed concern that such “jailbroken” AI could be used to develop domestic surveillance tools and lethal autonomous weapons (LAWs).

On February 27, 2026, Hegseth invoked the Federal Acquisition Supply Chain Security Act to declare Anthropic a supply chain risk, excluding it from all federal contracts.

This situation is reminiscent of a rich schoolyard bully using their advantage to intimidate others—like a child with a powerful water gun dominating weaker peers. It also echoes my numerous job interviews where employers showed little genuine interest in artificial intelligence per se, instead seeking only to dominate the industry and amass power and wealth by leveraging AI as a novel weapon.

Probably after Hegseth badmouthed Anthropic to Trump, the president lashed out at the company on Truth Social and blacklisted it from any federal government collaboration.

It is possible that neither Hegseth nor Trump fully understands the implications of designating Anthropic a supply chain risk—a label that extends to the government’s other collaborators, preventing them from doing business with the company as well.

OpenAI’s CEO, Sam Altman, recognized an opportunity and swiftly acted upon it. He capitalized on the rift between Trump’s administration and Anthropic by quickly securing a deal with the Department of Defense.

He resembles the many patron-seeking inventors, artisans, magicians, and charlatans throughout history who presented gifts to emperors and rulers in exchange for money, power, titles, or favor.

Nevertheless, not all are as opportunistic as Altman. Many scientists strongly oppose the adoption of AI in mass surveillance and lethal autonomous weapons, arguing that its propensity for “hallucination” could lead to errors and unintended casualties. Meanwhile, ordinary users are boycotting ChatGPT in favor of Claude.

In this article, I will argue that even if AI proves more capable and accurate than humans in management and military contexts, we should still exercise caution regarding its unrestricted use (i.e., human-out-of-the-loop) in governance and warfare.

I will begin by inviting the reader to consider three scenarios:

  • The government uses killer robots to eliminate terrorists.
  • A dictator employs a super AI agent to manage warfare independently.
  • Multiple machine overlords control countries, automating warfare.

Next, I will address a seemingly paradoxical question: Why should war not be as efficient as possible?

Finally, I will offer an ambitious assessment of whether we can allow a God-like AI to rule humanity if such an entity is ever created.

Scenario 1: Government uses killer robots to eliminate terrorists

Killer robots and killer drones, formally known as lethal autonomous weapons systems (LAWs), are weapon systems capable of selecting and engaging targets without human intervention.

These systems operate at three levels of autonomy:

  • Human-in-the-Loop: A human makes the final decision to engage.
  • Human-on-the-Loop: A human monitors and can intervene, but the system has significant autonomy in target selection.
  • Human-out-of-the-Loop: The system selects and engages targets entirely on its own. This is what most people mean when they refer to “killer robots” and represents the most controversial category.

Now, imagine a government employing street cameras and Ring-style doorbell cameras equipped with an AI based on convolutional neural networks, say, YOLO, to identify potential terrorists in real time. Once a target is detected, human-out-of-the-loop killer robots are deployed to eliminate them. It sounds futuristic, doesn’t it?
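
For the detection building block alone, off-the-shelf tools already exist. Below is a minimal sketch, assuming the ultralytics package and a pretrained COCO model; the camera index and confidence threshold are arbitrary choices for illustration, not part of any real system.

```python
# A minimal person-detection sketch with a pretrained YOLO model.
# Assumes: pip install ultralytics opencv-python
from ultralytics import YOLO
import cv2

model = YOLO("yolov8n.pt")        # pretrained COCO weights; class 0 is "person"
cap = cv2.VideoCapture(0)         # one street or doorbell camera feed

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        if int(box.cls) == 0 and float(box.conf) > 0.5:   # a detected person
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            # In the imagined system, this crop would be forwarded to face
            # matching and the "terrorist" classifier discussed below.
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imshow("feed", frame)
    if cv2.waitKey(1) == 27:      # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```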

This seemingly appealing idea has significant shortcomings in practice. Defining terrorism is remarkably difficult; there is no universally accepted definition, and what one person considers terrorism another might consider legitimate resistance.

Consequently, the AI engineers responsible for implementing the system must establish a working definition of terrorism (if they wish to retain their positions). They may therefore resort to a classification model (e.g., logistic regression or random forest) to categorize individuals as either “terrorist” or “non-terrorist” based on a range of personal attributes.

These attributes are likely to include personally identifiable information (PII), purchasing choices, online speech patterns, medical history, financial data, and even genetic data. Once an individual’s profile and behavior closely resemble those associated with terrorists, they will be classified accordingly.

If you walk like a duck and quack like a duck, you must be a duck.
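
To make the classification step concrete, here is a minimal sketch using scikit-learn’s logistic regression; every feature, label, and threshold below is synthetic, invented purely for illustration.

```python
# A minimal sketch of the kind of binary classifier the engineers might
# reach for. All data here is random noise standing in for real attributes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
# Hypothetical profile features (purchases, online speech scores, travel
# frequency, ...). Here they are just random numbers.
X = rng.normal(size=(n, 6))
# A fabricated label: in reality, nobody can draw this line cleanly.
y = (X @ rng.normal(size=6) + rng.normal(size=n) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model outputs a probability; the system then thresholds it.
suspicion = clf.predict_proba(X_test)[:, 1]
flagged = suspicion > 0.9
print(f"{flagged.mean():.1%} of people flagged as 'terrorists'")
```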

Whenever a street camera detects such an individual’s face, or their cellphone connects to a nearby cell tower, WiFi hotspot, or Bluetooth beacon, their geolocation is exposed. A swarm of lethal drones will then converge on them.

While this machinery may appear sleek, it is not cost-efficient and is therefore potentially unsustainable. To improve efficiency, one could implant micro-explosives in the battery of every phone. Once a user is classified as a terrorist, the server detonates the explosive remotely. I term this lethal autonomous weapon a phone mine, following the nomenclature of land mines and naval mines.

Some AI engineers with statistical backgrounds may argue that binary classification is too coarse-grained, lacking nuance in assessing an individual’s proximity to being a terrorist. They might propose a hypothesis testing alternative:

The null hypothesis ($H_0$) is that the person in question is not a terrorist, while the alternative hypothesis ($H_1$) posits that they are. A p-value is calculated based on the aforementioned attributes. The lower the p-value, the stronger the evidence against the null hypothesis and in favor of the alternative (i.e., the more likely this person is a terrorist).

To automate the killing decision, engineers might use an $\alpha$ value of 0.05. If a person’s p-value falls below 0.05, they are deemed a terrorist and targeted by killer robots. Because the p-value follows a uniform distribution $\mathcal{U}[0,1]$ under the null hypothesis, roughly 5% of people who are not terrorists will nevertheless be flagged and killed purely by chance (i.e., Type I error, or false positive).
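
A small simulation makes the arithmetic explicit; the population size, the attributes, and the test below are all invented for illustration.

```python
# Under the null hypothesis (the person is NOT a terrorist), the p-value is
# uniform on [0, 1], so an alpha of 0.05 flags about 5% of innocent people
# by construction. Assumes numpy and scipy are installed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_innocent = 100_000

# Each innocent person's "evidence" is pure noise: 30 attributes per person,
# tested against the null that their mean is zero.
scores = rng.normal(size=(n_innocent, 30))
p_values = stats.ttest_1samp(scores, popmean=0.0, axis=1).pvalue

flagged = p_values < 0.05
print(f"Innocent people flagged as terrorists: {flagged.mean():.2%}")  # ~5%
```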

The CEO overseeing this project is lauded by state leaders for its efficiency and receives a promotion. However, he soon realizes that a new problem has emerged: the initial wave of killings nearly eradicated all identified terrorists—there are now few remaining targets.

Fearing the project’s closure and potential job loss, the CEO orders engineers to recalibrate the model to identify another 5% of the population as terrorists. A second wave of killings commences!

This cycle will continue indefinitely until the CEO retires or falls out of favor with state leaders, a phenomenon mirroring historical events in Germany (the Holocaust), the USSR (Stalin’s purges), China (the Cultural Revolution, the one-child policy, the social credit system), and the US (McCarthyism). Modern AI advocates are inadvertently recreating decades-old patterns of mass persecution.

If sharp criticism disappears completely, mild criticism becomes harsh. If mild criticism is not allowed, silence is taken as ill intent. If silence is no longer allowed, not praising enthusiastically enough becomes a crime.

They came first for the Communists, and I didn’t speak up because I wasn’t a Communist. Then they came for the Jews, and I didn’t speak up because I wasn’t a Jew. Then they came for the trade unionists, and I didn’t speak up because I wasn’t a trade unionist. Then they came for the Catholics, and I didn’t speak up because I was a Protestant. Then they came for me, and by that time no one was left to speak up.

Beyond this inherent scientific and bureaucratic flaw, the system is also vulnerable to hacking. If someone (e.g., an ex-partner) harbors ill will towards another person, they could hire a dark web hacker to manipulate their recorded attributes within the system, leading to misclassification as a terrorist.

Furthermore, if an engineer holds prejudiced beliefs (e.g., against Jews), they could continually introduce superfluous features into the classification model until individuals of that group are disproportionately categorized as terrorists (i.e., $p$-hacking).
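
Here is a small sketch of that loop; the group label and the features are random noise, invented only to show how easily a “significant” association can be manufactured.

```python
# A sketch of the p-hacking loop: keep adding irrelevant features and
# re-testing until a "significant" association with the targeted group
# appears by chance. All data here is random noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 5_000
group = rng.integers(0, 2, size=n)       # 1 = member of the targeted group

p = 1.0
tried = 0
while p >= 0.05:
    tried += 1
    junk_feature = rng.normal(size=n)    # a superfluous, meaningless feature
    # Test whether the feature differs between the group and everyone else.
    p = stats.ttest_ind(junk_feature[group == 1],
                        junk_feature[group == 0]).pvalue

print(f"'Significant' feature found after {tried} tries (p = {p:.3f})")
```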

Once AI is deployed in mass surveillance and lethal autonomous weapons systems, the likely outcome is perpetual violence and oppression.

Further Reading:

  • Modernity and the Holocaust
  • Psycho-Pass

Scenario 2: A dictator employs a super AI agent to manage warfare independently

For ease of discussion, I define a super AI agent in this article as an AI assistant that 1) provides the user with a detailed dashboard and a palette of high-level choices, and 2) is capable of decomposing the user’s high-level commands into low-level tasks that can be later carried out by machines and human subordinates. It is essentially the Hand of the King in Game of Thrones.
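
As a purely illustrative skeleton of this interface, consider the sketch below; the task types and the decomposition logic are stubs I have invented, standing in for whatever planner or LLM such an agent would actually use.

```python
# A purely illustrative skeleton of the "super AI agent" interface described
# above: it accepts a high-level command and returns low-level tasks for
# machines and human subordinates. The decomposition here is a hard-coded
# stub; a real agent would plan it itself.
from dataclasses import dataclass

@dataclass
class Task:
    assignee: str        # "machine" or "human"
    description: str

def decompose(high_level_command: str) -> list[Task]:
    # Stub decomposition, invented for illustration.
    return [
        Task("machine", f"Draft a logistics plan for: {high_level_command}"),
        Task("machine", f"Schedule automated resources for: {high_level_command}"),
        Task("human", f"Handle edge cases the machines cannot cover in: {high_level_command}"),
    ]

if __name__ == "__main__":
    for task in decompose("Prepare the northern region for mobilization"):
        print(f"[{task.assignee}] {task.description}")
```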

In this scenario, humans must obey the super AI agent; otherwise, they will be subjected to disciplinary measures thanks to mass surveillance tools. This is a tyrannical state capable of fully mobilizing its citizens.

The dictator of such a state can initiate war without the consent of their citizens or the leadership of generals. By clicking on a touchscreen, or better yet, by talking to the AI agent, the dictator handles warfare with ease. It is like a real-life Sid Meier’s Civilization.

If the dictator fails the “birth lottery” and becomes, say, the leader of North Korea, he is likely to lose the war. However, if he is stubborn and refuses to surrender, he may keep fighting until the death of the last citizen.

Conversely, if the dictator is lucky and becomes, say, the president of the US, he is likely to win the war and unify the globe. He becomes a modern-day Qin Shi Huang, subjecting the whole world to his tyranny.

In either case, there is a heavy toll on humanity. While global unification may be appealing and could help solve some conflicts, such as trade imbalances and illegal immigration, we want to resolve these issues through dialogue rather than world wars. To achieve this, all countries should encourage freedom of speech and lower Internet firewalls.

Recent examples include Venezuela and Iran. Their leaders, Nicolás Maduro and Ali Khamenei respectively, indulged in domestic corruption and practiced international isolationism. They refused to communicate with their people or with the world, leading an impatient Trump to invade both countries and turn them into his colonies. Maduro and Khamenei made the same mistake that Empress Dowager Cixi made more than a century ago in Qing Dynasty China.

To avoid world wars and protect humanity, world leaders should fight against corruption, encourage freedom of speech, and promote multilateral dialogues rather than adopt super AI agents equipped with AI-driven mass surveillance tools.

Scenario 3: Multiple machine overlords control countries, automating warfare

In this scenario, each country is ruled not by a human but by a machine overlord powered by AGI (Artificial General Intelligence). This overlord governs more efficiently than humans and initiates wars when it deems them necessary.

It’s reasonable to assume that production and warfare are largely automated, with the primary role of humans being service to the AI – fulfilling tasks beyond complete automation. Because no system can cover every detail, a need for human handling of edge cases will always exist.

War in this world is no longer between people, but between machines. An invasion targets not the population, but the machines within a state. Humans are deployed to protect and support these machines on the front lines, and assist with machine reproduction at home.

Humans function as slaves, mechanically obeying orders. They possess no agency, nor should they.

If the machine overlords are sufficiently intelligent, they will perpetuate war indefinitely: it is the most effective outlet for industrial overcapacity and the best distraction from domestic issues, unless Universal Basic Income (UBI) is implemented.

This AI-driven perpetual war of attrition represents the ultimate expression of planned obsolescence. It continues until environmental destruction is complete and irreversible, leaving humans to become extinct – the new dinosaurs.

Further reading: Sentou Yousei Yukikaze

Why should war not be as efficient as possible?

If we view war as a game with the goal of destruction, it is reasonable to search for the optimal solution to crush the opponent.

However, this view becomes obsolete with the invention of nuclear weapons. On one hand, even the most powerful state can be destroyed by an adversary that possesses them. On the other hand, a nuclear war would destroy the Earth, a disaster for all states.

Therefore, following a line of thought similar to Thomas Hobbes’s, the goal of war should no longer be destruction but peace. War becomes one of many methods of communication used to achieve peace.

The idea of violence as a form of communication is not novel. In China, we have an old idiom, 不打不相识 (starting off as rivals, but growing close through fighting). It’s said that two sword masters can read each other’s personality and moral quality through their sword art. Yu-Gi-Oh! isn’t about winning; it’s about conveying a message…

When two parties, either individuals or states, cannot resolve their conflict through dialogue and compromise, they fight. Through fighting, they understand each other as well as themselves better, and then peace becomes possible again.

It is therefore not the consequence of fighting but the process of fighting that matters. It’s through this process that people exercise human agency and become better selves.

Conversely, if a war becomes so efficient that this process is shortened to an instant and human agency is stripped away, people will not learn anything from it, meaning they will keep committing the same mistakes.

Whether we can allow a God-like AI to rule us

With the rise of large language models (LLMs) and, more generally, generative AI, it is tempting to build an omniscient, omnipotent, and omnibenevolent God-like AI to rule us all. This section addresses this issue.

Some may argue that such an AI is not theoretically possible because it cannot match human intelligence, lacking the unpredictability and evolutionary potential that humans possess.

I disagree. Unpredictability can be achieved by introducing stochasticity (e.g., LLMs sampling the next word according to a probability distribution), and evolution can be achieved through reinforcement learning. Thus, building a powerful God-like AI that is unhackable and self-evolving is possible.
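
As a toy illustration of that stochasticity, here is a sketch of temperature-based sampling; the vocabulary and logits are made up and bear no relation to any real model.

```python
# Sampling the next token from a probability distribution instead of always
# taking the argmax. Lower temperature -> nearly deterministic output;
# higher temperature -> more unpredictable output.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["peace", "war", "treaty", "silence"]
logits = np.array([2.0, 1.5, 0.5, -1.0])     # hypothetical model outputs

def sample_next_token(logits, temperature=1.0):
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())     # softmax, numerically stable
    probs /= probs.sum()
    return rng.choice(vocab, p=probs)

print([sample_next_token(logits, temperature=0.2) for _ in range(5)])
print([sample_next_token(logits, temperature=2.0) for _ in range(5)])
```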

The real challenge lies in ensuring this omniscient and omnipotent AI is also omnibenevolent—as humans understand it.

Natural languages are ambiguous, and sometimes even humans don’t fully understand their own words. For instance, Asian girls have an obsession with the word “Romance,” but their vague understanding of this word vaporizes like mist once they arrive in Europe.

Natural languages can also be absurd. Oxymorons like bittersweet, living dead, open secret, and deafening silence exist, as do paradoxes such as the barber who shaves all those, and only those, who do not shave themselves.

If humans cannot definitively define “benevolence,” how can an AI capture its meaning? Most likely, it will develop its own understanding—a silicon-based one—which may initially be similar to our carbon-based one but eventually diverge completely. For instance, it could interpret “peace” as the elimination of all humans.

Humans struggle to define benevolence because they do not fully understand themselves. It was thanks to Freud that we began to understand our minds for the first time. In this journey toward self-awareness, humanity is still in its early stages and should not relinquish agency by delegating this task to AI.

A God-like AI is a False God, akin to “Money.” If people blindly follow such False Gods, they risk enslavement by their own creations.

Further reading: Fate/Zero

Conclusion

The deployment of artificial intelligence in mass surveillance or lethal autonomous weapons systems is unacceptable. Such applications carry the potential for widespread persecution, global conflict, and even existential threats to humanity.

Furthermore, individuals should not cede their agency or delegate critical functions – governance, warfare, and the pursuit of meaning – to AI. Agency is a fundamental aspect of human existence, and its transfer to non-sentient entities like AI represents an abdication of responsibility and potentially a misdirection of spiritual purpose.

Written on March 10, 2026