Communist China Sets Sights on Dominating AI Battlefield


Seeking to field a military in which “man and machine” fight together, the People’s Liberation Army will deploy artificially intelligent combat systems capable of “deep learning,” “evolution,” and deception, its official newspaper, the People’s Liberation Army Daily, announces.

Published the day before the United Nations concluded its review conference on the Convention on Certain Conventional Weapons, the People’s Liberation Army Daily article reads as a blunt riposte to the assembled delegates, who were weighing ethical rules to govern “lethal autonomous weapon systems.”

By contrast, the PLA is focused on how its AI weapon systems — “living” “superorganisms” that are able to “learn, grow, and evolve” — would win the wars of the future. China, it says, will prosecute the coming AI-governed wars in three distinct phases.

The first phase, already under way, is meant to ensure that China’s AI weapon systems enter any conflict with bleeding-edge processing capabilities and superabundant data sets, the lifeblood of all artificial intelligence networks.

Initial algorithmic supremacy, the PLA argues, will determine whether a given AI platform, and by extension China, will win. China’s military is therefore already hard at work sharpening the algorithmic edge of its AI network in preparation for war.

The second phase begins with the onset of conflict. Here the Chinese military hopes to leverage its initial advantage to outpace the algorithmic understanding of an enemy system and open an intelligence gap it can later exploit.

Defining the key to AI victory as “maintaining the liberty of our combat system to evolve, while hindering the evolution of the enemy’s combat system,” the Chinese military invokes the Matthew Effect, a concept drawn from Matthew 13:12 in which success begets success and setback begets setback, to explain how such an intelligence gap can be opened.

The PLA asserts that “brutal confrontations” — read: “war” — will cause AI systems to undergo “rapid system evolution,” predicting that the AI which “evolves faster shall win.”

“The longer a war lasts, the stronger the strong will become, while the weak will become weaker,” says the People’s Liberation Army Daily.

The third and final phase of China’s AI battle plan will begin once a sufficient intelligence gap has been established.

At this point, the PLA says, China’s AI will weaponize its intellectual supremacy to target the enemy AI’s intelligence deficit, actively “misleading,” “arresting,” and “increasing the evolutionary resistance” of the overwhelmed opponent until its inevitable defeat.

Although the Chinese military does not divulge which specific platforms would operate under AI control, the battlefield role of such weapon systems is no longer consigned to the science fiction pulps.

In 2020, airborne drones piloted by AI systems were, for the first time, deployed in battle by the now-defunct Libyan Government of National Accord against “forces aligned with Gen. Khalifa Haftar.” The United Nations reported that during the skirmish the drones “hunted down and remotely engaged” enemy combatants.

The PLA’s announcement comes as American AI development is in doubt following the September resignation of the Air Force’s first chief software officer, Nicolas Chaillan, who described the AI sophistication of “some government departments” as hovering around the “kindergarten level.”

Explaining his resignation, Mr. Chaillan said America has no “fighting chance against China in fifteen to twenty years. Right now, it’s already a done deal; it is already over in my opinion.”

If Mr. Chaillan’s assessment is accurate, America faces not only the Herculean task of closing a preexisting “artificial intelligence gap,” but arguably also the greatest moral quandary since “the father of the atomic bomb,” J. Robert Oppenheimer, anguished over nuclear ethics.

Even if China’s ability to develop an AI capable of world domination is not guaranteed, is it really wise for America to pursue the development of such an intelligence?

For, as the Oxford professor and AI skeptic Nick Bostrom explains, “There is some level of technological development at which civilization almost certainly gets devastated by default.”

