A.I. is on the front lines of the war in Ukraine

War is terrible. But it has often played a pivotal role in advancing technology. And Russia’s invasion of Ukraine is shaping up to be a key proving ground for artificial intelligence, for ill and, perhaps in a few instances, for good, too.

Civil society groups and A.I. researchers have been increasingly alarmed in recent years about the advent of lethal autonomous weapons systems—A.I.-enabled weapons with the ability to select targets and kill people without human oversight. This has led to a concerted effort at the United Nations to try to ban or at least restrict the use of such systems. But those talks have so far not resulted in much progress.

Meanwhile, the development of autonomous weapons has continued at a quickening pace. Right now, those weapons are still in their infancy. We won’t see humanitarian groups’ worst nightmares about swarms of “slaughterbot” drones realized in the Ukraine conflict. But weapons with some degree of autonomy are likely to be deployed by both sides.

Already, Ukraine has been using the Turkish-made TB2 drone, which can take off, land, and cruise autonomously, although it still relies on a human operator to decide when to drop the laser-guided bombs it carries. (The drone can also use lasers to guide artillery strikes.) Russia, meanwhile, has a “kamikaze” drone with some autonomous capabilities called the Lancet, which it reportedly used in Syria and could use in Ukraine. The Lancet is technically a “loitering munition” designed to attack tanks, vehicle columns, or troop concentrations. Once launched, it circles a predesignated geographic area until it detects a preselected target type, then crashes into the target and detonates the warhead it carries.

Russia has made A.I. a strategic priority. Vladimir Putin, the country’s president, said in 2017 that whoever becomes the leader in A.I. “will become the ruler of the world.” But at least one recent assessment, from researchers at the U.S. government–funded Center for Naval Analyses, says Russia lags the U.S. and China in developing A.I. defense capabilities.

In an interview with Politico last week, Samuel Bendett, one of the study’s authors, said that Russia would definitely use A.I. in Ukraine to help analyze battlefield data, including surveillance footage from drones. He also said China might supply Russia with more advanced A.I.-enabled weapons for use in Ukraine in exchange for insights into how Russia integrates drones into combat operations, an area in which Russia has battle-tested experience from Syria that China lacks.

A.I. might also play a vital role in the information war. Many fear that deepfakes, highly realistic fake videos generated with A.I., will supercharge Russian disinformation campaigns, although so far there is no evidence of their use in this conflict. Machine learning can also help detect disinformation. The large social media platforms already deploy such systems, although their track record in accurately identifying and removing disinformation is spotty at best.

Some people have also suggested A.I. can help analyze the vast amount of open source intelligence coming out of Ukraine—everything from TikTok videos and Telegram posts of troop formations and attacks uploaded by average Ukrainians to publicly available satellite imagery. This could allow civil society groups to fact-check the claims made by both sides in the conflict as well as to document potential atrocities and human rights violations. That could be vital for future war crimes prosecutions.

Finally, the war has deeply affected the world’s A.I. researchers, as it has everyone else. Many prominent researchers have debated on Twitter how the profession should best respond to the conflict and how the technology they work on could help end the fighting and alleviate human suffering, or at least prevent future wars. The tech publication Protocol has an overview of the discussion, much of which, to my ears at least, seemed oddly naive and disconnected from the realities of international politics and of war and peace.

See FULL STORY at Fortune Magazine.

Frank Miele