The Pentagon is investing in weapons that can decide when to kill on the battlefield. But can we teach machines to fight ethically?


Zachary Fryer-Biggs | Special to The Washington Post

The camouflage-clad cadets are huddled around a miniature arena in the basement of a building dug into the cliffs on the West Point campus. They’re watching a robotic tank about the height of a soda can with a metal spear attached whir into action.

Surrounded by balloons of various colors representing either enemy fighters or civilians, the tank, acting on its own, uses a thumbnail-size camera to home in on a red balloon. The cadets wince as an earsplitting pop suddenly reverberates through the room: One “ISIS fighter” down.

That startling bang is the object of this exercise, part of a class in military ethics being taught to these sophomores at the U.S. Military Academy. The cadets have programmed the tank with an algorithm directing it to use its lance to “kill” the enemy fighters; now they are tweaking it to make the robot either more or less aggressive in fulfilling its mission — with as little harm to unintended targets as possible.
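To get a feel for the knob the cadets are turning, consider a stripped-down sketch, illustrative only and not the cadets' actual code: a single "aggressiveness" setting determines how confident the robot must be that a balloon is an enemy before it strikes. Dial it up and more enemies die, but so might a civilian balloon the camera misreads.

```python
# Toy illustration only -- not West Point's software. A detector assigns each
# balloon a probability of being an "enemy"; a single threshold decides
# whether the robot strikes. Tuning that threshold is the cadets' dilemma.

def choose_targets(detections, aggressiveness=0.5):
    """Return the balloons the robot would strike.

    detections: list of (balloon_id, p_enemy) pairs, where p_enemy is the
        detector's estimated probability that the balloon is an enemy fighter.
    aggressiveness: strike anything scored at or above (1 - aggressiveness).
    """
    threshold = 1.0 - aggressiveness
    return [bid for bid, p_enemy in detections if p_enemy >= threshold]

if __name__ == "__main__":
    # Hypothetical detector scores: the red balloon scores high, but an
    # imperfect camera also gives one civilian balloon a middling score.
    scene = [("red-1", 0.95), ("blue-1", 0.10), ("yellow-1", 0.55)]
    print(choose_targets(scene, aggressiveness=0.3))  # cautious: ['red-1']
    print(choose_targets(scene, aggressiveness=0.6))  # aggressive: ['red-1', 'yellow-1']
```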

With a panoply of deadly autonomous weaponry under development today, the popping balloons are meant to trigger questions in the students’ minds about the broader ethical decisions they will face as commanders in the field. Col. Christopher Korpela, the director of West Point’s robotics center and an earnest spark plug of a man in both demeanor and frame, considers the deployment of such killing machines to be inevitable and wants to ensure that these officers-in-training are prepared to cope with them.

“There’s this very visceral effect, where this robot is popping a balloon,” Korpela says. “It’s a balloon, but it’s being destroyed like a human would be, and it makes it a little more real.”

The scale of the exercises at West Point, in which roughly 100 students have participated so far, is small, but the dilemmas they present are emblematic of how the U.S. military is trying to come to grips with the likely loss of at least some control over the battlefield to smart machines.

The future may well be shaped by computer algorithms dictating how weapons move and target enemies. And the cadets’ uncertainty about how much authority to give the robots and how to interact with them in conflict mirrors the broader military’s ambivalence about whether and where to draw a line on letting war machines kill on their own. Such autonomous machines were once so far beyond the technical grasp of scientists that debating their ethics was merely an intellectual exercise. But as the technology has caught up to the idea, that debate has become very real.

Already, the U.S. Navy is experimenting with ships that can travel thousands of miles on their own to hunt for enemy submarines or ships that could fire guns from just offshore as the Marines storm beaches. The Army is experimenting with systems that will locate targets and aim tank guns automatically. And the Air Force is developing deadly drones that could accompany planes into battle or forge ahead alone, operating independently from “pilots” sitting thousands of miles away in front of computer screens.

AIs draw fire

But while the march toward artificial intelligence in war continues, it doesn’t progress uncontested. Mary Wareham is one of the leading activists pushing governments to consider the moral ramifications of using AI in weapons. Originally from New Zealand, Wareham, whom I spoke to at her D.C. office in July 2019, has spent most of the past 20 years working for Human Rights Watch, trying to get governments to ban antipersonnel weapons such as cluster bombs and land mines. Now, as the advocacy director for the organization’s arms division, she is working to persuade world leaders to impose sweeping restrictions on autonomous weapons.

In October 2012, Human Rights Watch and a half-dozen other nongovernmental organizations — worried about the rapidly growing capability of drones and the breakneck pace of innovation in artificial intelligence — hatched the Campaign to Stop Killer Robots. The following year, the U.N. Convention on Certain Conventional Weapons (CCW) took up the question of whether the creation, sale and use of lethal autonomous weapons systems should be banned outright. Every year since then, Wareham has joined others from the Campaign to Stop Killer Robots in pressing her cause in the same dilapidated room at the United Nations’ office in Geneva.

Her core argument is that, because machines lack compassion and can’t sort through difficult ethical alternatives, using them to kill crosses a moral threshold. Machines, she argues, can’t judge whether their actions create a justifiably proportional risk to civilians, a key requirement in international law. Plus, she adds, a widespread embrace of such machines could make wars more likely, as robots might make tragic mistakes and it wouldn’t be clear who should be held responsible. Is it the person who launched the weapon? The weapon’s designer? Its builder? A public-opinion poll the campaign conducted in December 2020 found majority opposition to the development of AI weapons in 26 out of 28 nations surveyed, including the United States, Russia and China.

But thus far Wareham has made little headway in getting a ban through the CCW, which works as a consensus body; no draft treaty is presented to the United Nations unless all 125 member countries consent. So far only 30 nations have said they agree, while the United States, Russia and Israel, which are investing deeply in AI weaponry, have refused. (China has quixotically supported a ban on the use but not the development or production of such weapons.) If those countries don’t want a legally binding treaty, Wareham says, “We’re asking, ‘What can you support?’ Because it seems like nothing at the moment. … We’re in a dangerous place right now.”

In addition to the moral conundrums posed by AI, there remains a pervasive unpredictability to computer decision-making, which diverges from human logic in ways that can inadvertently cause casualties or mission failure. Machines can lack common sense, because computers seek the most direct solution to a problem, not the most ethical or practical one. In 2018, for example, a self-driving car being tested by Uber struck and killed a woman in Arizona. A nearly two-year government investigation revealed that the car hadn’t malfunctioned; rather, it had been programmed to look only for pedestrians in crosswalks. Jaywalking, as the woman was doing, was beyond the system’s grasp, so the car barreled ahead.

AI researchers call that “brittleness,” and such an inability to adjust is common in systems used today. This makes decisions about how much battlefield risk to embrace with AI particularly challenging. What if a slight uniform variation — some oil soaked into a shirt or dirt obscuring a normal camouflage pattern — confuses a computer, and it no longer recognizes friendly troops?
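A toy sketch can make that brittleness concrete. The rule below is a made-up simplification, not Uber's software or any military system: it brakes only for pedestrians detected inside a crosswalk, mirroring the failure described above. Anything outside the anticipated case, whether a jaywalker or a soldier in a mud-caked uniform, simply falls through the logic.

```python
# Deliberately brittle toy logic -- illustrative only. The rule covers the
# anticipated case (a pedestrian in a crosswalk) and silently ignores the
# unanticipated one (a jaywalker), the failure mode described above.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str           # "pedestrian", "vehicle", ...
    in_crosswalk: bool  # whether the object overlaps a mapped crosswalk

def should_brake(detections):
    """Brittle rule: brake only for pedestrians inside crosswalks."""
    return any(d.kind == "pedestrian" and d.in_crosswalk for d in detections)

if __name__ == "__main__":
    print(should_brake([Detection("pedestrian", in_crosswalk=True)]))   # True
    print(should_brake([Detection("pedestrian", in_crosswalk=False)]))  # False: the jaywalker is invisible to the rule
```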

Machines present another potential defect: In their search for mission success, they can be ruthless cheats. For decades AI researchers have designed games as a testing ground for algorithms and a measure of their growing wisdom. Games, with their highly structured rules and controlled conditions, offer a safe nursery in which computers can learn. But in a notorious case, an AI system taught to play Tetris by researcher Tom Murphy at Carnegie Mellon University was instructed not to lose. As blocks descended faster and faster from the top of the screen, it faced inevitable defeat. So the algorithm found an ingenious solution: Pause the game and leave it paused — thus avoiding a loss. That kind of indifference to broader norms about fairness doesn’t matter in a game but could be catastrophic in warfare.
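The Tetris episode is an instance of what AI researchers sometimes call specification gaming: the objective as written ("don't lose") differs from the objective intended ("play well"). The sketch below is a made-up toy, not Murphy's program, but it shows why, by the letter of such a rule, pausing forever is the optimal move.

```python
# Toy illustration of specification gaming -- not Tom Murphy's Tetris player.
# The objective only penalizes losing, so an agent that pauses the game
# forever never triggers the penalty and comes out ahead.

LOSS_PENALTY = -1000

def evaluate(plan, steps=100):
    """Score a plan under the naive objective: only losing is penalized."""
    score = 0
    for t in range(steps):
        action = plan(t)
        if action == "pause":
            continue              # nothing happens; no penalty can accrue
        if action == "play":
            score += 1            # small reward for clearing lines
            if t > 50:            # blocks eventually fall too fast to survive
                return score + LOSS_PENALTY
    return score

keep_playing = lambda t: "play"
pause_forever = lambda t: "pause"

print(evaluate(keep_playing))    # ends in a heavy loss penalty
print(evaluate(pause_forever))   # 0 -- "winning" by never finishing the game
```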

Military milestones

The debate over whether to use AI to cause mortal harm has accelerated in recent years, driven by a wave of Pentagon investment. The Defense Department’s unclassified budget requested $927 million for artificial intelligence, including weapons development, in 2020 and $841 million for 2021. The Defense Advanced Research Projects Agency, a key birthplace of advanced military technologies, plans to spend $2 billion on AI over five years, concluding in 2023.

In December the Air Force successfully used artificial intelligence on a U-2 spy plane for the first time. The test limited the AI to managing navigation and radar while a human pilot controlled the jet, but it marked a milestone: AI deployed on an operational aircraft, albeit an unarmed surveillance plane.

The test was spurred by the campaigning of Will Roper, a former longtime defense official who ran weapons buying for the Air Force during the Trump years and was one of the Pentagon’s chief AI evangelists. Roper believes that military planners have to move ahead with testing AI, even if there are many unknowns, because the United States’ competitors are rapidly advancing their own abilities. “I fear our lack of keeping up,” he said during a roundtable with reporters shortly after the spy plane test. “I don’t fear us losing our ethical standards, our moral standards.”

Advanced AI means weapons operating faster, leaving human operators and their molasses reflexes behind. Roper said that because of the way AI capabilities are accelerating, being behind means the United States might never catch up, which is why he’s pushing to move fast and get AI out into combat. “It doesn’t make sense to study anything in the era of AI,” he said. “It’s better to let the AI start doing and learning, because it’s a living, breathing system, very much like a human, just silicon based.”

But while the technology is advancing, the military is still confronting the much larger ethical question: How much control should commanders give machines over the decision to kill on the battlefield? There’s no easy answer. The machines can react more quickly than any human, with no fatigue or war weariness dulling their senses. Korpela and Parsons, a fellow West Point instructor, both served in Afghanistan and Iraq and have seen how human beings in a war zone can be prone to poor decision-making. When close friends are killed in combat, soldiers can and do make the wrong choices about whom and what to target with firepower. Machines, by contrast, don’t get emotional and remain focused, they say.

Wareham and the Campaign to Stop Killer Robots have been working hard to organize tech workers to resist advancing AI for weapons. The effort reflects one substantial difference between AI and most other major military technologies developed in the past century: Nearly all of the advances in AI are brewing in commercial technology companies, not traditional defense contractors. Instead of employees knowingly joining arms makers, they’re working on projects in Silicon Valley that have pieces migrating into weaponry. And those tech companies aren’t completely dependent on the military for work, unlike the defense firms, although the Pentagon money is still a lure.

Few rules

The only rules governing autonomous military weapons were written a decade ago by a mid-level Pentagon official trying to imagine computer capabilities that were just beginning to seem plausible. Paul Scharre, a former Army Ranger who served in Iraq and Afghanistan, was working in the Defense Department’s policy shop as a civilian in 2010 when he was handed the assignment of writing the department’s policy guidelines for AI weaponry.

The policy he helped write, released around the time the Campaign to Stop Killer Robots was being formed in 2012, was meant to make it clear to weapons designers that they could continue their work, Scharre said. His main innovation was a requirement that systems capable of killing on their own be reviewed by a trio of senior Defense Department officials. But the policy didn’t prohibit anything. “At the end of the day, it’s worth pointing out that the directive doesn’t give the answer,” Scharre said. “It doesn’t say this is what you’re allowed to do and not, for all time.”

The Navy eventually abandoned the idea of arming the X-47B, its experimental carrier-based drone. And no other weapon has yet been deemed far enough along to qualify for the special review required by Scharre’s policy, according to knowledgeable current and former officials. But Pentagon officials say the moment is approaching when AI weapons will see combat.

Scharre’s views have evolved over the past 10 years, partly because weapons systems that were merely conceptual back then are now close to reaching the battlefield. He still doesn’t support a blanket ban on autonomous weapons systems, but he has recently embraced the possibility of restrictions on AI weapons that target people, as opposed to tanks, planes and ships.

Some of the future officers working with robotic tanks at West Point had adopted their own wary view of autonomous weapons. After repeated trial and error, they’d made good progress in programming the tanks to slay enemy balloons more efficiently, but many still weren’t convinced that weapons injected with AI are ready to be put in the field.

“It’s still a liability at the end of the day,” said Cameron Thompson, a cadet from Littleton, Colo. “We realize that it’s very good at its job and that we can program it very well. However, I don’t think a lot of people want to take the risk right now of being the first person to put this into an actual environment and see what happens.”
