Can a robot be racist?

Posted by Mike Walsh

9/20/16 12:00 PM


The tragic shootings in Dallas ended dramatically with the unprecedented use of a robotic device to kill Micah Xavier Johnson, the suspect in the attack. Although the technique was far from sophisticated, the improvised use of a bomb-disposal unit to deliver lethal force raises an important question: how far should society go in permitting the police to use lethal robots and drones?

There is a troubling feedback loop between gun culture, racial tension and the militarization of law enforcement in the US. The events in Dallas were heartbreaking for the families of all those involved, and terrifying for families now uncertain about their own safety. Guns are a disaster, but just as concerning is the normalization of technologically enabled violence.

In the Dallas incident, the police deployed a Remotec Model F-5 built by Northrop Grumman Corp, with an explosive device on its manipulator arm. While the robot was not designed for this purpose, the way the Dallas police used it is not so different from the way a military drone might be deployed to eliminate suspected terrorists in a war zone like Afghanistan.

Weaponized drones are controversial, especially on US soil. Part of the problem is that drones are simply not the surgical, precision instruments that eliminate bad guys without collateral damage. As the New York Times reported recently, even inside the government there is no certainty about whom the offshore drone program has killed. Estimates suggest that drone strikes have killed between 64 and 116 civilian bystanders and about 2,500 members of terrorist groups.

Those numbers invite a brutal calculus: is roughly 100 civilian deaths for every 2,500 intended targets, an error rate on the order of 4 percent, a reasonable loss of human life to prevent potential future catastrophe? How does that calculus change when similar lethal technologies are used not by the military in a faraway land, but by the police in your neighbourhood, without a trial?


We reassure ourselves that whenever lethal technologies are involved, there is always a human in the loop. In fact, that is not always practical, nor, in the future, is it even likely.

In conflict zones, in order to evade detection, drones will increasingly need to operate autonomously and execute lethal campaigns without direct human control. China has already built its own autonomous police robot, the Anbot. Bearing a disconcerting resemblance to a Dalek, the Anbot is armed with electroshock weapons and is designed to patrol autonomously and guard against ‘social unrest’. There is even a 300-pound robotic mall cop, the K5 built by Knightscope Inc, although at least one of those models has been placed on suspension after knocking over a small child.

While it may seem strange to imagine autonomous technologies used in policing, over the last few years the deployment of military hardware and tactics by the police has expanded rapidly. In fact, the American Civil Liberties Union found that the value of military equipment used by American police departments has risen from $1m in 1990 to nearly $450m in 2013.

SWAT teams and riot gear are one thing, but the real problems begin when you combine military-grade weaponry with the kinds of machine learning, data and algorithms increasingly found in the criminal justice system. Think about it. Next time you get pulled over on a highway and lock your hands on the steering wheel, who would you rather interact with: a militarized but human RoboCop, or an autonomous, robotic ED-209?

It is a fiction to imagine that AI will prevent discrimination in law enforcement. Software and systems are not objective; they ultimately reflect the design decisions and biases of their creators.

Algorithms are already a key, and controversial, part of the criminal justice system in the US.


ProPublica recently published a report on the use of the Northpointe platform in Broward County, Florida. The platform analyzes various factors to generate a score that reflects an offender’s chance of re-offending within two years. That means that in Broward County, criminal sentences are now based not only on the crimes people have been convicted of, but also, Minority Report style, on the crimes they are deemed likely to commit in the future.

When ProPublica analyzed the risk scores assigned to more than 7,000 people arrested in Broward County in 2013 and 2014, and then checked how many were charged with new crimes over the next two years, they discovered that only 20 percent of the people predicted to commit violent crimes actually did so. Worse, the formula was roughly twice as likely to falsely flag black defendants as future criminals as it was white defendants.
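
To make the nature of that finding concrete, here is a minimal sketch, not ProPublica’s actual code or data schema, of how such an audit might be run. It assumes a hypothetical CSV of defendants with made-up column names: a race label, a boolean predicted_high_risk flag, and a boolean reoffended outcome.

```python
import pandas as pd

# Hypothetical data: one row per defendant, with illustrative column names.
# 'predicted_high_risk' and 'reoffended' are assumed to be booleans.
df = pd.read_csv("risk_scores.csv")

# Precision of the prediction: of those flagged as high risk,
# how many actually went on to re-offend?
flagged = df[df["predicted_high_risk"]]
precision = flagged["reoffended"].mean()
print(f"Share of flagged defendants who re-offended: {precision:.0%}")

# False positive rate by group: among people who did NOT re-offend,
# how often were they nonetheless labeled high risk?
did_not_reoffend = df[~df["reoffended"]]
fpr_by_group = did_not_reoffend.groupby("race")["predicted_high_risk"].mean()
print(fpr_by_group)
```

A score can look reasonably accurate on the first measure while the second reveals that its mistakes fall far more heavily on one group, which is precisely the disparity ProPublica reported.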

Police are facing increasingly well-organized, well-armed and ruthless criminals and criminal organizations. They will need the support of technology to keep both us and themselves safe. But while it is possible to put a rogue officer on the stand and ask them to justify their actions, who is ultimately accountable when an algorithm is found to be systematically biased?

The key point is transparency.

Given the new threat level, a certain degree of techno-militarization of the police force may be inevitable. But as we upgrade the forces that protect us, we also need to rapidly upgrade our legal frameworks to reflect a new model of 21st-century due process.

From prohibitions on milking other people’s cows to limits on how many sips of beer you can legally take while standing, our statute books are full of antiquated laws, relics of legislation that failed to keep pace with a changing world. It would be dangerous to repeat that mistake today by allowing technology and regulation to diverge too widely.

Topics: Technology
