
The Emotionless Future of War Means Even More Killing

This article originally appeared on VICE UK.

Killer robots. If those words fill you with anything other than trepidation, you almost certainly work in military technology (in which case: congratulations on what I assume is a relatively well-paid job. But also: You have an immediate ethical duty to sabotage everything your company is doing).

Developments in AI have made fully autonomous weapons systems—tanks, ships, guns, planes—a technological possibility, and despite some efforts by the UN to prohibit their use, such machines are predicted to play an increasingly prominent role in military conflicts which could well define the fate of the 21st century.

This, at any rate, is one of a number of speculations presented in a Ministry of Defence report published earlier this month, which has been given the, all things considered, rather chillingly blue-skies-y title The Future Starts Today. According to the authors, while "war is inherently a human activity whose character is determined by politics, strategy, society and technology," the increased role of killer AI robots in future conflicts could "change the very nature of warfare," with "less emphasis on emotions, passions, and chance."

This line is immediately followed, in the report, by an image of a robot standing in a field of lavender. I'm not sure how we're supposed to interpret this image. On the one hand: The robot is shot rather plaintively with the sun low in the sky, implying a harmony with nature, thus the world. Robots in military conflicts are good! They help us and mean fewer soldiers will die! On the other: The robot seems to be looking over at something just out of shot, which presumably it is about to destroy. This could be a bad terrorist, or it could be a whole village of innocent people. I guess it depends on what the algorithm has decided.

And that, I suppose, is the problem: Who knows what an AI system, deployed in a military conflict, will decide to destroy? It could be the enemy, or it could be you and everyone you love. Algorithms don't always do what they're supposed to—they tend to internalize the prejudices and biases of the past, which can lead to unpredictable side-effects, depending on how rotten we've historically been. Which, if we're talking about military action here, seems... very rotten indeed. Yet another reason why the traditional sci-fi proscription against robots capable of violence toward human beings is a deeply sensible one.

That said, the report isn't totally blasé about the possibility of autonomous weapons playing a central role in future conflicts. At least initially, its authors suggest, "machines capable of combat are likely to be used on the battlefield under close human supervision." This would presumably limit their potential to go haywire and kill us all (the killing will be contained in the normal way—limited only to the targets human beings have identified). But "as confidence in the machines capabilities grow [sic], they could be employed further away from human supervision." This is likely to have "profound" ethical consequences. "A machine," the authors admit, "may take no account of human suffering and might all too readily resort to violence."

But that's only the worst-case scenario. Alternatively, the authors speculate: "free from ego, hubris, and nationalistic sentiment, an artificial intelligence might more rationally calculate the probable costs of conflict, thus preventing states from making rash decisions... The potentially cool, hyper-logical calculations of machines could remove passion from war... violence might become a diminishing part of conflict." That might sound good, but we're talking about fucking autonomous weapons. Why wouldn't you be worried about this?

Killer robots are not the primary focus of The Future Starts Today. Rather, the report gives a broad survey of a world threatened by mounting crises. "The world is becoming ever more complex and volatile. The only certainty about the future is its inherent uncertainty." Climate change in particular will be a driver of conflict, with the risk of drought and famine increasing the likelihood of riots, conflicts, and the displacement of peoples. Inequality and poverty will increase as growth in the developed world grinds to a halt—potentially, the super-rich may be able to access technology that will allow them to reverse the aging process and thus effectively live forever. But, for others, poverty, malnutrition, and mental illness are all that await. Social media bubbles and "echo chambers" will constitute a growing political threat.

This all seems pretty much right: So what should we do about it? Identify the sources of these problems in our self-destructive economic system? Develop new models of ownership that will allow us to dismantle it? Er... well, this never really comes up. Rather, the report appears to conceive of the role of "security" as keeping things, as much as possible, as they presently are, "seizing opportunities" while "mitigating risks." What this means in practice is attempting to maintain a "multilateral" world order driven by rules-based co-operation between powerful states. Such an order can work together to develop solutions to climate change—which seems good (although weren't they supposed to be doing that already, with the Paris Agreement?). But it also means doing things like limiting the number of people who are able to emigrate to the developed world. This seems, to put it mildly, problematic.

Climate change is already one of the biggest drivers of migration. It is already leading to a politics based on increasingly hard borders between the developed and developing world. As the worst effects hit, remaining in certain parts of the world will mean death. So: What is our responsible "multilateral" order likely to do? Give these people the shelter they need? Not if current trends continue.

Why, in this context, would the world's governments want to invest in killer robots? There's an obvious answer: To make the process of policing those displaced by climate change easier. The philosopher Theodor Adorno, in his essay Education After Auschwitz, argues that the Holocaust was only possible because it was managed in a "cold," detached way, with the perpetrators largely shielded from the human reality of what they were doing by a "veil of technology." All they did was oversee a process and manage a machine, which did all the killing for them. "Less emphasis on emotions, passions, and chance" can quite easily mean more violence, not less.


Follow Tom Whyman on Twitter.

