Killer robots are here

An excellent article on the current state of autonomous weapons of war and on the need to ban them.

An excerpt:

Motherboard:
Where exactly do you draw the line between what is and is not acceptable within a system that will continue to be outfitted with autonomous features?

Peter Asaro:
There’s a very clear line to be drawn when it comes to using lethal force. It’s fine to automate transportation, surveillance and other military tasks. But when you talk about using or releasing a weapon, you really want a human who has situational awareness and who is aware of context and who is able to discern that the target is valid before the weapon is used. Otherwise you’re giving free rein for automation to decide for itself what is a target.

The problem is not just whether it is precise or accurate. Maybe the technology can progress in terms of its accuracy and precision, but the deeper question is, can it assess the value of a target? Is it really a threat? Is the use of lethal force really necessary? How has its military value changed in light of the unfolding battle? Is the value of that military target high enough that we’re willing to risk civilian lives? How many? It requires a sophisticated strategic understanding of what’s going on—risk assessments and other judgments that computers aren’t equipped to make right now. It’s not easy to write an algorithm to do that, and it may be impossible.

Also, there’s a lack of accountability and responsibility for whoever told the robot to go do something. The reality is that if the robot does something really bad, you can’t hold it accountable in criminal court. You lose the deterrent effect and any identifiable accountability, and you have killing going on that’s unaccountable. That’s a huge problem.

Motherboard:
What implications does this have for conventional notions of human accountability? What sort of moral and philosophical quandaries do killer robots present?

Peter Asaro:
There’s a question of whether they can even conform to laws. And for us, is having no accountability in and of itself acceptable? That’s a moral question—whether it’s permissible at all to delegate the authority to a machine to kill a human being.

Generally, humans killing humans is acceptable when it’s self-defense. But in that case it’s a human estimation, and it’s always a judgment call.

It’s difficult when even uniformed armed combatants engaged in warfare are not always legitimate enemy targets. There’s a moral quality: if it’s not necessary to kill your enemy in a given situation, then it’s morally wrong, even if it’s legal. Is it ever legitimate for a machine to decide who lives and dies? I think the answer is no, no matter how sophisticated the computer is.

Source: Meet the Man Behind the Push to Ban Killer Robots
Via: http://azspot.net/post/52700186534/its-fine-to-automate-transportation-surveillance