Hands Up, Don’t Shoot!: HRI and the Automation of Police Use of Force

This paper considers the ethical challenges facing the development of robotic systems that deploy violent and lethal force against humans. While the use of violent and lethal force is not usually acceptable for humans or robots, police officers are authorized by the state to use violent and lethal force in certain circumstances in order to…

From Killer Machines to Doctrines and Swarms, or Why Ethics of Military Robotics Is not (Necessarily) About Robots

Ethical reflections on military robotics can be enriched by a better understanding of the nature and role of these technologies and by situating robotics in context in various ways. Discussing a range of ethical questions, this paper challenges the prevalent assumptions that military robotics is about military technology as a mere means to an end,…

The responsibility gap: Ascribing responsibility for the actions of learning automata

Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation in which the manufacturer/operator is in principle no longer capable of predicting the machine's future behaviour, and thus cannot…