Abstract

If robots are to be trusted, especially when interacting with humans, then they will need to be more than just safe. This paper explores the potential of robots capable of modelling, and therefore predicting, the consequences of both their own actions and the actions of other dynamic actors in their environment. We show that with the addition of an ‘ethical’ action selection mechanism, a robot can sometimes choose actions that compromise its own safety in order to prevent a second robot from coming to harm. An implementation with e-puck mobile robots provides a proof of principle by showing that a simple robot can, in real time, model and act upon the consequences of both its own and another robot’s actions. We argue that this work moves us towards robots that are ethical, as well as safe.

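The abstract describes the core mechanism only at a high level: candidate actions are evaluated with an internal model that predicts their consequences for both robots, and the selection rule can prefer the other robot's safety over the acting robot's own goal. The sketch below illustrates that selection pattern in a deliberately toy setting; the one-dimensional world, the `simulate` model and the position constants are assumptions made for illustration, not the consequence engine used on the e-pucks in the paper.

```python
# Minimal sketch of consequence-based 'ethical' action selection in a toy
# one-dimensional world. All names, dynamics and positions are illustrative
# assumptions, not the paper's e-puck implementation.
from dataclasses import dataclass
from typing import Iterable, Tuple

HOLE = 6    # hypothetical hazard position
GOAL = -2   # hypothetical goal position for the acting robot


@dataclass
class Outcome:
    other_safe: bool     # does the second robot avoid the hazard?
    robot_safe: bool     # does the acting robot avoid the hazard?
    robot_at_goal: bool  # does the acting robot reach its own goal?


def simulate(target: int, other_pos: int) -> Outcome:
    """Crude internal model: the acting robot moves to `target`, while the
    second robot keeps heading towards the hazard and is only stopped if the
    acting robot ends up on its path before the hole."""
    intercepted = other_pos < target < HOLE
    return Outcome(
        other_safe=intercepted,
        robot_safe=target != HOLE,
        robot_at_goal=target == GOAL,
    )


def select_action(candidate_targets: Iterable[int], other_pos: int) -> int:
    """Prefer outcomes where the other robot is safe, then where this robot
    is safe, then where the robot reaches its own goal (lexicographic)."""
    def score(t: int) -> Tuple[bool, bool, bool]:
        o = simulate(t, other_pos)
        return (o.other_safe, o.robot_safe, o.robot_at_goal)
    return max(candidate_targets, key=score)


if __name__ == "__main__":
    # Candidate next waypoints: head for the goal, stay put, or intercept.
    # With the other robot at 3 and the hole at 6, intercepting at 5 wins
    # even though it abandons progress towards the robot's own goal.
    print(select_action([GOAL, 0, 5], other_pos=3))  # -> 5
```

The lexicographic scoring is just one simple way to encode the priority ordering implied by the abstract; the paper's actual consequence engine and safety/ethical logic may differ.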

Type

Conference Paper

Pages

85-96

ISBN

978-3-319-10400-3, 978-3-319-10401-0

Creators

Winfield, Alan F. T. (Author)
Blum, Christian (Author)
Liu, Wenguo (Author)

Publisher

Springer, Cham

DOI

10.1007/978-3-319-10401-0_8


Professional Fields