Abstract

Artificial intelligence and robotics are rapidly advancing in their quest to build truly autonomous agents. In particular, autonomous robots are envisioned to be deployed into our society in the not-so-distant future across many different application domains, ranging from assistive robots in healthcare settings to combat robots on the battlefield. Critically, all of these robots will have to make decisions on their own to varying degrees, as the attribute 'autonomous' implies. While these decisions might often be in line with what the robots' designers intended, I take it to be self-evident that there can, and likely will be, cases where robots make inadequate decisions. This is because the world is open: new entities and events appear that robot designers could not have anticipated (e.g., Talamadupula et al., 2010). And even if designers respond to the world's openness by endowing their robots with the ability to adapt to new situations and acquire new knowledge during operation, the problem only worsens, because learning capabilities leave even less control in the hands of the designers and thus widen the possibility for inadequate decisions.
Type

Journal Article

Date

2014

Creators

Scheutz, Matthias (Author)

Professional Fields