Abstract

A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building sensitivity to the values, ethics, and legality of activities into AI systems. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing artificial morality and the differing criteria for success that are appropriate to different strategies.


Type

Journal Article

Volume

7

Date

2005-09

Creators

Allen, Colin (Author)
Smit, Iva (Author)
Wallach, Wendell (Author)

Issue

3

DOI

10.1007/s10676-006-0004-4

