dc.contributor.author Sullins, John P. en
dc.date.accessioned 2012-09-24T16:42:21Z en
dc.date.available 2010-04-20T17:25:34Z en
dc.date.issued 2006-12 en
dc.identifier.citation Sullins, John P. "When Is a Robot a Moral Agent?" IRIE: International Review of Information Ethics 6 (12/2006): 23-30 en
dc.identifier.issn 1614-1687 en
dc.identifier.uri http://hdl.handle.net/10211.1/427 en
dc.description.abstract The author argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when one can analyze or explain the robot's behavior only by ascribing to it some predisposition or 'intention' to do good or harm. And finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent. Robots that meet all of these criteria will have moral rights as well as responsibilities regardless of their status as persons. en
dc.publisher IRIE: International Review of Information Ethics en
dc.subject robots en
dc.title When Is a Robot a Moral Agent? en
dc.type Article en
dc.relation.journal IRIE: International Review of Information Ethics en
dc.contributor.sonomaauthor Sullins, John P. en
