A Robust View of Machine Ethics

Steve Torrance

Should we be thinking of extending the UN Universal Declaration of Human Rights to include future humanoid robots? And should any such list of rights be accompanied by a list of duties incumbent on such robots (including, of course, their duty to respect human rights)? This presents a momentous ethical challenge for the coming era, in which human-like agents may proliferate. A robust response to such a challenge says that, unless such artificial agents are organisms rather than mere machines, and are genuinely sentient (as well as rational), no sense can be made of the idea that they have inherent rights to moral respect from us, or that they have inherent moral duties towards us. The further challenge would be to show whether this robust response is wrong, and if so, why. The challenge runs especially deep, since certain plausible views on the basis of sentience, teleology and moral status in biologically-based forms of self-organization and autonomy appear to lend support to the robust position.

This page is copyrighted by AAAI. All rights reserved.