The possibility of creating thinking machines raises a host of ethical issues, relating both to ensuring that such machines do not harm humans and to the moral status of the machines themselves. If something goes wrong, who is responsible? Should it be the robot's programmer, designer, or manufacturer, or its human overseer and that overseer's superiors?