Roboethics: The Applied Ethics for a New Science
Gianmarco Veruggio
2012
Abstract
This essay highlights an important ambiguity in the use of the term 'robot ethics', as the phrase has at least three distinct meanings. Firstly, it applies to the philosophical studies and research on the ethical issues arising from the effects of robotics products on our society. In this sense, roboethics suggests the development of a very broad 'applied ethics' which, like the ethical studies related to bioethics, deals with universal, fundamental ethical issues: the need to protect and enhance human dignity and personal integrity, to secure the rights of the weakest, and to limit the "robotics divide" in all those instances in which robotics products could either worsen existing inequalities or create new ones. In this sense, roboethics pertains to all the issues deriving from the relationship between science, technology and society, and it benefits from related studies in Psychology, Sociology, Law, Comparative Religion, and so on. Secondly, it could refer to the moral code which the robots themselves are supposed to adhere to (presumably a morality somehow programmed into them). For any level of robotic autonomy, there will in effect be some code the programmers create that the robot must follow in order to do what it ought to; that will be a moral code for the robots, in this second sense, and it will enable humans to judge that the robot acted morally, in obeying its programmed moral code and doing what it ought to do, or that the robot acted immorally, in doing something it was not supposed to do (that it ought not to have done), whether due to an electro-mechanical glitch, a bug in the software, a lack of foresight about the conditions of its use, or otherwise incompetent programming. But the robot itself is unaware of its own programmed-in morality; it is 'just following orders', whether it does so badly or well. This last consideration leads to yet a third sense of 'robot ethics': it could refer to the self-conscious ability of the robots themselves to engage in ethical reasoning, to understand their choices and responsibilities from a first-person perspective, and to freely, self-consciously choose their course of action. Such an ability would make robots full moral agents, themselves (and not their programmers, designers, or builders) personally responsible for their actions. This third sense of 'robot ethics' would imply that robots have a morality they choose for themselves, not merely one they slavishly, mindlessly must follow; they would share the human trait of self-conscious, rational choice, or freedom.