Robotic Companions: Some ethical questions to consider
Lawrence M. Hinman, Ph.D.

Abstract— Several countries—including Japan and South Korea, and now Norway—have already made formal commitments to encourage the use of social robots to assist in the care of the elderly; others are sure to follow. Such policies raise interesting and important ethical questions for those who design such robots, for policy makers who guide their deployment, and for those who face the prospect of interacting with them during the final years of their lives. The answers to these questions are not immediately evident, and they raise fundamental questions about the role of technology in our lives. I shall argue for the wise and judicious use of robots in this context and caution against some of the dangers inherent in this situation.



Manuscript received January 13, 2009. Lawrence M. Hinman is with the Department of Philosophy, University of San Diego, 5998 Alcalá Park, San Diego, CA 92110, USA (phone: 619-260-4787; email: [email protected]). He is also Co-Director of the Center for Ethics in Science and Technology (phone: 858-822-2647; email: [email protected]).

I. INTRODUCTION

Several countries—including Japan and South Korea, and now Norway—have already made formal commitments to encourage the use of social robots to assist in the care of the elderly; others are sure to follow. Such policies raise interesting and important ethical questions for those who design such robots, for policy makers who guide their deployment, and for those who face the prospect of interacting with them during the final years of their lives. The answers to these questions are not immediately evident, and they raise fundamental questions about the role of technology in our lives. I shall argue for the wise and judicious use of robots in this context and caution against some of the dangers inherent in this situation.

In the following remarks, I shall concentrate on several areas of concern. First, as robotic companions become more common, how will this transform our idea of filial responsibility? Robots may take over some of the responsibilities of adult sons and daughters in caring for the elderly, a phenomenon already seen in nursing home companions. They may become surrogates for the adult sons and daughters. Should we design them with voices, gestures, and appearances that mimic real people in the lives of the elderly?

Second, in what ways will the use of robotic companions transform our expectations of ordinary human beings when juxtaposed to robotic companions? Those with robotic companions may find that robots are, within their limits, extraordinarily responsive and patient. How will this transform the expectations of the elderly toward human beings, who are often less patient and compliant? Will the elderly come to prefer the companionship of robots to that of their own offspring?

Third, to what extent should companion robots be designed to be honest? Many of us, including the elderly, like to hear positive comments, especially about ourselves. Should robots be programmed to be positive, even when this involves telling lies? To what extent should we trust robots to tell us the truth?

Fourth, what sexual dimension should companion robots have? Over the millennia, humans—probably especially men—have found plenty of substitutes for human sex partners. Should robots be designed to provide sexual stimulation or satisfaction to others? Is this a form of pornography, perhaps raised to the next level? Even more fundamentally, should robots be gendered? If so, what would that mean? Would it be heterosexual, gay, lesbian, transgendered, bisexual, or yet something else?

Fifth, to what extent will companion robots be fungible? A dollar bill, a Euro, or a 1000 yen note are all fungible: one dollar bill is as good as another dollar bill and, as dollar bills, they are completely interchangeable. Human beings, at the opposite end of the spectrum, are not fungible. I may love my wife dearly, but we could not substitute her twin sister for her without raising serious issues. Persons are not fungible. Where, along this continuum, are sociable robots to be placed? Is one instance of a particular model interchangeable with any other identical instance? This leads to a much more fundamental question about robotic self-consciousness and the importance of a narrative self.
These questions should not cause us to abandon the work of creating sociable robots, but they should point to a common question: how can we create a good life together with robots? It is this question that should guide us in the future and that provides the organizing principle for this paper.

II. FILIAL RESPONSIBILITY

A. The Shifting Responsibility

Human life is constituted through some very basic relationships, and certainly the relationship between child and parent is one of the most important of these. Typically, this relationship exhibits a certain kind of shifting symmetry. Initially the child is completely dependent on the parent. As the child grows toward adulthood, the degree of dependence decreases. The middle years are characterized by a relative lack of dependence on either side: the young adult child and the older adult parent each has an independent life. As the parent moves into old age (or earlier, in the case of serious illness), the parent becomes the dependent and the child assumes the role of caregiver.

Some philosophers, such as Jane English [1], maintain that adult children do not owe their elderly parents anything; whatever adult children choose to do for their parents is done out of friendship, not moral obligation. Other philosophers, however, see this relationship as one of shifting moral responsibilities. The bioethicist Daniel Callahan, for example, argues that adult children do have a moral obligation to care for their parents, and that "the moral ideal of the parent-child relationship is that of love, nurture, and the mutual seeking for the good of the other…mutual respect and reciprocity have been a central part of the moral standard" [2].

B. Is This a New Problem?

The interesting ethical question that arises here is whether, and under what conditions, such filial responsibilities can be transferred to artificial agents (AAs). Clearly in some cases such responsibilities can be transferred.
When the adult child is unable to discharge the responsibility to the parent because of illness, military deployment, or other similar conditions, and when there are no siblings available to shoulder a greater portion of the responsibility, then we can readily imagine arranging for a surrogate to take over this responsibility. In today's world, even without extenuating circumstances, adult children often turn to surrogate human caregivers in assisted living facilities to discharge this responsibility on their behalf. The transfer or reassignment of responsibility is not a new phenomenon per se.

C. The Unique Dimension of Robotic Caregivers

The reassignment of responsibility to an artificial agent, however, is a new phenomenon. If Callahan is correct, the relationship between child and parent is one of mutual respect and reciprocity. One could imagine the relationship with a human surrogate as still retaining these elements of mutual respect and reciprocity, but could mutual respect and reciprocity be present in a relationship between the parent and an artificial agent?

These questions lead to an even more fundamental question about the level of possible sophistication in the design of robotic companions. What would it mean to create an artificial agent capable of mutual respect and reciprocity? I shall return to this question below.

III. TRANSFORMING EXPECTATIONS

A. Shifting Expectations

The second major question to be posed here about robotic companions is this: how will our interactions with robotic companions affect our interactions with ordinary human beings? Consider an example. It is possible to program a robotic companion to be infinitely patient. Presumably we do not want to make such patience truly infinite, for that runs the risk of having a robotic companion caught in an unending "if…then…" loop. However, it is certainly imaginable that the robotic companion's patience could far exceed the possible patience of its human counterpart. This example leads to some interesting questions for programmers creating robotic companions and also for the elderly who are cared for by such companions.

B. Considerations for Programmers

One of the considerations in designing robotic companions concerns the extent to which such companions should "push back" against their charges, that is, the extent to which the AAs should refuse to be simply compliant toward those for whom they are caring. One argument in support of AA-based resistance to demands for care runs as follows. If a human lives in a world in which some caregivers show unlimited patience toward his or her demands, this may foster an expectation that human beings should also exhibit such patience. However, human beings are loci of needs and interests that constitute, as it were, a center of gravity exerting a natural resistance to the demands coming from others. A human being cannot simply serve the needs of others. As Thomas Hill has noted [3], doing so leads to servility, a moral failing in which individuals fail to respect themselves properly. Do artificial agents have a comparable moral center of gravity? Do they have interests independent of serving those to whom they have been assigned?
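The design choice at stake here, bounded rather than truly infinite patience, can be made concrete with a toy sketch. The following Python fragment is purely illustrative; the class and method names (PatienceBudget, should_comply) are hypothetical assumptions, not drawn from any real robot-care system. It simply caps compliance with a finite "patience budget" so that a repeated demand eventually meets pushback instead of an unending loop:

```python
# Hypothetical sketch: a compliance policy with a finite "patience budget."
# All names here are illustrative assumptions, not part of any real API.

class PatienceBudget:
    """Tracks repeated demands and decides when to push back."""

    def __init__(self, limit: int = 3):
        self.limit = limit              # repetitions tolerated before pushback
        self.counts: dict[str, int] = {}

    def should_comply(self, request: str) -> bool:
        """Comply until a request has been repeated `limit` times,
        then decline (push back) rather than serve indefinitely."""
        self.counts[request] = self.counts.get(request, 0) + 1
        return self.counts[request] <= self.limit


budget = PatienceBudget(limit=3)
responses = [budget.should_comply("fetch my slippers") for _ in range(5)]
print(responses)  # first three requests complied with, then pushback
```

The single `limit` parameter stands in for the programmer's judgment call discussed above: setting it very high approximates the servile companion, while a low value gives the artificial agent something like a "center of gravity" of its own.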
Again we find ourselves returning to the question of the relationship between caregiver and cared-for: is this a reciprocal relationship of respect, or a one-way relationship of servitude to the needs of the other?

C. Considerations for Those Who Are Cared For

While it is proper to focus on the design aspects of robotic companions in the context of an IEEE conference,

it is also worth mentioning that this issue raises important questions about what expectations are appropriate on the part of the cared-for individual. Should the elderly, in other words, even want a robotic companion to be servile? There are at least two ways of answering this question.

First, it is possible to highlight the indirect negative consequences of such expectations of servility. If robotic companions provide an unhesitatingly servile or compliant response, will this lead the elderly to expect the same from the world of human beings? If so, this would seem to be an undesirable outcome, because few human beings would be either able or willing to play such a compliant or servile role.

Second, this issue sheds light on a basic distinction between two kinds of beings: mere things and persons. A cane or a walker is a mere thing, and as such it can properly be used in a purely instrumental way. Yet if we look at human beings, we see—as the German philosopher Immanuel Kant argued [4]—that they are not mere things; they cannot be used purely as means to an end. Rather, they must also be treated as ends-in-themselves, that is, as beings that have their own independent interests and dignity, as autonomous entities. Such entities can provide a distinctive kind of mutual recognition of one's own autonomy.

The issue for those who are cared for is this: we cannot consistently treat robotic companions merely as a means to our own ends and at the same time expect from them the kind of recognition we crave in human relationships, the kind of recognition we might desire from a filial companion. A robotic companion can be a mere crutch, but if it is, it cannot be a filial surrogate. Thus once again we return to our original question about filial responsibility and surrogacy, but this time we come to the issue from the perspective of the person being cared for rather than from the perspective of the adult child transferring filial responsibility to a robotic surrogate.
Robotic companions can be either elaborate tools, purely instrumental entities, or autonomous artificial agents, entities with interests and dignity. However, they cannot be both simultaneously.

IV. HONESTY AND TRUST

Some of these same issues recur yet again when we pose the question of how honest robotic companions should be, and the analysis of this question will help us move further into the domain of programming ethical robotic companions. How should robotic companions react to the expressions of emotion, statements, and requests of those to whom they provide care? The issue branches along two paths, one about honesty, the other about compliance. In this section, we will consider the issues of honesty; in the next, compliance.

Let's begin by considering three models of robotic honesty in relationships.

A. The Kantian Robot: Always Tell the Truth

The German philosopher Immanuel Kant is famous, perhaps infamous, for his injunction—in his essay "On the alleged right to lie" [14]—that we ought never to tell a lie. (As H. J. Paton has pointed out [15], this position may have been at variance with the overall position of his earlier writings.) The advantage of such an imperative, at least from the standpoint of a programmer, is that it is simple to implement: all robotic companion responses ought to be completely honest, not modified in any way. Imagine an elderly patient asking a robotic companion, "How do I look today?" Presuming it has the capability to provide an answer, the Kantian robotic companion would presumably answer straightforwardly and honestly, perhaps with a generic vocabulary ("wonderful," "good," "about the same as yesterday," "poor," "sick," "dying") or more specific descriptions ("your color has improved since yesterday, but I note that you still have circles under your eyes").

B. The Discreet Robot: A Note on Honesty, Truth, and Completeness

It is helpful to keep in mind here that we are dealing with the honesty of the robotic companion's responses, that is, whether the responses themselves represent the best state of knowledge of the robot, not with their accuracy or truth-value. Honest responses are not necessarily true. Robots, like humans, can be mistaken in their statements. The issue of completeness of response adds a third category here, one that is much more difficult to parse. An example can make this clearer. If an elderly person asks a robotic companion, "How do I look today?", the robot may utter only truthful statements ("Your hair is nicely coiffed today") while withholding truthful statements that may be perceived as negative ("you look like you've gained more weight"). In other words, we may imagine a discreet robot.

C. The Supportive Robot: Always Saying Positive Things

One could imagine a robotic companion that models itself on a cheerleader, always providing positive feedback to the person cared for, even when such feedback not only omits negative truths but also includes falsities. For example, the robotic caregiver may tell someone in its charge that the person looks good, even when the robot's incoming information indicates otherwise.

D. Trust

There is an obvious problem with the supportive robot: after a few falsehoods, the robot's credibility will decrease. The elderly, at least those not desperate for approval, will

soon stop trusting the robot as a source of support. The issues that gradually emerge here mirror those that arise in discussions of paternalism in medicine. At what point, for example, may a physician withhold bad medical news, especially when nothing can be done to remedy the situation? Half a century ago, such withholding would have been commonplace; now, it is widely considered immoral, and in many jurisdictions illegal, to withhold such information. We can easily see that the supportive robot, in the long run, is going to be self-defeating: its support will no longer have its desired effect. This is an example of one of the ways in which ethics can come from a "bottom-up" approach of the type discussed by Wallach et al. [10]. Yet are robotic companions to be held to similar codes of ethics?

E. Modulating Truthfulness and Trust

We may well find that robots that simply tell us what (they think) we want to hear are ultimately self-defeating. However, robots that tell the truth in an unvarnished and sometimes harsh way lack something else: a (moral and psychological) sensibility to the feelings of others, the absence of which marks the robot's interactions as distinctly non-human. Reflecting on how to program a robotic companion to tell the truth appropriately prompts us to think more explicitly and critically about our own purely human moral codes, a point that Wendell Wallach and Colin Allen stress in their Moral Machines [11].

V. ROBOTS AND SEXUALITY

Let us turn now to the question of compliance and examine a specific aspect of that issue. The more general issue is this: to what extent should companion robots comply with the expressed wishes of those for whom they care? The more specific issue is this: to what extent, if any, should robotic companions provide sexual satisfaction to those for whom they care? David Levy [18] has explored the area of robots and sex in great detail, and the possibility of robotic sex raises interesting and disturbing issues for robot designers [17].
Several questions merit attention. First, there is a possible moral advantage to be gained by introducing robotic sexual companions. If the availability of robotic sex results in a reduction in the number of girls and women forced or pressured into lives of prostitution, then the availability of robotic sexual companions would seem to lead to less harm to human beings.

Second, the availability of robotic sexual companions may lead to changing attitudes and expectations toward human sexual partners. Presumably in human-human sexual relations, the relationship is roughly symmetrical. In human-robot sexual relations, in contrast, the relationship would presumably be asymmetrical, with the robotic sexual companion programmed to please the human partner. Put bluntly, will human beings come to be seen as less pleasing sexual companions than their robotic counterparts? This is a variation on a theme already introduced above: the extent to which robotic caregivers could spoil us for the demands of human-human relationships.

VI. FUNGIBILITY AND ROBOTIC IDENTITY

To what extent are robotic caregivers fungible? There are certain areas in which fungibility seems completely appropriate. One dollar bill is completely interchangeable with any other dollar bill as legal tender. (Occasionally, some particular dollar bill may have a sentimental meaning to some individual because it is, for example, the first dollar their business ever earned.) If a dollar bill is the paradigmatic case of a fungible object, then human beings would seem to be paradigmatically non-fungible. Even with identical twins, one cannot substitute one for the other. Each is unique.

Robotic identity can be constructed in several different ways, and this affects the fungibility question. First, the construction of such an identity requires that robots identify their incoming data as their data. Briefly put, this is what Immanuel Kant had in mind in his discussion of the transcendental unity of apperception in the first half of the Kritik der reinen Vernunft. All incoming data must be labeled with a date and location stamp (the forms of intuition, space and time), and then further refined by the use of certain a priori rules, such as the notion of a physical object and the notion of cause-and-effect (the categories of the understanding). For a robot to have an identity, its "experiences" must be brought together as elements of "my experience." This raises several interesting questions, two of which I will mention here.
First, there is a clear sense in which a robot's identity—the sum of its memories of itself and its interactions—can be transferred without informational loss from one computer to another. Does this mean that we could come up with a replacement for a robot that, if it contains the same memories, would be completely interchangeable with the original? This is one of the key questions about robotic identity, and one of the ways in which robots would seem to differ from human beings.

Second, the human sense of identity—at least in the West—presupposes a degree of individualism that might not be present in other species. Bees, for example, appear to work for the common good of their hive to such an extent that group identity takes precedence over individual identity. It certainly seems possible that we may choose to give some robots a sense of group identity as the principal focal point of identity. One could imagine, for example, robotic soldiers that participate in a group identity as their primary source of self. Robot swarms are an excellent example of this. To what extent could robots simply have a group identity as their primary identity? If this were to occur, robotic identity would be vastly different from human identity, and it is possible that individual robots would be fungible precisely as parts of the larger herd.

VII. CONCLUSION

In the preceding considerations, we have begun to explore the ways in which relationships could develop between human beings and the robotic companions entrusted with their care, as well as the broader ways in which this may transform relationships among human beings.

REFERENCES









[1] J. English, "What do grown children owe their parents?"
[2] D. Callahan, "What do children owe elderly parents?" Hastings Center Report, April 1985, p. 32.
[3] Thomas E. Hill, Jr., "Servility and Self-Respect," in Autonomy and Self-Respect (Cambridge: Cambridge University Press, 1991).
[4] Immanuel Kant, Groundwork of a Metaphysics of Morals.
[5] Kari Gwen Coleman, "Android Arete: Toward a virtue ethic for computational agents," Ethics and Information Technology, vol. 3, no. 4 (2001), pp. 247-265.
[6] Stephen Petersen, "The ethics of robot servitude," Journal of Experimental & Theoretical Artificial Intelligence, vol. 19, no. 1 (March 2007), pp. 43-54.
[7] Colin Allen, Iva Smit, and Wendell Wallach, "Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches," Ethics and Information Technology, vol. 7, no. 3 (September 2005), pp. 149-155.
[8] Luciano Floridi and J. W. Sanders, "On the Morality of Artificial Agents," Minds and Machines, vol. 14, no. 3 (August 2004), pp. 349-379.
[9] James Gips, "Towards the ethical robot," in Android Epistemology, edited by Kenneth M. Ford, Clark Glymour, and Patrick J. Hayes (Cambridge, MA: MIT Press, 1995), pp. 243-252.
[10] Wendell Wallach, Colin Allen, and Iva Smit, "Machine morality: bottom-up and top-down approaches for modeling human moral faculties," AI & Society, vol. 22 (2008), pp. 565-582.
[11] Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (New York: Oxford University Press, 2008).
[12] Carson Reynolds and Masatoshi Ishikawa, "Robot Trickery," International Workshop on Ethics of Human Interaction with Robotic, Bionic, and AI Systems: Concepts and Policies, October 17-18, 2006, Naples, Italy.
[13] H. J. Paton, "An Alleged Right to Lie," Kant-Studien 45 (1953-54).
[14] Immanuel Kant, Groundwork of the Metaphysics of Morals (Cambridge Texts in the History of Philosophy), translated and edited by Mary Gregor, introduction by Christine M. Korsgaard (New York: Cambridge University Press, 1998).
[15] Immanuel Kant, Grounding for the Metaphysics of Morals, with On a Supposed Right to Lie Because of Philanthropic Concerns, translated by James W. Ellington (Indianapolis: Hackett, 1993).
[16] Lawrence M. Hinman, Ethics: A Pluralistic Approach to Moral Theory, 4th ed. (Wadsworth, 2007).
[17] Charles Q. Choi, "Sex and marriage with robots? It could happen," MSNBC, October 12, 2007.
[18] David Levy, Love and Sex with Robots: The Evolution of Human-Robot Relationships (New York: Harper, 2008).