This is a "postprint", the authors' manuscript as submitted after peer review to a scientific journal. Citations should refer to the published version of the article, once it exists.
Artificial Moral Agency: Philosophical Assumptions, Methodological Challenges, and Normative Solutions

Authors: Dorna Behdadi, Christian Munthe
Department of Philosophy, Linguistics and Theory of Science, University of Gothenburg. Email: [email protected]

Abstract
The field of "machine ethics" has raised the issue of the moral agency and responsibility of artificial entities, such as computers and robots, under the heading of "artificial moral agents" (AMA). In this article, we work through the philosophical assumptions and the conceptual and logical variations on all sides of this debate, and identify methodological as well as conceptual problems. A main conclusion is that many of these problems can be better handled by moving the discussion into more outright normative ethical territory. Rather than locking the issue to the question of AMA, a more fruitful way forward is to address to what extent both machines and human beings should, in different ways, be included in different (sometimes shared) practices of ascribing moral agency and responsibility.

Acknowledgement
This work has been partly undertaken within the Lund-Gothenburg Responsibility Project, funded by the Swedish Research Council (VR), contract no. 2014-40. We are grateful for comments received from our colleagues Bengt Brülde, Leila El-Alti, Ida Hallgren, Sofia Jeppsson, Ben Matheson, Per-Erik Milam, Marco Tiozzo, and Anders Tolland.
1. Introduction

The emerging field of "machine ethics"1 (Anderson et al. 2004) has raised the issue of the moral agency and responsibility of artificial entities, such as computers and robots, under the heading of "artificial moral agents" (AMA) (Allen et al. 2000).2 This debate bears some family resemblance to debates on the moral status of various beings, but rather than concerning who can be wronged, it concerns who can do wrong.3 Can artificial entities be among the latter? In this paper, we review positions and arguments in this debate from an overall philosophical standpoint, and propose areas and issues in need of more explicit attention if the discussion is to move ahead. Our contribution to the AMA debate is therefore first and foremost methodological, or meta-philosophical, and we argue that existing stalemates and weaknesses in the mainstream debate in this area motivate a move into more explicitly normative territory. The question in focus in the AMA debate can be formulated as follows: What, if anything, is necessary and sufficient for an artificial entity to be a moral agent (an AMA)?
1 Not to be confused with computer ethics, which focuses on ethics in relation to the use of computers, nor with roboethics, which is concerned with the behavior of humans when designing and using artificial entities. See van Wynsberghe and Robbins (2018); Yampolskiy (2013a); Yampolskiy (2013b); Wallach and Allen (2008); Dodig-Crnkovic and Persson (2008); Noorman and Johnson (2014).
2 For simplicity's sake, the terms artificial entity, machine, computer and robot will be used interchangeably (even though 'entity' encompasses non-physical AI programs and not just robots and other embodied AI).
3 These two questions can be linked by a normative ethical theory that claims moral agency to be a criterion of moral considerability (see, e.g., Korsgaard 2004). If such a theory is assumed, addressing the issue of AMA will include addressing the question of artificial moral status (e.g. Bryson 2010; Yampolskiy 2013b; Gunkel 2014). In the present article, we will not make such an assumption. However, our conclusions from reviewing the AMA debate may provide reason to scrutinize this sort of assumption in a new and more outright normative context. See the final section of this paper.
This issue has spurred discussion both on the general nature of (human) moral agency and on the relationship between moral agency and various other properties (such as consciousness, intentionality, free will, rationality and responsibility). Closely linked to this discussion is an epistemic debate regarding what may justify ascription of moral agency, and what practical stance with regard to AMA should be taken in light of that. The main lines of disagreement in the AMA debate are charted in section 2, where we note a number of underlying assumptions of importance for what concept of moral agency is deemed relevant to the AMA issue. We then critically review how these underlying assumptions affect the AMA debate, addressing ideas that moral agency involves consciousness (section 3), rationality (section 4), free will and autonomy (section 5), and moral responsibility and attributability (section 6). The overall impact of our findings is then discussed in section 7. One conclusion regarding the shape of the AMA debate is that while it continues to be haunted by the notion of phenomenal consciousness as a game changer for the AMA issue, this is for increasingly unclear reasons. At the same time, confusion regarding central concepts creates much uncertainty about what positions are actually incompatible and to what extent opponents are even addressing the same question. Methodological assumptions regarding the point of the discussion are also in need of development. We suggest as a remedy that the AMA debate focus on straightforward normative ethical issues about the extent to which artificial entities of different kinds and in different contexts should or should not be included in different types of human practices of moral appraisal and responsibility attribution. Such a focus at the same time creates a demarcation
problem for the AMA discussion that is somewhat reminiscent of the one infesting discussions about the ascription of moral status or considerability to non-human beings. We suggest that this problem may be easier to handle in the AMA area, but that this remains to be demonstrated by systematic normative ethical inquiry.

2. Main Themes of the AMA Debate

The center of the AMA debate is an opposition between two conceptions of human moral agency, which seemingly produce different prospects for the idea of AMA. These are the so-called standard view and the functionalist view, respectively. Both come in different variants and, as we will see, there may be reason to debate the extent to which all of these truly conflict. Nevertheless, the starting point of the debate is the idea that these two general conceptions of moral agency are, first, incompatible and, second, imply different room for the notion of AMA. The standard view of human moral agency is that moral agents need to meet certain criteria of rationality,4 free will or autonomy, and phenomenal consciousness. Functionalism, here, refers to views on moral agency requiring only the presence of certain behaviors and reactions that are functionally equivalent (Wallach and Allen 2008) to the behaviors and reactions which advocates of the standard view would regard as mere indicators of the standard criteria.
4 Few proponents of a standard view in the machine ethics debate actually mention rationality
explicitly, but it is nevertheless incorporated in their views on what capacity for decision-making is required for moral agency. See further below.
Deborah Johnson is a main proponent of the standard view in the AMA debate. She stipulates the following conditions for the moral agency of an entity, E (Johnson 2006):

1. E causes a physical event with its body.
2. E has an internal state, I, consisting of its own desires, beliefs and other intentional states that together comprise a reason to act in a certain way (rationality and consciousness).
3. The state I is the direct cause of 1.
4. The event in 1 has some effect of moral importance.

Of these conditions, 1 (and possibly 2) ensure that E is an agent, broadly speaking, while 2 and 3 add what is needed for the presence of moral agency of the type we ascribe to human beings. Condition 4 adds something to ensure that a particular instance of behavior is morally relevant, so that E not only possesses moral agency, but actually discharges it in a situation. This last point, however, has no bearing on the actual AMA debate (which concerns the possession of moral agency), so in the following we will ignore it. Johnson sums up this view by describing how it serves to divide behaviors and beings:

... [a]ll behavior (human and non-human; voluntary and involuntary) can be explained by its causes, but only action can be explained by a set of internal mental states. We explain why an agent acted by referring to their beliefs, desires, and other intentional states. (Johnson 2006, p. 198)
Johnson’s main objection against the idea of AMA depends directly on conditions 2 and 3. Artificial entities are unable to be moral agents since they lack the internal
mental states that could have caused the events that would then have been their actions. Although they may 'act' in the sense of exhibiting behaviors reminiscent of human action from an external standpoint, these behaviors can never confer moral qualities on these entities, due to the absence of these internal mental states.

The functionalist view of moral agency has been most clearly advocated by Floridi and Sanders, who reject criteria like consciousness and embrace a 'mind-less morality' (2004, p. 351). Their starting point is the observation that which beings or entities may be moral agents depends on the level of abstraction that we choose when inferring general criteria on the basis of supposedly paradigmatic instances of human moral agency. The level of abstraction applied by the standard view is very low, keeping the criteria close to the case of an adult human being, but by raising the level, less anthropocentric perspectives on moral agency become accessible, while consistency and relevant similarity with regard to the underlying structural features of human moral agents are maintained. Floridi and Sanders (2004) suggest a level of abstraction for moral agency where the criteria for the moral agency of an entity, E, are the following:

1. Interactivity: E interacts with its environment.
2. Independence:5 E has an ability to change itself and its interactions independently of immediate external influence.
3. Adaptability: E may change the way in which 2 is actualized based on the
5 Floridi and Sanders (2004) use the term "autonomy". However, Noorman and Johnson (2014) have pointed out that this use of the term does not correspond very well to standard usage in philosophy (see, e.g., Christman 2015): "Machine autonomy remains an elusive and ambiguous concept even in computer science and robotics…" (Noorman & Johnson 2014, p. 55), and that what is meant is usually a very weak condition used in articles about combat robots (Adams 2001), where autonomy in 'autonomous weapon' or 'autonomous system' simply means independence of direct human control. Therefore, we have exchanged 'autonomy' for a term that more clearly captures the core meaning of the condition and thereby decreases the risk of misunderstanding.
outcome of 1.

While condition 1 corresponds roughly to its counterpart in the standard view, condition 2 departs significantly from the one requiring the presence of internal mental states in the standard view. Condition 3 is also different, both weaker, by not requiring that the interactions of E are immediately caused by events falling under 2, and stronger, by adding a condition of responsiveness that links 2 and 1 together. Floridi and Sanders (2004) argue that with these criteria of moral agency, the notion of AMA becomes quite realistic.

In addition to the standardist-functionalist debate, there are two lines of argument that purport to undermine the import of that very discussion for the AMA issue. One of these is an epistemological turn with pragmatic implications, represented by Johansson's proposal that moral agency may require subjective mental states in the spirit of the standard view, but that these should best be understood in functionalist terms (Johansson 2010). A moral agent needs to have capacities for discerning morally relevant information (like motives, consequences, etc.), making moral judgments based on that information, and initiating action based on such judgments. Johansson admits that these conditions require consciousness, but proposes an "as-if" approach to the ascription of such states: whoever exhibits functions usually taken to signify the presence of the relevant capacities should be viewed as if these capacities are in fact in place. While moral agency may not be strictly (conceptually or ontologically) determined by these functions, they should in practice (epistemologically) be considered to be indicated by their presence. Faced with a machine that behaves like a moral agent, we should then in practice conclude that it is an AMA.
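Before turning to the second line of argument, Floridi and Sanders' three conditions (interactivity, independence and adaptability) can be made slightly more concrete. The following toy sketch is entirely our own and purely illustrative; Floridi and Sanders offer no such implementation, and all names in it are invented:

```python
# A minimal, hypothetical sketch of an entity satisfying the three conditions:
# it interacts with an environment, changes its own state without immediate
# external input, and adapts its behaviour in light of earlier interactions.
import random


class ToyAgent:
    def __init__(self):
        self.threshold = 0.5   # internal state governing behaviour
        self.history = []      # record of past interactions

    def interact(self, stimulus: float) -> str:
        """Interactivity: respond to input from the environment."""
        action = "act" if stimulus > self.threshold else "refrain"
        self.history.append((stimulus, action))
        return action

    def drift(self):
        """Independence: change internal state without immediate external input."""
        self.threshold += random.uniform(-0.05, 0.05)

    def adapt(self, feedback: float):
        """Adaptability: revise how future interactions are handled,
        based on the outcome of earlier ones."""
        if self.history:
            self.threshold -= 0.1 * feedback  # e.g. act more readily after positive feedback


agent = ToyAgent()
print(agent.interact(0.7))  # 'act'
agent.drift()               # state changes without any external prompt
agent.adapt(feedback=1.0)   # behaviour changes in light of interaction outcomes
```

Nothing in such a sketch settles whether the entity is a moral agent; it merely illustrates how undemanding the criteria become at this level of abstraction, which is precisely what makes the notion of AMA seem realistic to its proponents.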
The other argument that purports to undermine the relevance of the standardist-functionalist conflict we will call the independency argument. It is expressed when Johnson argues that the feature of having been designed by a moral agent undermines an entity's moral agency, even if it is plausibly ascribed features that would otherwise ground moral agency:

... [n]o matter how independently, automatically, and interactively computer systems of the future behave, they will be the products (direct or indirect) of human behavior, human social institutions, and human decision. (Johnson 2006, p. 197)

... [c]omputer systems and other artifacts have intentionality, the intentionality put into them by the intentional acts of their designers. The intentionality of artifacts is related to their functionality. (Johnson 2006, p. 201)
Grodzinsky and colleagues similarly suggest that no machine can attain the degree of independency required for moral agency on either the standard or the functionalist view. Whatever intentional states or functionalist counterparts may be ascribed to a machine, these cannot be sufficiently detached from those of their human creators:

If [...] the behavior of the agent can be ascribed explicitly to its designer, then the agent is not a moral agent. Thus, […] the agent essentially becomes an extension of its designer. […] ... it seems clear that the agent has the potential to move quite far from the original design, and from the control of the designer. In this case, at some time after such a software program is designed, implemented, and deployed there may not be a level of abstraction at which the original intentions of the original designer and implementer are any longer discernible. This situation appears to satisfy Floridi and Sanders' conditions for moral agency. However, as we argued
above, this situation does not relieve the designer of responsibility, in that the extent to which the agent is constrained by its designer (Grodzinsky et al. 2008, p. 121).
Like Johansson's epistemological pragmatic turn, this argument seems equally powerful regardless of whether standardism or functionalism is assumed, since the core of the independency argument concerns the attributability of features thought to ground moral agency to a particular entity. At the same time, it is uncertain to what extent Johansson's epistemic pragmatic turn would also apply to the independency argument, or whether the independency argument should be wielded against the room for AMA seemingly created by that turn. Should we hold that, as long as the attributability of relevant features seems as present as in paradigmatic human cases, we should pragmatically assume the presence of moral agency also in a machine? Or should we rather question the idea that an epistemically pragmatic view of moral agency ascription creates grounds for attributing such moral agency to a machine?

In the following sections 3-6, we will also address a further assumption that is not as salient in the statement of AMA positions, but that figures frequently in arguments related to these positions, namely the idea that moral agency relates in a specific way to moral responsibility. The independency argument will be particularly considered in that context, and to some extent in the context of discussing the role of autonomy (sections 5 and 6). Throughout, we will frequently return to the question of what should ground ascription of agency and related features, and how this relates to the AMA issue, thereby raising the methodological challenge actualized by Johansson's epistemological pragmatic
turn.

3. Consciousness and Moral Agency

Most of the discussion in the AMA debate proceeds from the assumption that humans (and many animals) possess phenomenal consciousness or subjective mental states, i.e. that 'there is something it is like to be' them (Nagel 1974, p. 436), while artificial entities lack such states. The major disagreement under this assumption is about the significance of phenomenal consciousness for moral agency. The following is an account of the main views of this significance. Proponents of the standard view typically hold that it is a necessary feature, while functionalists dismiss its importance, and these stances determine their respective positions regarding AMA. However, there are standardists who reject the assumption that machines cannot be conscious (Himma 2008): if sufficiently sophisticated and similar to paradigmatic human beings, they would qualify as conscious entities. Johansson's epistemological pragmatic turn creates a variant of this latter stance: granting the significance of subjective mental states for moral agency, she argues that we should ascribe such states to machines to the extent that they exhibit features and behaviors that would have us conclude that a human or an animal is conscious (Johansson 2010). The idea of phenomenal consciousness as essential for moral agency has been defended by many (Champagne and Tonkens 2013; Coeckelbergh 2010; Friedman and Kahn 1992; Johansson 2010; Johnson and Miller 2008; Johnson and Powers 2005; Sparrow 2007). These authors have presented two main arguments in its favor:
1. One needs phenomenal consciousness to engage in the sort of decision-making and appraisal that moral agents must be capable of.
2. Phenomenal consciousness is necessary for practices of moral praise and blame to be meaningful.

According to the first argument, things like empathy, moral emotions and a conscious grasp of moral values are needed for an agent to be able to assess a situation, make an ethical judgment, act on it and appraise the action in the way expected of a moral agent (Friedman and Kahn 1992; Irrgang 2006; Lokhorst and van den Hoven 2012; Picard 1997; Sparrow 2007; Torrance 2007). Certain subjective mental states are thus proposed to be important for moral agency because they are thought to be part of the rational decision processes required for moral agency. This argument links tightly to the assumption of moral agency as involving rationality, and will be further pursued below in section 4. Those who employ this idea to deny AMA would also have to deny Anderson and Anderson's (2007) claim that rational machine operations incorporating ethical elements, such as programmed rules of conduct, could constitute the sort of rationality required for moral agency, or other ideas implying that the kind of rationality required by moral agency can be present merely in the form of behavioral dispositions. This question will be further discussed in section 4. There we will see that the case for argument 1 is strengthened if, in addition to the capacity for moral reasoning, moral agency is said to require a certain kind of moral knowledge or competence (see sections 4 and 5).
The second kind of argument links less to moral agency as such than to moral responsibility. It stresses the significance of a potential for moral appraisal and governance, based on the idea that emotional responses to blame and praise, such as shame, remorse and pride, are necessary for practices of holding alleged wrongdoers responsible to be meaningful (Lokhorst and van den Hoven 2012; Sparrow 2007; Torrance 2007). Beavers (2011, p. 9) extends this argument to claim that if we accept the idea of moral agents without subjective mental states, we abandon the core idea of human morality and enter a landscape of "ethical nihilism". However, the idea that moral agency presupposes moral responsibility is far from generally accepted (see section 6). For instance, some argue that while consciousness may be necessary for moral responsibility, it is not necessary for moral agency (Champagne and Tonkens 2013; Himma 2008). Moreover, even if we accept an idea of moral agency requiring responsibility, the meaningfulness of responsibility practices (such as praising and blaming) may not require phenomenal consciousness of moral agents. These ideas and further variations will be addressed in section 6.

Those who deny that phenomenal consciousness is needed for moral agency also argue along two lines. Both arguments, listed below, link to challenges of upholding the assumption that human beings are typically moral agents:

1. Requiring phenomenal consciousness undermines the practice of ascribing moral agency to human beings by activating the problem of other minds.
2. Requiring phenomenal consciousness undermines the assumption of human beings as paradigmatic moral agents, as phenomenal consciousness is metaphysically contested.

The first argument is of an epistemological nature, linking to an assumption that a good theory of moral agency should preserve the current pragmatics of people ascribing moral agency to one another. The second argument is more conceptual or ontological, but also seems to rest on an assumption of preserving a basic view: the idea that human beings may be moral agents. Both arguments point to how we identify each other as having subjective mental states and as being moral agents by referring to various observable functions. The functionalist response is then that this merely goes to show that those functions are what moral agency involves, and that the idea of moral agency as requiring subjective mental states should therefore be abandoned for a functionalist concept of moral agency (Anderson 2007; Coeckelbergh 2010; Floridi and Sanders 2004; Gerdes and Øhrstrøm 2015; Versenyi 1974; Veruggio and Operto 2008). As mentioned, Johansson's epistemic turn uses very similar reasons to propose a route different from conceptual reform: the facts about our epistemic practices of identifying mental states and ascribing moral agency to people may be seen to ground a pragmatic conclusion that entities meeting the functional criteria used to these effects should always be viewed as conscious beings or moral agents, respectively, while our traditional concepts of consciousness and agency remain unchanged. What may be said for the case for functionalist conceptual reform regarding moral agency in that light?
One reason to affirm it might be an appeal to intellectual economy: if we do not need a concept of moral agency that requires phenomenal consciousness, we should do without it. This seems to be Floridi and Sanders' (2004) position. However, that reason is undermined by Johansson's argument for the epistemic turn, as this implies that we can retain an intuitively embraced concept and our ascription practices through an "as if" solution of the sort that she proposes. At the same time, the proposal to reform the concept of moral agency has also been supported by the idea that we need this reform to take full advantage of the alleged fact that machines can be designed to reason, decide and act morally better than the typical human moral agent. This is not only because a computer would be far better at processing complex information and assessing probabilities, but also because its lack of consciousness implies a lack of emotions, something that would promote impartiality and rationality (Anderson and Anderson 2007; Anderson 2007; Anderson et al. 2004), e.g., in military applications (R. Arkin 2007; R. C. Arkin 2010; Lin, Bekey and Abney 2008; Sullins 2010; Swiatek 2012) or decision support in ethically sensitive areas (Anderson 2007). This reason, too, rests on pragmatic considerations, being less about doubting metaphysical background assumptions in the spirit of Floridi and Sanders, and more about accommodating practices judged to be desirable. Therefore, once again, it appears unclear why an epistemic turn would not suffice also in the case of making room for the use of machines to improve human moral decision-making, granting such machines authority to advise and decide what to do in important and morally sensitive contexts, such as war or health care. Accepting the underlying premise that embracing such uses of machines would require granting
them moral agency, it still seems that the "as if" approach of the epistemic turn would suffice. Such a move could be further supported by arguments of an outright ethical nature: assuming that it is important to employ a way of ascribing consciousness and moral agency that secures desirable results, we should apply a precautionary approach, where we rather err on the side of including entities that are in fact not moral agents among those we treat as moral agents, than on the side of wrongfully excluding actual moral agents. But again, such a reason would only seem to support an "as if" solution of Johansson's type, not the functionalist conceptual reform. To arrive at Floridi and Sanders' position, we thus seem to need a reason for viewing epistemic and pragmatic arguments of the sort described earlier as conceptually and/or ontologically decisive. Such a reason might be developed out of the notion that achieving the pragmatically or ethically desired result will require more than just an "as if" stance of Johansson's sort: we need a genuine change in our perception of the world that allows for embracing machines as true fellows in the moral domain. For instance, based on a virtue ethical stance, Tonkens has argued that "we need to be open to the idea of multiple realizability of pain or consciousness" (2012, p. 142), advocating the idea that machines that are functionally equivalent to paradigmatic conscious beings should not only be viewed as if they were conscious, but as being conscious. Such an argument would then need to present support for the idea that a similar genuine change of moral stance is necessary to reap the benefits held out in the pragmatic arguments above. We return to this aspect in the final discussion.
4. Rationality and Moral Competence

We saw that one argument in favor of the standardist position in the AMA debate leads on to an idea of rationality as a crucial aspect of moral agency, but this idea has also been stated explicitly in its own right (Davis 2012; Coeckelbergh 2009; Himma 2008), as well as assumed implicitly (Hellström 2012). Exactly which constituents of rationality are alleged to be needed for moral agency varies, however. Johnson argues that, for an agent to be able to act and not just behave, and thus be a candidate for moral agency, it needs to meet the following conditions:

First, there is an agent with an internal state. The internal state consists of desires, beliefs, and other intentional states […] Together, the intentional states (e.g., a belief that a certain act is possible, a desire to act, plus an intending to act) constitute a reason for acting. Second, there is an outward, embodied event […] Third, the internal state is the cause of the outward event; that is, the movement of the body is rationally directed at some state of the world. Fourth, the outward behavior (the result of rational direction) has an outward effect (Johnson 2006, p. 198).

This is an outline of a standard conception of practical rationality: the set of capacities one needs to possess to be able to know what to do and to adopt suitable means for attaining one's ends (Kolodny 2016; Wallace 2014). Johnson (2006) specifies this in terms of the capacities to form preferences and goals, accompanied by the capacities to hold beliefs and to collect facts, a decision-making mechanism that enables one to weigh these in an appropriate way, and, lastly, an executive capacity that makes one able to act in accordance with the
decision (Kolodny 2016; Wallace 2014). All of these capacities are dispositional, and would therefore not seem to require phenomenal consciousness, merely an adequate stimulus-response pattern. AI applications like stock trading systems (Bahrammirzaee 2010), AlphaGo (Wang 2016) and clinical decision systems (Musen et al. 2014) therefore seem capable of meeting such a rationality requirement within the specific domains of activity for which they are designed and trained.6 Floridi and Sanders' (2004) claim that any moral agent needs to, among other things, be able to act upon its environment (collecting information via 'perceptors'), change state without direct response to interaction, and have goals (or goal-directed behavior) therefore seems equivalent to Johnson's rationality condition. Neither Johnson nor Floridi and Sanders distinguish between rational agents and moral agents. In the general discussion of moral agency, however, there are ideas requiring more than standard rationality, in terms of 'moral knowledge', 'moral competence' (Sliwa 2015) or 'moral sensibilities' (Macnamara 2015). In the AMA debate, a few authors include such ideas. For instance, Himma writes:

... for all X, X is a moral agent if and only if X is (1) an agent having the capacities for (2) making free choices, (3) deliberating about what one ought to do, and (4) understanding and applying moral rules correctly in paradigm cases. (2008, p. 6)

Similarly, both Torrance (2007) and Asaro (2006) mention the ability to have 'empathic rationality' or to be able to 'reason ethically' as necessary for moral
6 Of course, even if such machines can meet a rationality requirement, there may be other
arguments against them being AMAs, for instance, the independency argument.
agency, besides being practically rational. While such requirements raise the bar for any entity to be a moral agent, whether or not they would undermine the possibility of AMA again depends on the extent to which moral competence can be explained in functionalist terms. A radical proposal in this vein has been made by Anderson and Anderson:

All that is required is that the machine act in a way that conforms with what would be considered to be the morally correct action in that situation and be able to justify its action by citing an acceptable ethical principle that it is following (Anderson & Anderson 2007, p. 19).
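On our reading, a minimal sketch of what such a proposal might amount to in practice could look as follows. The code is entirely our own and purely illustrative; the rules and the scenario are invented, and nothing here is claimed to capture any actually implemented system:

```python
# Hypothetical rule base: (condition on the situation, prescribed action,
# ethical principle cited as justification).
RULES = [
    (lambda s: s.get("patient_refuses_treatment", False), "respect_refusal",
     "Respect for patient autonomy outweighs beneficence when the patient is competent."),
    (lambda s: s.get("harm_imminent", False), "intervene",
     "Prevent serious, imminent harm (non-maleficence)."),
]


def decide(situation):
    """Return an action together with the ethical principle it follows."""
    for condition, action, principle in RULES:
        if condition(situation):
            return action, principle
    return "defer_to_human", "No applicable principle; escalate to a human decision-maker."


action, justification = decide({"patient_refuses_treatment": True})
print(action)         # respect_refusal
print(justification)  # the 'acceptable ethical principle' cited on request
```

Whether justification by rule citation of this kind amounts to moral competence is, of course, exactly what is at issue in the main text.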
Such moral competence is not about the process of decision-making, or the content of the cognition underlying it or discharged by it, but about the ability to "get it right" according to some moral standard, and to communicate appropriate reasons for why the exhibited behavior was chosen. In this sense, an autonomous car may very well be morally competent, as long as it is as good as a human being at behaving correctly in traffic, and equipped with technology enabling it to cite reasons for its decisions if prompted. But even if moral competence is assumed to include more substantial decisional and cognitive abilities, the notion of AMA need not be undermined, as long as these are defined in dispositional terms. An AMA may, for instance, be developed in line with what has been called an "anthropomorphic" A.I. strategy,7 creating a machine with
7 See Strannegård et al. (2013).
the dispositions needed for practical rationality, equipped with programming that makes it interact with and learn from human moral judgment, reasoning and decision-making, much as a child does through upbringing and social interaction. This possibility would seem to remain open even if we accept McDermott's (2008) claim that the sort of reasoning required of a moral agent is very difficult. One path to resisting such a conclusion has been suggested by Dreyfus (1992), who claims that human reasoning depends far more on subconscious instinct and intuition than on conscious or structured thinking. Because of this, he argues, artificial agents can never perform the type of complex problem-solving that human beings are able to engage in. It is not obvious to us, however, that this stance supports the denial of AMA: anthropomorphic A.I. may just as well be envisioned to have machines emulate such instinctive and intuitive elements of human agency. Another way of trying to resist the idea that machines may be rational and morally competent beings is to insist on an element of phenomenal consciousness, leading back into the complications touched on in section 3.

5. Free Will and Autonomy

Several authors in the AMA debate believe that free will is necessary for being a moral agent (Himma 2008; Hellström 2012; Friedman and Kahn 1992). Others
make the same point, using the notion of autonomy to express it (Lin et al. 2008; Schulzke 2013). In the AMA debate, some claim that artificial entities can never have free will (Bringsjord 1992; Shen 2011; Bringsjord 2007), while others, like James Moor (2009; 2006), are open to the possibility that future machines might acquire free will. Nadeau (2006) even claims that only machines can be "truly" free. Others (Powers 2006; Tonkens 2009) have proposed that the appropriateness of a free will condition for moral agency may vary depending on what type of normative ethical theory is assumed. Despite linking to the classic philosophical issue of free will, this portion of the AMA debate is quite detached from central issues and positions in the general free will literature, such as debates regarding compatibilism and incompatibilism (O'Connor 2016). The free will theme of the AMA debate assumes the existence of free will among humans,8 addressing the issue of whether artificial entities may satisfy a source control condition (McKenna 2015). That is, what is discussed is whether or not such entities can be the origins of their own actions in a way that allows them to control what they do in the sense that we assume of human moral agents. An exception to this framing of the free will topic in the AMA debate occurs when Johnson writes that '... the non-deterministic character of human behavior makes it somewhat mysterious, but it is only because of this mysterious, non-deterministic aspect of moral agency that morality and accountability are coherent' (Johnson 2006, p. 200). This is a line of reasoning that seems to assume an incompatibilist and libertarian sense of free will, assuming both that it is
8 If determinism is assumed, compatibilism is thereby assumed; if indeterminism is assumed, free will libertarianism is assumed.
needed for moral agency and that humans actually possess it. This, of course, makes the notion of human moral agents vulnerable to standard objections in the general free will debate (Floridi and Sanders 2004). At the same time, accepting Johnson's idea about the presence of a 'mysterious aspect' of human moral agents can allow for AMA, in the same way as Dreyfus' reference to the subconscious does: artificial entities can simply be built to incorporate this aspect. The theme of sourcehood in the AMA debate often links to ideas about moral responsibility: the argument that machines are created for a purpose and therefore are nothing more than advanced tools (Powers 2006; Bryson 2010; Gladden 2016) or prosthetics (Johnson and Miller 2008) is taken to imply that machines can never be the type of source of their actions required for responsibility. The conclusion that the idea of an AMA is a misnomer follows once we add the assumption that moral agency requires a capacity for moral responsibility. We will return to that idea in section 6. It is in this context that the independency argument against AMA becomes relevant, as it questions whether the independency required by functionalists for moral agency can be in place in a machine, since the machine's repertoire of behaviors and responses is the result of elaborate design. Floridi and Sanders question this proposal by referring to the complexity of 'human programming', such as genes and arranged environmental factors (e.g. education). In a similar way, designed software need not be construed out of unique, clear-cut intentions, linked to precise and predictable outcomes in terms of machine action:

... software is largely constructed by teams; management decisions may be at least as important as programming decisions […] much software relies on 'off the shelf'
components whose provenance and validity may be uncertain; moreover, working software is the result of maintenance over its lifetime and so not just of its originators […] Such complications may point to an organisation (perhaps itself an agent) being held accountable. But sometimes: automated tools are employed in construction of much software; the efficacy of software may depend on extra-functional features like its interface and even on system traffic; software running on a system can interact in unforeseeable ways; software may now be downloaded at the click of an icon in such a way that the user has no access to the code and its provenance with the resulting execution of anonymous software; software may be probabilistic […] adaptive […] or may be itself the result of a program (in the simplest case a compiler, but also genetic code […] All these matters pose insurmountable difficulties for the traditional and now rather outdated view that a human can be found accountable for certain kinds of software and even hardware (Floridi and Sanders 2004, pp. 371-372).
This applies also to the vision of artificially intelligent machines with a programmed set of normative rules (such as Asimov's famous three laws of robotics (1942)), as long as these machines are equipped with the additional ability to modify the rules based on experience and reasoning (Nagenborg 2007). Just like humans, machines designed in this way may be viewed as only "weakly programmed" (Matheson 2012), leaving room for the kind of independency required for moral agency. If this possibility is denied, we seem to face an apparent reductio of the claim that artificial entities cannot meet the independency or source control condition of moral agency due to having been designed and programmed:
In some sense, parents contribute the DNA to their child, from which it gains its genetic inheritance and a blueprint for its development. This will be its "software and hardware." […] ... the lessons of the child's upbringing could also be represented as instructions, given to the child from "external" sources. What the child experiences will play some role in its agency, and along with its DNA will inform the reasons it will have (at any moment) for acting (Powers 2013, p. 235).
That is, a source control condition that excludes machines on the basis of design will fail to distinguish between human and artificial entities with regard to moral agency, thus undermining the notion of human beings as paradigmatic moral agents (Johansson 2010; Powers 2013; Sullins 2006; Versenyi 1974). If, instead, we apply a sourcehood requirement in the sense of an independency that can include humans as moral agents, it will be difficult to exclude the possibility of AMA. It is still unclear, though, what such moral agency of machines would involve. Are they capable of moral wrongdoing just as most people are? Do moral norms apply to them just as they apply to most people? Perhaps not, but they may still be viewed as moral agents to a greater extent than mere tools. Sullins has suggested that robots and other machines, rather than being likened to tools or prosthetic limbs, should be compared to guide dogs and other domestic animals bred, trained and used for specific tasks. Even though the dog acts partly out of design and programming, its actions may still be independent enough for it to be ascribed some moral praiseworthiness for what it does (Sullins 2006). This analysis, again, illustrates how arguments regarding AMA in terms of free will, autonomy and independency (in the sense of sourcehood) slide over into the issue of moral responsibility (in terms of the attributability of moral aspects of behaviors to agents).
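Before turning to responsibility, the idea of a "weakly programmed" machine discussed above can be made slightly more concrete with a sketch of our own (purely illustrative and hypothetical): a machine that starts from designer-given normative rules but re-weights them in light of experience, so that its later behavior is no longer fully fixed by the original design.

```python
# A hypothetical "weakly programmed" agent: designer-supplied rule weights
# that drift with experience, so later choices diverge from the initial design.
class WeaklyProgrammedAgent:
    def __init__(self):
        # Designer-supplied starting point (invented rule names and weights).
        self.rule_weights = {"do_not_harm": 1.0, "obey_instructions": 0.8}

    def choose(self, options):
        """Pick the option whose rule profile scores highest under the current weights."""
        def score(profile):
            return sum(self.rule_weights.get(rule, 0.0) * value for rule, value in profile.items())
        return max(options, key=lambda name: score(options[name]))

    def learn(self, rule, feedback):
        """Experience shifts the weights, moving the agent away from its initial design."""
        self.rule_weights[rule] = max(0.0, self.rule_weights[rule] + 0.1 * feedback)


agent = WeaklyProgrammedAgent()
print(agent.choose({
    "warn_only": {"do_not_harm": 1.0, "obey_instructions": 0.2},
    "use_force": {"do_not_harm": 0.1, "obey_instructions": 1.0},
}))  # 'warn_only' under the initial, designer-given weights
agent.learn("obey_instructions", feedback=5.0)  # repeated feedback gradually re-weights the rules
```

Whether such drift from the original design suffices for the independency required for moral agency is, again, the contested question.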
6. Moral Agency and Moral Responsibility

The concepts of moral agency and moral responsibility are usually taken to be strongly related. Moral agency makes someone eligible for being worthy of praise or blame for any morally significant action (Eshleman 2014), and thus for being the addressee of normative ethical judgment and related responsibility ascriptions. To assign moral agency is to assign such basic attributability of moral aspects of actions to those who perform them. However, in the AMA discussion, it is less clear-cut how agency and responsibility are assumed to be related. For example, in the foregoing sections we have seen several arguments seemingly assuming an idea of moral responsibility as necessary for moral agency. We will therefore devote one subsection to the relationship itself, as it appears among AMA discussants, before addressing the issue of attributability as a condition of moral agency and its implications for AMA.

6.1. How do the concepts relate?

The idea of moral agency as a necessary condition for moral responsibility seems to be shared by some contributors to the AMA debate (Parthemore and Whitby 2013), but as we saw in the previous section, many opponents of AMA argue as if the opposite relationship were at hand. This may link to the idea within standardism that the requirements of phenomenal consciousness and "free will" for moral agency flow from an idea that these features are required for moral responsibility (see sections 3 and 5). However, also within standardism, it may be
asked why there cannot still be room for moral responsibility when no phenomenal consciousness or "free will" is at hand. Moor distinguishes between four main classes of moral agents: ethical impact agents, implicit ethical agents, explicit ethical agents and full ethical agents (Moor 2006; 2009).9 The first are agents whose actions have ethically relevant consequences (such as harm), regardless of intent. Even quite simple artificial entities, like watches, can be ethical impact agents in this sense, but a newborn would in fact also be a moral agent in this sense. Implicit ethical agents, in contrast, are entities that are also designed with ethical considerations in mind. This includes many devices, like airplanes or nuclear power plants, which have in-built security systems to prevent undesirable events and ensure desirable function. Presumably, this would be the moral agency Sullins (see the end of section 5) is willing to ascribe to bred and trained animals, and maybe this would also apply to children who have received some rudimentary fostering. Explicit ethical agents, in turn, act out of (rather than merely in accordance with) ethical considerations. This may include all machines incorporating some kind of normative guidance system as a part of their programming, e.g., Asimov's laws (1942), but also more complex normative principles, or systems that contain elements of autonomous learning and self-modification. However, full moral responsibility requires phenomenal consciousness and "free will". While this theory may look like a gradualist echo of the standard view, it in fact opens the door for AMA, as moral agency need not be assumed to require full moral responsibility. To this may be added the observation
9 See Wallach and Allen (2008) for a similar account, where a distinction is made between 'operational morality', 'functional morality' and full moral agency.
that the attempt of standardist AMA sceptics to hold out moral responsibility as necessary for moral agency violates the typical assumption in the more general moral agency literature. The standard notion in the moral agency discourse is thus that someone may be a moral agent without being morally responsible. This room is exploited by some advocates of AMA, who argue that machines may be ascribed moral agency but not moral responsibility. Anderson and Anderson hold the capacity of a machine to identify and perform morally correct actions, and to justify such judgements on request, to be sufficient for being a moral agent. However, they insist, to be morally responsible, one needs to be conscious and have free will (Anderson and Anderson 2007). This move seems to allow for machine actions to be viewed as morally good or evil, while still not permitting the ascription of anything more than causal responsibility to artificial entities (unless they were to meet the conditions of consciousness and free will). While moral agency is present, attributability is not: the AMA may have performed an evil action, but this evil cannot be ascribed to the AMA, and even less can this AMA appropriately be held responsible for it. This notion comes with a cost from a philosophical point of view, as we normally think that what it means to be a moral agent is (minimally) that if a moral agent is the source of a morally significant action, then the moral qualities of that action apply also to the agent (meaning that the agent did something right or wrong). If not, we may have a case of agency resulting in morally significant events, but no moral agency. If, on the other hand, there is moral agency involved, the agent is tainted by whatever moral qualities have been realized, good or bad. Standard
ideas within desert theory would then imply that such a moral agent may be praise- or blameworthy, i.e., that it may be justified to hold this agent responsible. If, for example, a combat machine has done something that we would consider a moral evil, say killing civilians, typical desert-based reasons for holding it morally responsible for such a deed would appear as soon as we assume it to be the moral agent of the wrongdoing. Dislodging moral agency and attributability of wrongdoing would therefore make such well-established normative ethical reasoning conceptually impossible, thus begging substantial normative questions. Floridi and Sanders argue for an adjusted variant of this kind of solution that allows some moral attributability for AMAs, although without justifying blame:

Since AA [i.e. artificial agents] lack a psychological component, we do not blame AAs, for example, but, given the appropriate circumstances, we can rightly consider them sources of evils […] We can stop the regress of looking for the responsible individual when something evil happens, since we are now ready to acknowledge that sometimes the moral source of evil or good can be different from an individual or group of humans. (Floridi & Sanders 2004, p. 367)
This view has the advantage of preserving some traditional ideas about moral responsibility while recognizing that when an advanced artificial entity exhibits ethically troublesome behavior, this is something more than a mere misfortune. It is also less vulnerable to the principled objection of begging substantive normative questions, as it may hold that the dislodging of moral agency and attributability from practices of holding responsible is justified on the basis of underlying theories about what motivates such practices. However, like the view of Anderson and Anderson, it does create what in the literature has been termed a "responsibility gap" (Matthias 2004): AMAs may be full moral agents performing wrongful actions, even moral agents to whom the wrongfulness of their actions is properly attributed, but who are nevertheless not properly blamed or otherwise held responsible for that wrongdoing. As the wrongdoing is nobody else's, it would then seem that no one is blameworthy for these wrongdoings. Again, this seems to conceptually exclude the notion of moral agents who do wrong as necessarily blameworthy. Several authors have addressed the responsibility gap challenge. Singer (2013) believes that a solution is to distribute responsibility across all those involved moral agents who are also capable of (full) moral responsibility, like designers and users, but also investors and other contributors. This idea assumes that fully responsible human moral agents can properly be held responsible for the acts of other moral agents, even when not involved in the decisions of the latter. Suggestions like this have been made regarding autonomous robotic devices, e.g., autonomous war robots (Adams 2001; Champagne and Tonkens 2013). Champagne and Tonkens claim that this solution would depend on a human moral agent agreeing to take on this responsibility, an idea developed further in the notion of such voluntarily undertaken responsibility as continuously negotiable between the involved human parties (Lokhorst and van den Hoven 2012; Noorman 2014; Schulzke 2013). The implication here seems to be that if no agreement is in place, the responsibility gap prevails. All of these responses view the responsibility gap challenge as a practical affair. However, the problem we pointed to above goes deeper: the creation of the responsibility gap by dislodging moral agency and moral responsibility on a conceptual level amounts to deciding substantial
normative ethical issues through mere terminological stipulation. The suggested responses to the gap allow for a (broadly) consequentialist view of prospective moral responsibility that is dependent neither on the presence of free will, nor on the tight link between moral agency and responsibility assumed by desert theories. Accordingly, measures like censure, reprogramming or other modifications of AMAs may be undertaken to obtain desired changes in their behavior (Floridi and Sanders 2004). AMAs may even be equipped with (artificial) moral emotional capacities of remorse, guilt and pride responses to their own behavior as part of a machine learning scheme to facilitate continuous moral education and behavioral adaptation (Coeckelbergh 2010). However, none of this would seem to resolve the basic challenge of the responsibility gap: the exclusion on mere terminological grounds of normative desert theories that link moral agency not only to attributability, but also to blameworthiness. If such views are granted as possible, the idea of human moral agents voluntarily taking (full) moral responsibility for the actions of AMAs would seem to crumble. A person may, of course, declare herself responsible for an action done by another person who is a moral agent. However, from a desert standpoint, this can never undermine the (full) moral responsibility of the latter; at best, it may imply a shared blameworthiness. Tending to this challenge would therefore seem to necessitate a normative argument justifying a rejection of desert theories. While some AMA supporters may be attracted to such a solution, several seem to support the tight link between moral agency, attributability and blame, and are thus left open to the responsibility gap challenge (Champagne and Tonkens 2013; Eshleman 2014; Himma 2008).
Others instead argue for the opposite kind of separation between agency and responsibility, claiming that artificial entities can be morally responsible to some extent, but not moral agents. Stahl defends this idea on the basis of a functionalist assumption that is the mirror image of the one made by typical supporters of AMA: it is (retrospective) moral responsibility that can be understood in functionalist terms, while moral agency may retain its standardist sense. An entity is not morally responsible because she is a moral agent in the standard-view sense, but because the ascription of responsibility to this entity fulfills certain social goals (Stahl 2004, 2006). This is a line of reasoning that fits well both with a broadly understood Strawsonian conception of moral responsibility in terms of reactive attitudes (Strawson 1962), and with the idea of making sense of our practice of holding such attitudes as mainly a matter of pragmatics. The appropriateness of ascribing moral responsibility is explained not by how blameworthy moral agents are in terms of desert, but by how a system of blaming would promote valued social functions (cf. Dennett 1973). A recent model by Björnsson and Persson of how responsibility ascriptions can be thus understood describes how the attribution of responsibility can be guided by normative as well as descriptive expectations on agents in different social contexts (Björnsson and Persson 2012, 2013). Stahl's claim can then be justified in terms of such expectations being possible to extend to some machines and some types of machine actions in some social sectors, without thereby creating a ground for accepting the existence of AMA. This notion, of course, departs from the standard assumption that moral agency is required for moral responsibility, leaving room for what Stahl calls a "quasi responsibility" that is not desert-based. But at the same time it avoids the
Stahl's suggestion may seem to mean that all arguments regarding AMA in terms of the possibility of morally responsible artificial entities fail. However, if we accept weaker conceptions of (quasi) moral responsibility (that do not require moral agency), may we not just as well introduce weaker moral agency concepts? Coeckelbergh proposes the category of 'virtual moral agents' for entities that do not fulfill standardist conditions for moral agency, but where there may be good reason to hold them responsible for reasons of the type suggested by Stahl, responses referred to as 'virtual blame' and 'virtual praise' (Coeckelbergh 2009, 2010). Just like the pragmatic attempts at solving the responsibility gap challenge, this suggestion moves the AMA issue into a more normative landscape, where the question of what practices of 'virtual blaming' and 'virtual praising' may be ethically justified becomes central. That question immediately actualizes a number of complicated ethical problems. For instance, should we start to 'virtually' blame also human agents who are otherwise exempted from moral responsibility ascription (like infants and gravely intellectually disabled people)? If not, how should this be motivated without losing the reason for 'virtually' blaming machines? There is also a methodological problem related to this kind of introduction of parallel, weaker concepts of moral agency and responsibility. These require less in terms of the capacities of (quasi) morally responsible (virtual) agents. Does this mean that artificial entities might, more frequently, be quasi-responsible for morally relevant actions than humans are morally responsible (proper)?
Or should 'proper' moral agents, as indicated, have the ensuing 'full' moral responsibility added to the quasi-responsibility they would have had, had they been mere 'virtual' moral agents? Alternatively, should these weaker concepts be seen as wholly unrelated to any other discourse of moral agency and responsibility? If the concepts are that different from their 'proper' counterparts, is it even meaningful to speak of them as addressing the topic of AMA at all? In addition, all of these attempts to link moral agency to moral responsibility in a way that allows for AMA can be subjected to the independency argument. Johnson and Powers recognize that machines may be ascribed what they call role responsibility, implying that machines may be parts of wider networks of relationships around moral agents that make these agents morally responsible for particular acts (Johnson and Powers 2005). Such technological moral action contains the intentionality of other moral agents, namely those who have designed the machines to facilitate the actions in question, and thereby facilitates a moral agency that grounds responsibility attribution, but without granting this agency to the participating machines, although they somehow share in it. A similar account is defended by Verbeek (2011), who uses the term human-technological associations. This line of thought can be challenged. Dodig-Crnkovic and colleagues (2011; 2008) apply a basic idea very similar to that of Johnson and Powers and of Verbeek, but use it to make room for artificial entities to be morally responsible to some degree, in virtue of their role in the performance of morally important actions by (standard view) moral agents. Like many other advocates of AMA in terms of moral responsibility, they endorse a pragmatic understanding of the latter, as a 'role defined by externalist pragmatic norms of a group' (Dodig-Crnkovic and Persson 2008, p. 2).
Moral responsibility as a 'regulative mechanism' can then not only be affected by artificial entities, but can in fact be shared by them, although to a lesser degree.10 If the condition that moral responsibility presupposes moral agency is assumed, it would then follow that these entities are AMAs in the sense that, given the presence of certain external conditions, they share moral agency with human beings. This notion of shared moral agency and responsibility presents yet another way of viewing the relationship between moral agency and responsibility in the AMA debate. However, also in this case, it depends on the acceptance of weaker concepts of moral agency and responsibility, so the fundamental methodological challenge regarding whether attempts like these really solve the AMA issue remains.

6.2. Moral Agency, Attributability and Blame

One way of moving the issue forward is to suggest a normative argument that favors a pragmatic notion of retrospective moral responsibility ascription, while leaving conceptual room for more traditional desert-based notions of blameworthiness, where the latter requires moral agency. As we saw in the former subsection, the way ahead for such an argument is far from easily navigated, and runs the risk of failing to address the responsibility gap challenge, of defining away substantial normative positions, or of failing to address the AMA issue at all. However, distinguishing more clearly between the ascription of moral agency, the attribution of wrong- and right-doing, the ascription of blame- and praiseworthiness, and the justification of actual practices of blame and praise may help the analysis forward.
10 Also see Hanson (2009) for an account of joint responsibility.
We can see this by applying such a more careful distinction to the independency argument (which we take to be a normative ethical argument). The independency argument claims that whatever more or less simulated, virtual or artificial counterparts to moral intentions and reasons there are, these cannot be ascribed to the machine apparently possessing them, but belong to its maker and/or user. As we saw in section 5, this argument is best not understood as claiming that machines are too dependent on their makers to be ascribed the sort of control required for moral responsibility (as that would undermine the notion of human moral agency). In the former subsection, we saw that neither can it be employed to acknowledge that machines may share in agency that generates moral responsibility while denying that they can share this responsibility. However, this may leave room for a version of the independency argument that excludes moral responsibility in the sense of attributability only. In this version, the argument says that the makers' (and not the machine's) intention and rationality is what shapes machine action. Although a machine may exhibit agency in the sense of goal-directed behavior, and although that behavior may possess significant moral qualities justifying responses akin to holding it responsible, these qualities cannot be attributed to the machine, but only to its maker(s). As moral actions, the machine's actions will not be its own, but belong to the intentional source of why the machine thinks, feels and acts as it does. This sets machines apart from human moral agents: although the intentions and reasons of a human agent are dependent to some extent on the intentional actions of others (e.g., upbringing and education), the resulting intentions will typically not themselves be designed by other human agents.11
This argument is independent of whether or not we try to ascribe AMA using the standard or the functionalist view of moral agency. It is moreover not dependent on ideas that would fail to acknowledge that advanced artificial entities may share in morally significant actions, and be properly subjected to practices amounting to blame or praise (if that can be normatively justified). It is about the extent to which agents are the authors of their own moral stance from which they then make decisions and act. In a designed machine, whatever appears in terms of thoughts, lines of reasoning, values, intentions, emotions, strivings, dispositions and impulses will be the outcome of a set of logical rules created by someone else. This is not cancelled by the fact that the rules are complex, operate on probability distributions and/or incorporate self-learning elements. Therefore, although such machines may perform actions that are highly morally significant, and although we may properly subject them to blame and praise (as a normatively justified practice of reactive attitudes), the moral significance of their actions cannot be attributed to them, and thus (leaving room for desert-based blameworthiness) they cannot deserve blame. This variant of the independency argument is not undermined by the objection that it would make human moral agency (in terms of attributability) impossible.
11 Obvious exceptions or atypical instances would be cases of brainwashing, hard conditioning, or hypnosis, in which case many would indeed withdraw their view of the person as morally responsible.
For suppose that we encountered a human in a similar situation, a person who had only been taught one moral rule, but unfortunately a seriously flawed one; say, one advocating the obligatory betrayal of friends. This person has literally never encountered any alternative or opposing moral rule. Thus, when making a friend, this person does her best to betray that person, believing that she is thereby acting as one should as a good friend. Clearly, she is thereby performing a morally significant action, and clearly we would see ourselves as having good reasons to express reactive attitudes of a blaming kind towards her, if nothing else for the purpose of moral education. However, as she has every reason to believe that it is a moral duty to betray friends, it makes no sense to attribute the wrongness of the betrayal to her, despite her being its agent. Thus, although we have agency, a wrongful act resulting from it, and moral reasons for responding to the act with blame, we have no moral agent to which we can attribute the wrongness of the wrongful act. However, even if we accept this analysis, the conclusion would not be that such a person, or her artificial counterpart, could not be a full moral agent (to whom we may attribute the moral qualities of her actions). If the person evolves, and develops a more authentic moral stance, where she chooses particular normative standpoints over others based on reflection and awareness of alternative views – possibly a learning process due to the displayed reactive attitudes of others – we would certainly start to view her as a full moral agent. But, as Grodzinsky and colleagues note, this may apply also to a machine, provided that it is equipped with abilities for self-development and self-learning, rigged to react to similar stimuli as in the case of a human child that develops into a full moral agent:
... it seems clear that the agent has the potential to move quite far from the original design, and from the control of the designer. In this case, at some time after such a software program is designed, implemented, and deployed there may not be a level of abstraction at which the original intentions of the original designer and implementer are any longer discernible (Grodzinsky et al., 2008, p. 7).
The capacities of such a machine do not need to include phenomenal consciousness, but would need to include the dispositional capacities of rationality and moral competence. Already these capacities may allow for moral agency and responsibility ascription in the sense of an artificial entity being a recognized agent of morally significant actions, where it is normatively justified for others to hold this entity responsible for these actions (by displaying responsibility-relevant reactive attitudes), albeit not due to it being blameworthy. If we add a discharged capacity for moral self-development based on the reactive attitudes of others, we additionally seem to get a form of independency that it may make sense to require in order to ascribe "source control" to such a machine agent. If so, the moral qualities of the actions of this agent will be attributable to it, and it will be blame- and praiseworthy in a sense that triggers desert-based reasons for why it should be held responsible for them. If moral responsibility is viewed as a central ingredient in what it means to be a moral agent, we may thus envision the idea of an AMA in both these senses, of which the latter is slightly more demanding. As before, some discussants may insist on adding a condition of phenomenal consciousness in order to have true moral agency. As we have seen, such a condition may be applied as an independent requirement, or as a requirement that must be met in order to be ascribed rationality, moral competence, moral responsibility or a type of independency that may provide "source control". In each such case, we are then faced with two challenges.
First, there is the functionalist questioning of what motivates the consciousness requirement if the rationality, moral competence, moral responsibility and independency of human moral agents can all be made sense of without it. Second, there is Johansson's observation that we seem to lack grounds for ascribing phenomenal consciousness other than behavioral ones, thus putting machines back on a par with human moral agents, as long as their behavioral features are sufficiently similar. It then becomes a question for further research in this area to formulate reasons in favor of such a consciousness requirement for AMA.

7. Concluding discussion

The phenomenal consciousness requirement of the standard view of moral agency continues to run as a core thread through the AMA debate, but its role and impact have become increasingly unclear as this debate has evolved. The traditional arguments for this requirement point to features of human moral agents, which seem possible to make sense of without referring to any capacity for subjective mental states. These features, such as capacities for rational decision-making, moral competence, and the ability to form an independent moral stance that guides morally significant action, can all be explained in dispositional terms. If this is denied, the epistemic pragmatic turn of Johansson suggests that we are still forced to ascribe moral agency and/or consciousness on the basis of our identification of observable signs of such dispositions. In both cases, denial leads to a threat of undermining the core assumption of the AMA debate that human beings can be moral agents.
But if we assume that they can be, it will be in virtue of them having relevant dispositions, and whether or not a machine can qualify as an AMA will then depend on whether or not its observable features suggest similar dispositions to be in place. But what dispositions and features more exactly? On that subject, both advocates and critics of AMA pull apart significantly when the employed concepts of rational decision-making, moral competence, autonomy or free will, and moral responsibility are made more precise. In effect, it is often difficult to decide whether discussants who conceive of each other as opponents actually hold incompatible ideas, or whether they are simply applying different concepts and talking past each other. Moreover, the standardist-functionalist divide plays out very differently depending on what features are in focus, and how they are made more precise: rationality, moral competence, free will or responsibility. With regard to some features, such as moral responsibility and free will, advocates of AMA who side with a functionalist view of moral agency still disagree significantly on its relationship to moral responsibility and on whether or not an AMA should be ascribed free will or autonomy – some even invoking a phenomenal consciousness requirement for one or the other feature. Similarly, several suggest radical conceptual revisions implying that moral agency and moral responsibility may come in degrees or parts, and that the AMA debate should be accordingly reconceived. We have pointed to fundamental problems with such ideas, as they seem to easily dislodge the very concept of moral agency from some of its core assumptions, e.g., regarding how it links to moral responsibility. When alternative moral agency concepts effectively decide substantial normative debates on terminological grounds, or abandon widely shared core assumptions about the relationship between the concepts of moral agency and moral responsibility, we should conclude that they are misnomers.
At the same time, it may make sense to spool the AMA debate tape back to its starting point and ask what difference it makes whether or not machines can be moral agents. From such a starting point, the idea of normative arguments to the effect that we may have reason to include machines in moral responsibility-related practices that we normally link to a perception of moral agency can make sense, if machines have enough features sufficiently similar to those that make us include humans in such practices. These features may come in degrees as well as in parts. Thus, some of today's most 'autonomous' and 'intelligent' artificial entities (such as advanced combat or vehicle systems) may plausibly be ascribed a sort of agency that has moral significance, and that may give us good reasons to respond to them in various ways. If such machines are developed to become more independent in their decision-making, and capable of moral deliberation and self-development, it may even make good sense to respond to their actions in typical responsibility practice ways, e.g., expressing blame and praise to trigger their moral self-learning capacities. Most versions of the independency argument, which try to deny the possibility for even such an advanced machine to shape its own action in a morally relevant way, and thus be a moral agent, fail by also undermining the notion of human moral agency. We have acknowledged that one sort of independency condition may survive this challenge, but then it will allow for AMA. At the same time, a machine satisfying this independency requirement would have to be far more advanced than any presently existing artificial entity.
But if such a machine does evolve, it would possess features in terms of rationality, moral competence, and independency of its creator's original intentions that would be analogous to those of a person who has grown from a child into an adult, thus satisfying the sort of "source control" condition we use in our responsibility practices to attribute the moral qualities of a person's action to this very person. In that case, we have argued, it makes sense to view this machine as capable both of right- and wrongdoing, and of being praise- and blameworthy. A committed AMA skeptic may still attempt to resist the conclusion that such a machine is a moral agent by simply denying that the features and dispositions of the machine are analogous enough to those of a paradigmatic human moral agent. To get around the epistemic pragmatic challenge for all such proposals (that they threaten to undermine the validity of our practice of ascribing moral agency to humans, and our linked responsibility practices), the ambition of such skepticism may be reduced to a mere terminological point. But then the issue pressed also becomes normatively and pragmatically moot, no more than a declaration of a peculiar semantic preference. Like the AMA advocates' ideas about radically revised moral agency or responsibility concepts that machines can satisfy more easily, the outcome is a view that has no relationship to the original challenge. We are left with the contention that we may have good reason indeed to include machines in our moral community on the same conditions as humans, see them as co-agents and their actions as morally significant, attribute the moral qualities of these actions to them, and hold them responsible by blaming and praising them.
responsible". Alternatively, an AMA skeptic needs to develop a normative argument to the effect that what is denied such machines makes a fundamental difference for how we should relate to them in moral terms. All such arguments, in turn, will be vulnerable to the sort of objections we have seen coming from the epistemic pragmatic turn, as well as against most versions of the independency argument: whatever difference between (these especially sophisticated) machines and humans are pointed at to justify why the machines should not be included into our moral agency and responsibility practices, there is likely to be some class of human beings who will then be excluded. This is a well-known challenge from debates about the moral status of non-human entities, often called the demarcation problem (Warren 1997). At the same time, the AMA debate variant of this problem is different, as the challenge is not that humans who are denied (practice relevant) moral agency are less morally considerable. Perhaps denial of substantial moral agency to some humans that have been viewed as moral agents before (i.e., exclusion from moral responsibility practices where they used to be included) can be justified on sound normative ethical grounds. On the other hand, as observed in the introduction, some assume a strong link between moral agency and moral status (or patiency), so the relationship between the normative ethical and the moral responsibility practice demarcation problems may be more or less intimate, depending on background assumptions in normative ethics.12
12 See footnote 3.
For these reasons, we suggest that the AMA debate move more wholeheartedly into this explicitly normative ethical landscape. We need to systematically address the issue of who should and who should not be included in different moral responsibility practices, in a context where humans and artificial entities interact and share decision-making and action in different areas, and under different assumptions about the features and abilities of these machines. The practices we have highlighted in this paper include sharing decision-making and the execution of morally significant actions; identifying and communicating moral qualities of such actions and decisions (performed jointly by humans and machines, or by machines alone); attributing such qualities to machine agents (effectively designating them as right- or wrongdoers); having such attribution guide responses to machine decision-making and action, e.g. with regard to education or redesign; having such responses take the form of ascribing blame- or praiseworthiness to machines; as well as actually blaming or praising them. If some machines should be excluded from some such responsibility practices on some ground in some context, does this imply that some humans should be similarly excluded? If so, does that pose a problem? Conversely, if some humans should be excluded from some responsibility practices, does that imply that some machines should be similarly excluded, and would that pose a problem? All of this can be done without ever using the term "moral agent", thus avoiding much of the conceptual confusion that has riddled a lot of the AMA debate so far.
Bibliography
Adams, T. K. (2001). Future warfare and the decline of human decisionmaking.
Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251-261, doi:10.1080/09528130050111428. Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15. Anderson, M., Anderson, S. L., & Armen, C. Towards machine ethics. In Proceedings of AAAI, 2004 Arkin, R. (2007). Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture. Retrieved from Arkin, R. C. (2010). The case for ethical autonomy in unmanned systems. Journal of Military Ethics, 9(4), 332-341. Asaro, P. M. (2006). What Should We Want From a Robot Ethic? International Review of Information Ethics, 6(12), 9-16. Asimov, I. (1942). Runaround. Astounding Science Fiction, 29(1), 94-103. Bahrammirzaee, A. (2010). A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems. Neural Computing and Applications, 19(8), 1165-1195. Beavers, A. F. (2011). 21 Moral Machines and the Threat of Ethical Nihilism. Robot ethics: The ethical and social implications of robotics, 333. Björnsson, G., & Persson, K. (2012). The Explanatory Component of Moral Responsibility. Noûs, 46(2), 326-354. Björnsson, G., & Persson, K. (2013). A Unified Empirical Account of Responsibility Judgments. Philosophy and Phenomenological Research, 87(3), 611-639. Bringsjord, S. (1992). What Robots can and can't be: Kluwer Academic. Bringsjord, S. (2007). Ethical robots: the future can heed us. Ai & Society, 22(4), 539-550, doi:10.1007/s00146-007-0090-9.
Bryson, J. J. (2010). Robots should be slaves. Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues, 63-74.
Champagne, M., & Tonkens, R. (2013). Bridging the Responsibility Gap in Automated Warfare. Philosophy & Technology, 28(1), 125-137, doi:10.1007/s13347-013-0138-3.
Christman, J. (2015). Autonomy in Moral and Political Philosophy. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/spr2015/entries/autonomy-moral.
Coeckelbergh, M. (2009). Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. AI & Society, 24(2), 181-189, doi:10.1007/s00146-009-0208-3.
Coeckelbergh, M. (2010). Moral appearances: emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235-241, doi:10.1007/s10676-010-9221-y.
Davis, M. (2012). "Ain't no one here but us social forces": constructing the professional responsibility of engineers. Science and Engineering Ethics, 18(1), 13-34.
Dennett, D. C. (1973). Mechanism and responsibility. In T. Honderich (Ed.), Essays on Freedom of Action (pp. 157-184). Routledge and Kegan Paul.
Dodig-Crnkovic, G., & Çürüklü, B. (2011). Robots: ethical by design. Ethics and Information Technology, 14(1), 61-71, doi:10.1007/s10676-011-9278-2.
Dodig-Crnkovic, G., & Persson, D. (2008). Sharing moral responsibility with robots: A pragmatic approach.
Dreyfus, H. L. (1992). What computers still can't do: a critique of artificial reason.
Eshleman, A. (2014). Moral Responsibility. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2014 ed.). http://plato.stanford.edu/archives/sum2014/entries/moral-responsibility/.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379.
Friedman, B., & Kahn, P. H. (1992). Human agency and responsible computing: Implications for computer system design. Journal of Systems and Software, 17(1), 7-14. Gerdes, A., & Øhrstrøm, P. (2015). Issues in robot ethics seen through the lens of a moral Turing test. Journal of Information, Communication and Ethics in Society, 13(2), 98109, doi:10.1108/jices-09-2014-0038. Gladden, M. E. (2016). The diffuse intelligent other: An ontology of nonlocalizable robots as moral and legal actors. In M. Nørskov (Ed.), Social robots: Boundaries, potential, challenges (pp. 177–198). Burlington, VT: Ashgate. Grodzinsky, F. S., Miller, K. W., & Wolf, M. J. (2008). The ethics of designing artificial agents. Ethics and Information Technology, 10(2-3), 115-121, doi:10.1007/s10676008-9163-9. Gunkel, D. J. (2014). A vindication of the rights of machines. Philosophy & Technology, 27(1), 113-132. Hanson, F. A. (2009). Beyond the skin bag: on the moral responsibility of extended agencies. Ethics and information technology, 11(1), 91-99. Hellström, T. (2012). On the moral responsibility of military robots. Ethics and Information Technology, 15(2), 99-107, doi:10.1007/s10676-012-9301-2. Himma, K. E. (2008). Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19-29, doi:10.1007/s10676-008-9167-5. Irrgang, B. (2006). Ethical acts in robotics. Ubiquity, 7(34), doi:10.1145/1164069.1164071. Johansson, L. (2010). The Functional Morality of Robots. International Journal of Technoethics, 1(4), 65-73, doi:10.4018/jte.2010100105. Johnson, D. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195-204, doi:10.1007/s10676-006-9111-5. Johnson, D. G., & Miller, K. W. (2008). Un-making artificial moral agents. Ethics and Information Technology, 10(2-3), 123-133, doi:10.1007/s10676-008-9174-6.
Johnson, D. G., & Powers, T. M. (2005). Computer systems and responsibility: A normative look at technological complexity. Ethics and Information Technology, 7(2), 99-107. Kolodny, N. a. B., John (2016). Instrumental Rationality. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy. {http://plato.stanford.edu/archives/spr2016/entries/rationality-instrumental/}. Korsgaard, C. M. (2004). Fellow creatures: Kantian ethics and our duties to animals. Tanner Lectures on Human Values, 25, 77. Lin, P., Bekey, G., & Abney, K. (2008). Autonomous military robotics: Risk, ethics, and design. California Polytechnic State Univ San Luis Obispo: DTIC Document. Lokhorst, G.-J., & van den Hoven, J. (2012). Responsibility for military robots. In I. P. Lin, G. A. Bekey, & K. Abney (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 145–156). Cambridge: MIT Press. Macnamara, C. (2015). Blame, communication, and morally responsible agency. The Nature of Moral Responsibility: New Essays, 211. Matheson, B. (2012). Manipulation, moral responsibility, and machines. THE MACHINE QUESTION: AI, ETHICS AND MORAL RESPONSIBILITY, 11. Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175-183. McDermott, D. Why ethics is a high hurdle for AI. In, 2008: Citeseer McKenna, M. a. C., D. Justin (2015). Compatibilism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/sum2015/entries/compatibilism/. Moor, J. (2009). Four Kinds of Ethical Robots. Philosophy Now, 72, 12-14. Moor, J. M. (2006). The nature, importance, and difficulty of machine ethics. Intelligent Systems, IEEE, 21(4), 18-21. Musen, M. A., Middleton, B., & Greenes, R. A. (2014). Clinical decision-support systems. In Biomedical informatics (pp. 643-674): Springer.
Nadeau, J. E. (2006). Only androids can be ethical. In Thinking about android epistemology. MIT Press.
Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 435-450.
Nagenborg, M. (2007). Artificial moral agents: an intercultural perspective. International Review of Information Ethics, 7(09), 129-133.
Noorman, M. (2014). Responsibility practices and unmanned military technologies. Science and Engineering Ethics, 20(3), 809-826.
Noorman, M., & Johnson, D. G. (2014). Negotiating autonomy and responsibility in military robots. Ethics and Information Technology, 16(1), 51-62, doi:10.1007/s10676-013-9335-0.
O'Connor, T. (2016). Free Will. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/sum2016/entries/freewill. Metaphysics Research Lab, Stanford University.
Parthemore, J., & Whitby, B. (2013). What makes any agent a moral agent? Reflections on machine consciousness and moral agency. International Journal of Machine Consciousness, 5(02), 105-129.
Picard, R. W. (1997). Affective computing (Vol. 252). Cambridge: MIT Press.
Powers, T. M. (2006). Prospects for a Kantian machine. Intelligent Systems, IEEE, 21(4), 46-51.
Powers, T. M. (2013). On the Moral Agency of Computers. Topoi, 32(2), 227-236, doi:10.1007/s11245-012-9149-4.
Schulzke, M. (2013). Autonomous weapons and distributed responsibility. Philosophy & Technology, 26(2), 203-219.
Shen, S. (2011). The curious case of human-robot morality. In Proceedings of the 6th international conference on Human-robot interaction (pp. 249-250). ACM.
Singer, A. E. (2013). Corporate and Artificial Moral Agency. 4525-4531, doi:10.1109/hicss.2013.149.
Sliwa, P. (2015). Moral Worth and Moral Knowledge. Philosophy and Phenomenological Research.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62-77.
Stahl, B. C. (2004). Information, ethics, and computers: The problem of autonomous moral agents. Minds and Machines, 14(1), 67-83.
Stahl, B. C. (2006). Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology, 8(4), 205-213.
Strannegård, C., von Haugwitz, R., Wessberg, J., & Balkenius, C. (2013). A cognitive architecture based on dual process theory. In International Conference on Artificial General Intelligence (pp. 140-149). Springer.
Strawson, P. F. (1962). Freedom and Resentment. Proceedings of the British Academy, 48, 1-25.
Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23-30.
Sullins, J. P. (2010). RoboWarfare: can robots be more ethical than humans on the battlefield? Ethics and Information Technology, 12(3), 263-275, doi:10.1007/s10676-010-9241-7.
Swiatek, M. S. (2012). Intending to err: the ethical challenge of lethal, autonomous systems. Ethics and Information Technology, 14(4), 241-254, doi:10.1007/s10676-012-9302-1.
Tonkens, R. (2009). A challenge for machine ethics. Minds and Machines, 19(3), 421-438.
Tonkens, R. (2012). Out of character: on the creation of virtuous machines. Ethics and Information Technology, 14(2), 137-149.
Torrance, S. (2007). Ethics and consciousness in artificial agents. AI & Society, 22(4), 495-521, doi:10.1007/s00146-007-0091-8.
van Wynsberghe, A., & Robbins, S. (2018). Critiquing the Reasons for Making Artificial Moral Agents. Science and Engineering Ethics, 1-17.
Verbeek, P. P. (2011). Moralizing technology: Understanding and designing the morality of things. University of Chicago Press.
Versenyi, L. (1974). Can robots be moral? Ethics, 84(3), 248-259.
Veruggio, G., & Operto, F. (2008). Roboethics: Social and ethical implications of robotics. In Springer handbook of robotics (pp. 1499-1524). Springer.
Wallace, R. J. (2014). Practical Reason. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/sum2014/entries/practical-reason/.
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.
Wang, F.-Y. (2016). Let's Go: From AlphaGo to parallel intelligence. Science & Technology Review, 34(7), 72-74.
Warren, M. A. (1997). Moral status: Obligations to persons and other living things. Clarendon Press.
Yampolskiy, R. V. (2013a). Attempts to attribute moral agency to intelligent machines are misguided. In Proceedings of Annual Meeting of the International Association for Computing and Philosophy, University of Maryland at College Park, MD.
Yampolskiy, R. V. (2013b). Artificial intelligence safety engineering: Why machine ethics is a wrong approach. In Philosophy and theory of artificial intelligence (pp. 389-396). Springer, Berlin, Heidelberg.