ARTIFICIAL AGENTS AND THE CONTRACTING PROBLEM: A SOLUTION VIA AN AGENCY ANALYSIS

Samir Chopra and Laurence White†

I. INTRODUCTION

Artificial agents, and the contracts they make, are ubiquitous. Every time we interact with a shopping website, we interact with a more or less autonomous artificial agent that queries the operator‘s database, uses our input to populate it, and sets out the terms of the transaction. Other than establishing the rules that the artificial agent must follow during transactions with customers, the operator does not exercise direct control over the agent‘s choices in particular cases, at least until the operator has a chance to confirm or reject the transaction entered into.1 Websites that offer users agent-like functionality for their use in their activities online (such as comparing and recommending products, vendors, and services) are becoming increasingly common.2 Current re-

† Samir Chopra is Associate Professor, Department of Philosophy, Brooklyn College and the CUNY Graduate Center, and received his Ph.D. in Philosophy from the CUNY Graduate Center. Laurence White received both his LL.B. (with honors) and B.A. (with honors) from the University of Melbourne, Australia and would like to thank his partner Jane Brown for her constant support. Both authors extend their thanks to James Grimmelmann for very useful comments.

1. See, e.g., eBay, http://pages.ebay.com/help/buy/bidding-overview.html (last visited Sept. 22, 2009) (describing the functionality of an artificial agent that bids up to a certain amount on behalf of a user trying to capture the lowest price in their range).

2. See, e.g., Robert H. Guttman & Pattie Maes, Agent-Mediated Integrative Negotiation For Retail Electronic Commerce, in AGENT MEDIATED ELECTRONIC COMMERCE: FIRST INTERNATIONAL WORKSHOP ON AGENT MEDIATED ELECTRONIC TRADING AMET-98 MINNEAPOLIS, MN, USA, MAY 10TH, SELECTED PAPERS, 70, 70 (Pablo Noriega & Carles Sierra eds., 1999) (noting the increased usage of online services to provide consumers comparisons on the price of products); B.T. Krulwich, The BargainFinder Agent: Comparing Price Shopping on the Internet, in BOTS AND OTHER INTERNET BEASTIES, 258–263 (J. Williams ed., 1996); Lucas Drumond & Rosario Girardi, A Multi-Agent Recommender System, 16 ARTIFICIAL INTELLIGENCE L. 175, 175–207 (2008) (describing how certain internet sites are designed to function like an agent and provide users with ―recommendations of legal normative instruments‖); Maria Fasli, On Agent Technology for E-commerce: Trust, Security and Legal Issues, 22(1) KNOWLEDGE ENGINEERING REV. 3–35 (2007); Pattie Maes et al., Agents That Buy and Sell, 42 COMM. OF THE ACM 81, 81 (1999) (―The Internet and World-Wide Web represent an increasingly important channel for retail commerce as well as business-to-business transactions.‖); Greg Linden et al., Amazon.com Recommendations: Item-to-Item Collaborative Filtering, 7 IEEE INTERNET COMPUTING 76, 76–80 (2003) (stating that Amazon.com uses an artificial agent to develop consumer preferences and provide consumer information on products and prices); Robert B. Doorenbos et al., A Scalable Comparison-Shopping Agent for the World-Wide Web, Paper Presented at Proceedings of the First International Conference on Autonomous Agents (1997), available at http://www.cs.washington.edu/homes/etzioni/papers/agents97.pdf (describing how Internet sites are currently set up to provide consumers

search programs in artificial intelligence aim to enhance the sophistication of agent systems for use in complex negotiation situations, as simulated in the Trading Agent Competition.3 Common examples of more autonomous artificial agents are the automated trading systems employed by investment banks4 information on various products and price comparisons); Filippo Menczer et al., IntelliShopper: A Proactive, Personal, Private Shopping Assistant, Paper Presented at Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems 1 (AAMAS-02) (2002), available at http://portal.acm.org/citation.cfm?id=545059 (noting internet commerce sites provide consumers comparisons of products and prices). 3. See, e.g., Jeffrey K. MacKie-Mason & Michael P. Wellman, Automated Markets and Trading Agents, in 2 HANDBOOK OF COMPUTATIONAL ECONOMICS: AGENT-BASED COMPUTATIONAL ECONOMICS 1381, 1413–14 (Leigh Tesfatsion & Kenneth L. Judd eds., 2006) (noting how the original Trading Agent Competition involved various researchers, who competed and utilized artificial intelligence systems to develop an agency system that would maximize the value of a consumer‘s travel vacation); MICHAEL P. WELLMAN ET AL., AUTONOMOUS BIDDING AGENTS: STRATEGIES AND LESSONS FROM THE TRADING AGENT COMPETITION 3–9 (2007) (describing how the Trading Agent Competition developed to utilize autonomous agents, in which this software aimed at simulating the complex ―exchange of goods and services at dynamically negotiated prices‖); Michael P. Wellman et al., The 2001 Trading Agent Competition, 13 ELECTRONIC MARKETS, 4–12 (2003) (describing the results from a competitive game that utilized automated trading in online markets specifically for travel agents); Raghu Arunachalam & Norman Sadeh, The 2003 Supply Chain Management Trading Agent Competition, Paper Presented at Proceedings of the Trading Agent Design and Analysis Workshop (TADA) (2004), available at http://www.eecs.harvard.edu/tada04/raghua.pdf (suggesting the Trading Agent Competition include a problem requiring autonomous agents that simulate production and consumer bidding); Seong Jae Lee et al., RoxyBot-06: An 2 TAC Travel Agent, Paper Presented at Proceedings of the 20th International Joint Conference on Artificial Intelligence (2007), available at http://www.aaai.org/Papers/IJCAI/2007/ IJCAI07-222.pdf (describing the autonomous agent the research group entered in the 2006 Trading Agent competition). Cf. Clayton P. Gillette, Interpretation and Standardization in Electronic Sales Contracts, 53 SMU L. REV. 
1431, 1431–32 (2000) (discussing legal implications within contract law that concern the use of internet automated systems); Robert Glushko et al., An XML Framework for Agent-Based E-commerce, 42 Communications of the ACM 106, 106–07 (1999) (suggesting the internet should promote automated system to devise a framework for agent-to-agent contracting); Lin Liao et al., Learning and Inferring Transportation Routines, 171 ARTIFICIAL INTELLIGENCE 311, 311–312 (2007) (describing an artificial agent that uses a system that ―can learn and infer [from GPS data] a user‘s daily movements through an urban community‖ and can be used as the basis of an agent architecture that helps ―cognitively-impaired people use public transportation safely‖); Raz Lin et al., Negotiating with Bounded Rational Agents in Environments with Incomplete Information Using an Automated Agent, 172 ARTIFICIAL INTELLIGENCE, 823, 823 (2008) (describing an agent architecture capable of negotiating with and outperforming humans); Emily M. Weitzenböck, Good Faith and Fair Dealing in Contracts Formed and Performed By Electronic Agents, 12 ARTIFICIAL INTELLIGENCE AND LAW 83, 83–84 (2004) (outlining recommendations for agent design so that their functioning might reflect principles of good faith negotiation prevalent in civil and common law jurisdictions). 4. E.g., Shanny Basar, Goldman Sachs Boosts Trading System, FIN. NEWS ONLINE, Jan. 16, 2006, http://www.efinancialnews.com/usedition/index/content/534746 (describing Goldman Sachs‘ Sigma X, an online automated system, that ―bring[s] together displayed liquidity from public markets and non-displayed liquidity from crossing networks, electric market makers and other broker-dealers with the bank‘s internal order flow‖); Piper Jaffray.com, Electronic Trading: Electronic Algorithmic Trading Capabilities, http://www.piperjaffray.com/2col_largeright.aspx?id=275 (last visited Sept. 22, 2009) (describing Piper Jaffray‘s use of automated agents for electronic trading); Ivy Schmerken, Algorithms Sweep Dark Books, WALL ST. & TECH., Oct. 24, 2005, http://www.wallstreetandtech.com/advancedtrading/showArticle.jhtml?articleID= 172900007 (describing Credit Suisse‘s Guerilla algorithm, an automated system capable of scanning alternate trading systems). Cf. Hans R. Stoll, Electronic Trading in Stock Markets, 20 J. ECON. PERSP. 153, 172–173 (2006) (suggesting automated stock trading has reduced transaction costs and improved the accuracy of price signals, contributing to an overall increased efficiency of stock markets); Aline van Duyn, Automatic News Make Headlines and Money, FIN. TIMES, Apr. 15, 2007, http://www.ft.com/cms/s/2/ e3988202-eb7c-11dbb290-000b5df10621.html (describing how new automatic systems provide a great opportunity to make money). See generally Guttman & Maes, supra note 2, at 70–90 (providing an early report addressing e-commerce agents); MacKie-Mason & Wellman, supra note 3, at 1413–14 (providing a theoretical analysis of automated markets and trading agents); Feng Yi et al., Two Stock-Trading Agents: Market Making and Technical Analysis, in AGENT MEDIATED ELECTRONIC COMMERCE V: DESIGNING MECHANISMS AND SYSTEMS 18, 18 (Peyman Faratin et al., eds., 2004) (providing an empirical study of automated systems utilized in electronic com-

that ―use intelligent algorithms that respond to market information in real time using short-term predictive signaling techniques to determine the optimal execution timing, trading period, size, price and execution venues, while minimizing market impact.‖5 In the electronic networked market place, agents are both common and busy. Rarely is a contract concluded on the Internet without the deployment of some sort of electronic intermediary.6 II. THE CONTRACTING PROBLEM A traditional statement of the requirements of a legally valid contract: (1) there must be two or more separate and definite parties to the contract; (2) those parties must be in agreement, that is, there must be a consensus ad idem; (3) those parties must intend to create legal relations in the sense that the promises of each side are to be enforceable simply because they are contractual promises;7 (4) the promises of each party must be supported by consideration, i.e. something valuable given in return for the promise.8 These traditional requirements can give rise to difficulties in accounting for contracts reached through artificial agents. Thus, there is a lively doctrinal debate as to how the law should account for contracts that are concluded in this way. Most fundamentally, doctrinal difficulties stem from the requirement that there be two parties involved in contracting: as artificial agents are not currently considered to be legal persons, they are therefore not parties to the contract. Therefore, in the case of a sale brought about by means of an artificial agent, only the buyer and seller can be the relevant parties to the contract. This in turn entails difficulties in satisfying the requirement that the two parties should be in agreement, since in many cases one party will be unaware of the terms of merce); Maes et al., supra note 2, at 81 (providing an early report addressing e-commerce agents); U.S. SECURITIES AND EXCHANGE COMMISSION, TRADING ANALYSIS OF OCTOBER 27 AND 28, 1997 (1998), http://www.sec.gov/news/studies/tradrep.htm (providing an early report on the use of such technology, which discusses ―program trading‖ to automate the decision to place securities trades for large institutional traders); Harish Subramanian et al., Designing Safe, Profitable Automated Stock Trading Agents Using Evolutionary Algorithms, Paper Presented at Proceedings of the Genetic and Evolutionary Computation Conference (July 11, 2006), available at http://www.cs.bham.ac.uk/~wbl/biblio/gecco2006/docs/p1777.pdf (including the results of a research project designed to equip trading agents with evolutionary algorithms). 5. Piper Jaffray, Algorithmic and Program Trading, http://www.piperjaffray.com/ 2col_largeright.aspx?id=275 (last visited Sept. 25, 2009). 6. See generally MICHAEL LUCK ET AL., AGENT TECHNOLOGY: COMPUTING AS INTERACTION (A ROADMAP FOR AGENT-BASED COMPUTING) (Agentlink III 2005) (providing an extensive description of the technologies underwriting the infrastructure for agent systems). 7. This intention is sometimes referred to as the animus contrahendi. BLACK‘S LAW DICTIONARY 97 (8th ed. 2004). 8. 9 HALSBURY‘S LAWS OF ENGLAND ¶ 203, (4th ed. 2005) (paragraph breaks added); c.f. RESTATEMENT (SECOND) OF CONTRACTS, § 3 (1981) (―An agreement is a manifestation of mutual assent on the part of two or more persons. A bargain is an agreement to exchange promises or to exchange a promise for a performance or to exchange performances.‖).

the particular contract entered into by its artificial agent. Furthermore, in relation to the requirement that there should be an intention to form legal relations between the parties, a similar issue arises: if the agent‘s principal is not aware of the particular contract being concluded, how can the required intention be attributed? III. POSSIBLE SOLUTIONS TO THE CONTRACTING PROBLEM A. Closed Systems as a Means of Solving the Contracting Problem In so-called ―closed systems‖, where rules and protocols of interaction are specified in advance, doctrinal difficulties about the formation of contracts by artificial agents can, in theory, be addressed through the use of ordinary contractual terms and conditions, which govern in detail when and how contracts can be formed by electronic means.9 This solution would require at least two separate contracts: the first, ―master‖ or ―umbrella‖ agreement, and the second or subsequent particular contract governing a sale or other transaction (such as the granting of a license for music).10 Besides settling the question of the existence of the particular contract, the terms of such a master agreement could be used for addressing issues such as the time of contract formation, or the choice of governing law or appropriate forum for resolving disputes. Some websites, accordingly, seek to avoid doctrinal problems by requir9. Tom Allen et al., Can Computers Make Contracts?, 9 HARV. J.L. & TECH. 25, 25–52 (1996); Anthony Bellia Jr., Contracting with Electronic Agents, 50 EMORY L.J. 1047, 1063 (2001); Emad Abdel Rahim Dahiyat, Intelligent Agents and Contracts: Is a Conceptual Rethink Imperative?, 15 ARTIFICIAL INTELLIGENCE & L. 375, 375–90 (2007); J. GRIJPINK & CORIEN PRINS, New Rules for Anonymous Electronic Transactions? An Exploration of the Private Law Implications of Digital Anonymity, in DIGITAL ANONYMITY AND THE LAW: TENSIONS AND DIMENSIONS. 249–70 (C. Nicoll, C. Prins, J. E. J. Prins & M.J.M. van Dellen, eds., 2003); Ian R. Kerr, Ensuring the Success of Contract Formation in Agent-Mediated Electronic Commerce, 1 ELECTRONIC COM. RES. 183, 183–202 (2001) [hereinafter Success of Contract Formation]; Jean F. Lerouge, The Use of Electronic Agents Questioned Under Contractual Law: Suggested Solutions on a European and American Level. 18(2) J. MARSHALL J. COMPUTER & INFO. L. 403, 403–33 (2000); Emily M. Weitzenböck, Electronic Agents and the Formation of Contracts, 9(3) INT‘L J. L. & INFO. TECH.204, 204–34. (2001); Weitzenböck, supra note 3, at 83–110; M. Apistola et al., Legal Aspects of Agent Technology (2002), in PROC. 17TH BILETA CONF., Apr. 2002, at 11; Martine Boonk & Arno R. Lodder, “Halt, who goes there?”: On Agents and Conditional access to websites (2006), in PROC. OF THE 21ST BILETA CONF., Apr. 2006; Frances Brazier et al., Can Agents Close Contracts? in PROC. LEA 2003: THE LAW AND ELECTRONIC AGENTS, at 9–20 (2003); Claudia Cevenini, Contracts for the Use of Software Agents in Virtual Enterprises (2003) in PROC. L. & ELECTRONIC AGENTS WORKSHOP, at 21–32 (2003); Burkhard Schafer, It’s Just Not Cricket - RoboCup and Fair Dealing in Contract (2003), in PROC. L. & ELECTRONIC AGENTS WORKSHOP, at 33–46; Samir Chopra, Artificial Agents – Personhood in Law and Philosophy (2004), in 16TH EUR. CONF. 
ON ARTIFICIAL INTELLIGENCE, at 635–39 (2004); Oliver van Haentjens, Shopping Agents and Their Legal Implications Regarding Austrian Law (2002), Paper presented at LEA 2000 ―Workshop on the Law of Electronic Agents,‖ available at http://www.leaonline.net/publications/Vanhaentjens.pdf, at *81; Irene Kafeza et al., Legal Issues in Agents for Electronic Contracting (2005), in PROC. 38TH ANN. HAW INT‘L CONF. ON SYSTEM SCIENCES (HICSS‘05) 5: 134a (2005); Ian R. Kerr, Providing for Autonomous Electronic Devices in the Uniform Electronic Commerce Act, Paper presented at Uniform Law Conference of Canada (1999) [hereinafter Autonomous Electronic Devices]. 10. Such a master agreement – subsidiary agreement structure is familiar in the world of financial markets. E.g., INTERNATIONAL SWAPS AND DERIVATIVES ASSOCIATION, INC., ISDA MASTER AGREEMENT (2002), available at http://www.isda.org/publications/isdamasteragrmnt.html (providing an example form of such agreements).

ing the user to accept that a binding contract will be formed. For example, when using the eBay.com bidding agent, before placing bids, users are warned that the auction will result in a legally binding contract between the highest bidder and the seller.11

An important category of such master contracts is embodied in the rules of various trading organizations (such as stock exchanges).12 Members are typically bound by such rules not only vis-à-vis the organization in question but also vis-à-vis each other.13 Such rules can explicitly attribute electronic messages to relevant member firms, thus validating contracts concluded by electronic means.14

In the reverse case, a website‘s terms and conditions can seek to avoid the doctrinal issues by preventing the creation of a contract that might need to be explained. For example, a number of shopping websites‘ terms and conditions provide:

Your order represents an offer to us to purchase a product which is accepted by us when we send e-mail confirmation to you that we‘ve dispatched that product . . . That acceptance will be complete at the time we send the Dispatch Confirmation E-mail to you.15

Here, the software with which the user interacts does not immediately conclude the contract for the seller. However, this does not mean that, in every case, human clerks are involved. Obviously, the seller‘s artificial agents are capable of sending any required confirmation email (and of pasting an address label to a parcel).16 It appears unlikely, given the huge volume of transactions involved, that human clerks review individual transactions, unless automated alerts are triggered suggesting there are problems with a particular transaction. These strategies introduce another contract that itself needs to be explained. Such contracts are often referred to as ―browse-wrap‖ contracts, by analogy with the ―shrink-wrap‖ contracts formed when users purchase shrink-

11. EBay User Agreement, http://pages.ebay.com/help/policies/user-agreement.html (last visited Sept. 22, 2009). 12. E.g., London Stock Exchange, Rules of the London Stock Exchange (2009), available at http://www.londonstockexchange.com/traders-and-brokers/rules-regulations/rules-lse-2009.pdf (outlining the rules of the stock exchange). 13. See, e.g., id. (including numerous rules regulating the interactions between Exchange members). 14. Id. at 34. 15. These conditions were originally found at Amazon‘s UK website but are no longer found there. Amazon.co.uk, http://amazon.co.uk (last visited Sept. 22, 2009). Similar formulae can be found on a number of shopping websites. See e.g., The National Trust, http://shop.nationaltrust.org.uk/information/termsconditions/3/ (last visited Sept. 22, 2009) (―Your order represents an offer to purchase a product from us acting in our capacity as a Seller . . . .‖); 12 tribe records, http://www.12triberecords.com/termsofsale.aspx (last visited Sept. 22, 2009) (―Your order represents an offer to us to purchase a product which is accepted by us when we send e-mail confirmation . . . . ―). Amazon‘s terms and conditions do, however, specify that prices are only indicative; it seems to follow implicitly that contracts are not formed immediately through its website but are dependent on confirmation of an order by Amazon. See Amazon.co.uk, http://www.amazon.co.uk/ gp/help/customer/display.html?ie=UTF8&nodeId=1040616 (last visited Sept. 22, 2009) (indicating that a small number of products are mispriced and that discrepancies will be resolved before an order is confirmed and shipped). 16. See, e.g., Weitzenböck, supra note 9 (listing some of the many sources discussing artificial agents‘ abilities).

wrapped software containing license conditions.17 In theory, browse-wrap contracts should, doctrinally, be easier to justify, since the terms governing them will usually be static in nature, and the artificial agent communicating the terms to users will have little, or very limited, autonomy in ―tweaking‖ those terms for individual users.18 However, such an appeal to governing conditions in the case of websites is ultimately of limited usefulness, for two reasons. First, the enforceability of browse-wrap contracts is a matter of ongoing controversy.19 Generally speaking, unless the user is obliged to peruse the browse-wrap conditions before transacting, there will be no binding contract,20 and most important internet retailers do not adhere to these conditions.21 Second, whenever an electronic agent tweaks such conditions according, for example, to the geography or language of the user, the same doctrinal questions for the browse-wrap contract, intended to be avoided in the first place (given that the agent‘s operator has no independent knowledge of the particular transaction), will arise.22 For these reasons, and for inherent doctrinal interest, we must explore how other doctrines could justify contracts entered into by artificial agents in the absence of a master agreement. B. Possible Solutions to the Doctrinal Problem in Open Systems The first four potential solutions to the contracting problem in open systems involve minor changes to the law, or suggestions that existing law, perhaps with minor modifications or relaxations, can accommodate the problem. The fifth is more radical and involves treating artificial agents as legal agents without legal personality, while the sixth, the most radical, giving legal personality to artificial agents. The fifth and sixth potential solutions are collectively referred to as the ―agency law approach.‖ In examining the doctrinal rigor and fairness of outcome of each possible solution, we seek to establish a treatment capable of dealing with a wide range of abilities on the part of artificial agents. An appropriate legal doctrine should be sensitive to the rapidly evolving nature of the technologies involved in contracting scenarios. Typically, artificial agents actually deployed today have no discretion in the contracts they finalize; their behavior in response to a particular universe of possible user input is exhaustively specified by the explicit rules the agent applies.23 While agents may make choices about contractual terms, these tend to 17. Mark A. Lemley, Terms of Use, 91 MINN. L. REV. 459, 460, 470–72 (2006); Ronald Mann et al, Just One Click: The Reality of Internet Retail Contracting. 108 COLUM. L. REV. 984, 989–95 (2008). 18. Mann, supra note 17, at 990–91. 19. See id. at 990–93 (discussing the evolving status of this area of law). 20. Specht v. Netscape Commc‘ns Corp., 306 F.3d 17, 20 (2d Cir. 2002); see also Defontes v. Dell Computers Corp., No. C.A. PC 03-2636, 2004 R.I. Super.. LEXIS 32, at *17 (R.I. Super. Ct. Jan. 29, 2004) (applying Specht rule to sale of computers over internet). 21. Mann, supra note 17, at 998. 22. See id. at 1003–04 (discussing battle of the forms problems that arise regarding what is considered an offer and what is considered acceptance). 23. See, e.g., Weitzenböck, supra note 9, at 204–34 (listing some of the many sources discussing artifi-

be rather limited in scope; for instance, an agent might calculate the cost of the item to be bought by reference to rules about the location of the user, postal charges, taxes and so on.24 Most of the doctrinal solutions offered are capable of dealing with artificial agents of this kind. However, as the degree of operational autonomy of a given artificial agent increases, some of the solutions lead to counter-intuitive results in the kinds of situations these agents would act in, while others become more defensible.

Consider an artificial agent that calculates the credit limit to accord to a credit card applicant, referencing factors such as credit history, income, assets, and the like. The rules such an agent applies are embodied in the agent‘s control program; in this sense the agent has no discretion. However, for risk-assessment agents that employ more sophisticated decision-making mechanisms using statistical or probabilistic machine learning algorithms, there may be no strictly binary rules which determine the outcome. It is the combination of relevant factors, and the relative weights the system accords them, that determines the outcome.25 Agent architectures are capable of learning and adapting over time: a credit card risk assessor might learn which users eventually turned out to have bad credit records and adjust the weightings accorded the various factors pertaining to the applicant‘s situation.26 While conceptually such agents are still rule-

cial agents‘ abilities). 24. Id. 25. See J. Galindo & P. Tamayo, Credit Risk Assessment Using Statistical and Machine Learning: Basic Methodology and Risk Modeling Applications, 15 (1–2) COMPUTATIONAL ECON. 107, 110–11 (2000) (discussing approaches to and problems with developing statistical modeling algorithms for financial risk analysis); LEAN YU ET AL., BIO-INSPIRED CREDIT RISK ANALYSIS 105–08 (2008) (discussing limitations and challenges in developing models and algorithms to predict credit risk). 26. Yu, supra note 25, at 105 (―[A]n increasing number of credit scoring models have been developed as a scientific aid to the traditionally heuristic process of credit risk evaluation. Typically, linear discriminant analysis, linear programming, integer programming, k-nearest neighbor, classification tree, artificial neural networks, genetic algorithms, support vector machine. . . [and] neuro-fuzzy systems, were widely applied to credit risk analysis tasks.‖ (citations omitted)). The International Conferences on Autonomous Agents and Multi-Agent Systems, and the Workshops on Agent-Mediated Electronic Commerce are but two of the many forums where state of the art research in the implementation of various theoretical techniques may be found. See Foundations of Multi-Agent Learning: Introduction to the Special Issue, 171 ARTIFICIAL INTELLIGENCE 363–64 (2007) (gathering a collection of papers summarizing current research on cooperating and competing agents capable of a variety of learning techniques). A number of U.S. patents provide for intelligent agents deploying learning techniques. See, e.g., U.S. Patent No. 6,917,926 (filed Jun. 15, 2001) (describing ―[a] method for using machine learning to solve problems‖); U.S. Patent No. 6,751,614 (filed Nov. 9, 2000) (describing ―[a]n informational filtering process designed to sort . . . information, incrementally learning process that learns‖); U.S. Patent No. 6,341,960 (filed Jun.
4, 1999) (describing ―assistance [] based on various intelligent [a]gents which act collaboratively to support learning at different steps‖); U.S. Patent No. 6,038,556 (filed Aug. 31, 1998) (describing ―[a]n autonomous adaptive agent which can learn verbal as well as nonverbal behavior‖); U.S. Patent No. 6,449,603 (filed May 23, 1997) (describing a [s]ystem and method for improving the performance of learning networks such as neural networks, genetic algorithms and decision trees that derive prediction methods from a training set of data‖); U.S. Patent No. 5,852,814 (filed Dec. 10, 1996) (describing ―[a] software agent which performs autonomous learning in a real-world environment‖). See also Charles L. Isbell, Jr. et al., Cobot in LambdaMOO: A Social Statistics Agent, paper presented at the Seventeenth National Conference on Artificial Intelligence, in PROCEEDINGS OF THE SEVENTEENTH NAT‘L CONFERENCE ON ARTIFICIAL INTELLIGENCE (2000) (providing an example of an autonomous learning agent working in the online community LambdaMOO). See generally Wellman (2007), supra note 3 (providing a full survey of strategies by autonomous bidding agents entered in historical Trading Agent Competitions); Wellman (2003), supra note 3 (providing descriptions of autonomous bidding agents); Peter Stone et al., ATTac-2000: An Adaptive Autonomous Bidding Agent 15 J. ARTIFICIAL INTELLIGENCE RESEARCH 189–206 (2001) (describing

bound, heuristically, that is, from the point of view of explaining the agent‘s behavior, it could be said to be exercising discretion as to how much credit to extend. Furthermore, the rules the agent applies evolve in response to learning data and, are not explicitly set out in advance. An agent exercising discretion in a predictable way is consistent with the idea that, had the principal considered the matter herself, she might have reached a different decision. It may be, for example, that a credit-card application agent‘s program takes no account of information about a user known to the principal that would motivate the principal to reach a different decision.27 Of relevance here is the concept of authority found in agency law, which circumscribes the authority the agent has to make decisions on behalf of the principal.28 Implicit in the existence of that authority is the discretion to take decisions the principal herself would not have taken.29 Thus, prima facie, artificial agents, concluding contracts on behalf of the corporations or users that deploy them, function like legal agents. 1. Relax the Specificity of the Intention Requirements Allen and Widdison suggest the most likely route courts would take to grant legal efficacy to contracts entered into by artificial agents would be to relax ―the requirement that the intention of the parties must be referable to the offer and acceptance of the specific agreement in question.‖30 While superficially appealing, this solution leaves too much to be explained. Even if the requisite intention need not be referable to a specific agreement, we still need to understand, in doctrinal terms, how a contract could emerge from an interaction between a user who seeks a particular contract and an operator who has only a ―generalized intent‖ or who, more likely, has nothing resembling an intention, but only sets certain rules or parameters for the agent to follow. For this reason, we discard this solution. 2. Agents as “Mere Tools” Most current approaches to the problem of electronic contracting, includthe principled bidding strategy implemented in an autonomous bidding agent that finished in first place in the 2000 Trading Agent Competition); Peter Stone et al., The First International Trading Agent Competition: Autonomous Bidding Agents, 5 ELECTRONIC COMMERCE RESEARCH 229–65 (2005) (discussing a competition wherein its entrants were challenged to design an automated trading agent that was capable of bidding in simultaneous online auctions for complementary and substitutable goods). 27. For instance the applicant might be a director of an important client. The principal might wish to treat this person more favorably than an ordinary applicant in making the decision whether to extend credit. 28. See RESTATEMENT (THIRD) OF AGENCY § 1.01 (2006) (―Agency is the fiduciary relationship that arises when one person (a ―principal‖) manifests assent to another person (an ―agent‖) that the agent shall act on the principal‘s behalf and subject to the principal‘s control, and the agent manifests assent or otherwise consents so to act.‖). 29. See RESTATEMENT (THIRD) OF AGENCY § 2.01 cmt. c, illus. 6 (2006) (―If a principal states directions to an agent in general or open-ended terms, the agent will have to exercise discretion in determining the specific acts to be performed to a greater degree than if the principal‘s statement specifies in detail what the agent should do. 
It should be foreseeable to the principal that an agent‘s exercise of discretion may not result in the decision the principal would make individually.‖). 30. Allen, supra note 9, at 52.
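To make concrete the contrast drawn above between an exhaustively rule-bound agent and a weighted, learning risk assessor, the following is a minimal illustrative sketch in Python. All factor names, weights, figures, and the update rule are hypothetical and are not drawn from any deployed system; the point is only that no single binary rule fixes the outcome, and that the weightings evolve with the data the agent observes rather than being set out in advance by the principal.

```python
import math

class CreditLimitAssessor:
    """Illustrative weighted-factor credit-limit agent; all names and figures are hypothetical."""

    def __init__(self):
        # Relative weights accorded to each factor; there is no explicit
        # "if income > X then limit = Y" rule anywhere in the program.
        self.weights = {"income": 0.4, "years_employed": 0.2,
                        "prior_defaults": -1.5, "debt_ratio": -0.8}
        self.bias = 0.0
        self.learning_rate = 0.05

    def score(self, applicant):
        # Combine the factors into a single creditworthiness score in (0, 1).
        z = self.bias + sum(w * applicant[f] for f, w in self.weights.items())
        return 1.0 / (1.0 + math.exp(-z))

    def decide_limit(self, applicant):
        # The agent's "discretion": map the learned score onto a credit limit.
        return round(10000 * self.score(applicant), -2)

    def learn(self, applicant, defaulted):
        # Adjust the weightings after observing whether the applicant
        # eventually turned out to have a bad credit record.
        target = 0.0 if defaulted else 1.0
        error = target - self.score(applicant)
        for f in self.weights:
            self.weights[f] += self.learning_rate * error * applicant[f]
        self.bias += self.learning_rate * error

# The principal never reviews the individual decision:
agent = CreditLimitAssessor()
applicant = {"income": 1.2, "years_employed": 0.5,
             "prior_defaults": 0.0, "debt_ratio": 0.3}
limit = agent.decide_limit(applicant)    # offer a credit limit to this applicant
agent.learn(applicant, defaulted=False)  # later, adjust the weights from the outcome
```

Nothing turns on the particular figures chosen; what matters for the doctrinal argument is that the outcome in any individual case is fixed by the combination and relative weighting of factors, and that those weights are themselves revised by the agent rather than specified by the principal.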

ing legislative solutions such as the U.S. Uniform Computer Information Transactions Act (―U.S. UCITA‖), treat artificial agents as ―mere tools of communication‖ of their principals.31 All actions of artificial agents are attributed to the agent‘s principal (whether an operator or a user), whether or not they are intended, predicted, or mistaken.32 On this basis, contracts entered into through an artificial agent always bind the principal, simply because all acts of the artificial agent are treated as acts of the principal.33 This is a stricter liability principle than that which applies to human agents and their principals, where the doctrine of authority limits those actions for which the principal is required to take responsibility. Such a liability principle might not be fair to operators and users who could not ―anticipate the contractual behavior of the agent in all possible circumstances‖ and could not reasonably be said to be assenting to every contract entered into by the agent.34 An obvious example where such a rule would be unfair is where design flaws or software bugs in the artificial agent caused it to malfunction. Extensive legal change would not be required in order to implement this approach; it appears commonsensical and captures intuitions that reflect the present limited capacity of agents to act autonomously.35 However, there is good reason to think this approach will not lead to satisfactory results in all cases. First, the approach involves an unsatisfactory legal fiction by which an agent is treated as a mere tool of communication of the principal. In many realistic settings, such as that of a modern shopping website, the principal cannot be said to have a pre-existing ―intention‖ in respect of a particular contract which is ―communicated‖ to the user. Most likely, in the case of a human principal, the principal has knowledge only of the rules the artificial agent adopts. If the agent‘s price list is downloaded from another source or determined by reference to prices quoted by a third party or at a reference market,

31. UNIF. COMPUTER INFO.TRANSACTIONS ACT § 102(27) (amended 2002), 7 U.L.A. 212 (2009), (―‗Electronic agent‘ means a computer program, or electronic or other automated means, used by a person to initiate an action, or to respond to electronic messages or performances, on the person‘s behalf without review or action by an individual at the time of the action or response to the message or performance.‖). 32. See UNIF. COMPUTER INFO. TRANSACTIONS ACT § 206(a) (amended 2002), 7 U.L.A. 305 (2009) (―A contract may be formed by the interaction of electronic agents. If the interaction results in the electronic agents‘ engaging in operations that under the circumstances indicate acceptance of an offer, a contract is formed, but a court may grant appropriate relief if the operations resulted from fraud, electronic mistake, or the like.‖); cf. UNIF. COMPUTER INFO. TRANSACTIONS ACT § 214(b) (amended 2002), 7 U.L.A. 326 (2009) (―In an automated transaction, a consumer is not bound by an electronic message that the consumer did not intend and which was caused by an electronic error, if the consumer: (1) promptly on learning of the error: (A) notifies the other party of the error; and (B) causes delivery to the other party or, pursuant to reasonable instructions received from the other party, delivers to another person or destroys all copies of the information; and (2) has not used, or received any benefit or value from, the information or caused the information or benefit to be made available to a third party.‖). 33. Lerouge, supra note 9, at 417; Emily M. Weitzenböck, Electronic Agents and the Formation of Contracts, 9 INT‘L J. L. & INFO. TECH. 204, 215–218 (2001). 34. Giovanni Sartor, Agents in Cyberlaw, Presentation at the Workshop on the Law of Electronic Agents (2002), available at http://www.cirfid.unibo.it/~agsw/lea02/pp/Sartor.pdf. 35. Just how limited this capacity is taken to be is evident in the fact that this liability principle would be the same as that applies to owners of cars. Lerouge, supra note 9, at 422; Francisco Andrade et al., Contracting Agents: Legal Personality and Representation, 15 ARTIFICIAL INTELLIGENCE & L. 357–73, 361 (2007).

the principal will not necessarily even be aware of the prices of goods for sale. In the case of a corporate principal, the human agents of the corporation might rely on the artificial agent to know the applicable prices and terms. In such a setting, the artificial agent, far from being a mere tool of communication of the principal, is how the principal‘s intention is constituted with respect to particular users. Treating an agent as a mere tool of communication simply does not capture the essence of such transactions. Second, wherever there is discretion on the part of the agent, there is the possibility of communicating something on behalf of the principal, which, had the principal had the opportunity to review the communication, would not have been uttered. In such circumstances, to treat the agent as a mere tool of communication of the principal is an unsatisfactory legal fiction. In some circumstances, it should be obvious to a user that the artificial agent is acting in a faulty manner in making an offer to enter a particular contract. Intuitively, it appears unfair to bind the principal to a contract which the user would not believe the principal would ever assent to. 3. The Unilateral Offer Doctrine The third possible solution, which addresses requirements (2) and (3) above, deploys the unilateral offer doctrine of contract law. Under this doctrine, contracts can be formed by a party‘s unilateral offer addressed to the whole world, together with assent or acceptance, in the form of conduct stipulated in the offer, by the other party.36 Competitions open to the public, and terms and conditions of entry to premises, are among the most common examples of unilateral contracts. The theory of unilateral contract underwrites the shrink-wrap and ―clickwrap‖ licenses under which most commercial consumer software is delivered today.37 A shrink-wrap contract relates to the purchase of software in material form (usually, on a CD or DVD). The license agreement generally explains that if the purchaser does not wish to enter into a contract, he or she must return the product for a refund, and that failure to return it within a certain period will constitute assent to the license terms.38 Courts have generally held these licenses to be enforceable.39 Similarly, a click-wrap license draws the user‘s 36. See, e.g., Carlill v. Carbolic Smoke Ball Co., (1893) 1 Q.B.D. 256, 256 (A.C.) (concluding that defendant was bound to a contract when defendant offered to pay a reward if anyone contracted influenza after using their product); Great N. Ry. v. Witham, (1873) 9 L.R.C.P. 16, 16 (U.K.) (holding that there was sufficient consideration for defendant to supply goods to plaintiff, yet no obligation for plaintiff to order). See also BRIAN A. BLUM, CONTRACTS: EXAMPLES AND EXPLANATIONS 79–85 (Aspen Publishers 4th ed. 2007) (explaining unilateral contracts through offer and acceptance). 37. David A. Einhorn, Shrink-Wrap Licenses: The Debate Continues, 38 IDEA: JOURNAL OF LAW & TECH. 383, 387–88 (1998). 38. Specht v. Netscape Commc‘ns Corp., 306 F.3d 17, 32 (2d Cir. 2002). 39. See, e.g., ProCD, Inc. v. Zeidenberg, 86 F.3d 1447, 1450 (7th Cir. 1996); Hill v. Gateway 2000, Inc., 105 F.3d 1147 (7th Cir. 1997), cert. denied, 522 U.S. 808 (1997); See also Novell, Inc. v. Network Trade Ctr., Inc., 25 F. Supp. 2d 1218, 1230−31 (D. Utah 1997), vacated in part, 187 F.R.D. 657 (D. 
Utah 1999)(holding that shrinkwrap license included with the software is invalid as against a purchaser in regards to maintaining title to the software in the copyright owner). But see Morgan Labs., Inc. v. Micro Data Base Sys., Inc., No. C96-3998TEH, 1997 WL 258886, at 4 (N.D. Cal. Jan 22, 1997) (refusing to allow a shrinkwrap license to

attention to the license conditions, and requires him or her to assent to those conditions by clicking on a button. These agreements are, in principle, enforceable, as the clicking is considered the conduct specified in the offer by which the offer is accepted and the user‘s assent to the license terms is manifested.40 The theory of unilateral contract has also been used to explain contracts made by machines. The case of a parking ticket machine has been analyzed as follows: The offer is made when the proprietor of the machine holds it out as being ready to receive the money. The acceptance takes place when the customer puts his money into the slot. The terms of the offer are contained in the notice placed on or near the machine stating what is offered for the money.41 This theory can also be used to explain contract formation in the familiar vending machine case, where often the only terms displayed are the price of the item.42 Applying the theory by analogy to artificial agents presents no difficulties where the artificial agent merely communicates a set of pre-existing conditions specified by the operator to the user (for example, by displaying them on a website). In such a case, the artificial agent acts as a mere instrument of communication between operator and user, analogous to the notice in the parkingticket case. Note that in the parking ticket machine example the seller does not assent to the terms of every particular contract, since the seller is unaware of each particular sale.43 Rather, the doctrine allows a general offer to enter contracts of a particular kind, communicated to the offeree, to be translated into contracts that satisfy the doctrinal conditions (2) and (3) for agreement ad idem and intention to contract.44 In the parking ticket machine example where the parking is paid for in advance, parkers might, depending on the number of coins introduced, effectively choose their permitted parking time. Each parking contract would then contain a different term as to duration. As long as the machine operator has assented to a rule for determining the terms of particular contracts, which is notified in advance to users (i.e., a rate or rate(s) by which parkers are charged), there is no need for the operator to turn his or her mind to all possible durations.45 But, where the artificial agent plays a role in determining the content of the offer for a particular contract, it is not clear the unilateral offer doctrine can

modify a prior signed contract). Cf. Klocek v. Gateway, Inc., 104 F.Supp.2d 1332,1340–41 (D. Kan. 2000) (court stating that Gateway provided no evidence that at the time of the sales transaction, it informed plaintiff that the transaction was conditioned on plaintiff‘s acceptance of the Standard Terms). 40. ProCD, Inc., 86 F.3d at 1450. 41. Thornton v. Shoe Lane Parking Ltd., (1971) 2 Q.B. 163, 169. 42. Sharon Christensen, Formation of Contracts by Email: Is it just the same as the post?, 1 QUEENSLAND UNIV. TECH. L. & JUST. J. 22–38 (2001). 43. Shoe Lane Parking Ltd., 2 Q.B. at 169. 44. Id. 45. Id. at 169–70.

explain contract formation.46 Many shopping websites do not communicate terms and conditions of all possible contracts in advance (unlike the parking machine example). For a particular transaction, the total combination of factors such as price, shipping charges, sales tax, or special discounts is often only revealed when the user is close to the ―checkout‖ phase of finalizing the transaction, and parameters such as the user‘s means of payment and geographical location have been determined by the agent. It is possible for the contractual terms or ―boilerplate‖ itself to be adjusted to reflect the location of the user, for instance so as to reflect language requirements or consumer law requirements of the user‘s home jurisdiction. It is likewise easy to imagine a shopping website agent that gives discounts to frequent shoppers akin to loyalty schemes in non-internet retailing. Unlike the parking machine example, which involves a simple, unchanging offer communicated directly by the principal, the usual shopping website setting involves a case of an offer communicated by the agent on behalf of the principal in response to the particular circumstances of the user.47 While the doctrine of unilateral contract is useful and indeed necessary to understand how the offer leads to a contract by acceptance on the part of the user, it cannot explain the attribution of the offer to the principal, in the absence of a master agreement. And while master agreements can be formed by unilateral means, they have limited efficacy in curing doctrinal problems, as we have already seen. 4. The Objective Theory of Contractual Intention The fourth potential solution is to deploy the objective theory of contractual intention, on which a contract is an obligation attached by the force of law to certain acts of the parties, usually words, which accompany and represent a known intent.48 The party‘s subjective assent is not necessary to make a contract; the ―manifestation of intention to agree,‖ judged according to a standard of reasonableness, is sufficient, and the real but unexpressed state of the first party‘s mind is irrelevant.49 It is enough the other party had reason to believe the first party intended to agree.50 Applying the doctrine to contracts made by artificial agents, the actions of the agent would constitute the ―manifestation of intention to agree‖ referred to; that the principal‘s intent was not directed to a particular contract at the time it was entered into is irrelevant. Kerr suggests this doctrine has limitations and fails when: an offer can be said to be initiated by the electronic device autonomously, i.e., in a manner unknown or unpredicted by the party em46. Great Northern Railway. v. Witham, (1873) LR 9 C.P.D. 16,19; Carlill v. Carbolic Smoke Ball Company (1893), 1 QB 256, 258–59. 47. Sartor, supra note 34, at 6. 48. Judge Learned Hand summarized the objective theory of contracts in Hotchkiss v. National City Bank, 200 F. 287, 290–91 (S.D.N.Y. 1911). 49. Id. at 293. 50. Id.

ploying the electronic device. Here it cannot be said that the party employing the electronic device has conducted himself such that a reasonable person would believe that he or she was assenting to the terms proposed by the other party.51 However, we reject the relevance of the proposed distinction between the normal behavior of a shopping-website agent and the kind of autonomous behavior referred to. Even in a relatively straightforward shopping-website agent example, without anything resembling autonomous behavior, a ―reasonable person‖ would not plausibly believe the principal ―was assenting to the terms proposed by the other party.‖ At most, the reasonable person might believe the principal, if she turned her mind to the contract in question, would agree to it. This is equivalent to modifying the doctrine to refer to the principal‘s assent to a class of possible contracts of the same kind as the particular contract entered into. Despite the possibility of amending the doctrine in this fashion, deploying the objective theory in this context makes a category mistake. The objective theory refers to the relationship between actions and intentions of a single actor: if those actions are such as conventionally to express a particular intention, that intention will be attributed to an actor employing those actions even if on a particular occasion the actor does not have those intentions. In order to explain contracts entered into by artificial agents, the theory seeks to provide a doctrinal bridge between the actions of the agent in manifesting assent, and the (presumed) intention of the principal referred to by those actions; it seeks to link the intentions of the principal with the actions of the agent. But unlike in the case of a single actor, in the artificial agent case the principal will never, or almost never, have a specific intention referable to a particular contract. The objective theory therefore is quite different in kind when deployed in this kind of instance. Furthermore, seeking to relate the manifestation of intent on the part of the agent to an actual ―contractual intention‖ of the principal, whether to a particular contract or a class of contracts, is misguided. The artificial agent is the means by which the principal both forms intentions with respect to particular contracts and expresses those intentions. For it is normally the artificial agent, rather than the principal, that is the best source of information as to particular contracts. In the case of a shopping website the artificial agent operating the website, or its associated ―backend‖ databases, rather than any human agent of the principal, is the most authoritative source of information on the principal‘s contractual relations. Further, it is possible for an artificial agent to be programmed so as to autonomously change the contractual terms offered to counterparties, e.g. to reflect changes in market prices, without referring back to the principal.52 Reference to the principal‘s intentions, whether in respect of a par51. Kerr, supra note 9, at 23. 52. See Jeffrey O. Kephart et al., Dynamic Pricing by Software Agents, 32 COMPUTER NETWORKS 731, 735 (2000) (discussing the use of ‗pricebots‘ to meet the increasing demands of responsive pricing); Joan Morris DiMicco et al., Dynamic Pricing Strategies under a Finite Time Horizon, in Proceedings of the 3rd ACM Conference on Electronic Commerce 95, 95 (2001) (suggesting the use of dynamic pricing algorithms to predict buyer behavior and develop a pricing strategy).
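The point that the agent, rather than the principal, constitutes the offer for each particular transaction can be illustrated with a sketch of the kind of pricing logic discussed in this Part. The following Python fragment is purely illustrative: the rates, the loyalty discount, the margin, and the market-feed interface are all hypothetical. It assembles the terms of a particular offer only at the ―checkout‖ phase, from the user's location, purchase history, and a price drawn from a reference market, without the principal reviewing the individual offer.

```python
class MarketFeed:
    """Hypothetical stand-in for a third-party reference price source."""
    def __init__(self, quotes):
        self.quotes = quotes

    def latest_quote(self, sku):
        return self.quotes[sku]

# Hypothetical rules the operator has assented to in advance:
TAX_RATES = {"NY": 0.08875, "CA": 0.0725, "UK": 0.20}
SHIPPING = {"domestic": 4.99, "international": 14.99}
MARGIN = 1.10            # 10% over the reference market price
LOYALTY_DISCOUNT = 0.05  # for frequent shoppers

def offer_terms(sku, user, feed):
    # The price is not fixed by the principal in advance; it is derived
    # from the reference market at the moment of the transaction.
    price = feed.latest_quote(sku) * MARGIN
    shipping = SHIPPING["domestic" if user["country"] == "US" else "international"]
    tax = price * TAX_RATES.get(user["region"], 0.0)
    discount = LOYALTY_DISCOUNT * price if user["orders_this_year"] >= 10 else 0.0
    return {
        "sku": sku,
        "price": round(price, 2),
        "shipping": shipping,
        "tax": round(tax, 2),
        "total": round(price + shipping + tax - discount, 2),
        # Even the boilerplate can be adjusted to the user's location:
        "terms_language": "en" if user["country"] in ("US", "UK")
                          else user["preferred_language"],
    }

feed = MarketFeed({"widget-1": 20.00})
user = {"country": "US", "region": "NY",
        "orders_this_year": 12, "preferred_language": "en"}
offer = offer_terms("widget-1", user, feed)  # the terms of this particular offer
```

Again, the specific rules are beside the point; what matters is that the operator assents only to the rules, while the terms of any particular contract are settled by the agent in response to the particular circumstances of the user and of the market.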

ticular contract or a class of contracts, adds nothing to the description of the situation. It is the agent‘s intentions, as revealed by its actions, which are the salient ones. The next two solutions to the contracting problem give legal form to this insight by abandoning the search for a link between the actions of the agent and the intentions of the principal, and focus on the attribution of the contracts entered into by the agent to the principal by means of the doctrines of agency law. We refer to these solutions collectively as the agency law approach. 5. Artificial Agent as Legal Agent without Legal Personality The ―unilateral offer‖ or ―objective theory‖ doctrines should not, then, be used as attribution rules, whereby the actions of an artificial agent can be attributed to its principal. In current law relevant to human agents, this doctrinal work is done by the law of agency, which is intimately connected with the notion of the agent‘s authority to act, rather than by a unilateral offer doctrine or the objective theory. Under the law of agency, legal acts committed by the agent on behalf of the principal, within the scope of the agent‘s actual authority, such as entering a contract or giving or receiving a notice, become the acts of the principal.53 The doctrine extends to cases of apparent authority where the agent has no actual authority, but where the principal permits third parties to believe he or she has authority.54 The central doctrine of agency law is the complex and multifaceted concept of authority: In respect of the acts which the principal expressly or impliedly consents that the agent shall so do on the principal‘s behalf, the agent is said to have authority to act; and this authority constitutes a power to affect the principal‘s legal relations with third parties. . . . Where the agent‘s authority results from a manifestation of consent that he or she should represent or act for the principal expressly or impliedly made by the principal to the agent himself, the authority is called actual authority, express or implied. But the agent may also have authority resulting from such a manifestation made by the principal to a third party; such authority is called apparent authority.55 Since there is no need, on this formulation, for both actual and apparent authority, the notion of apparent authority dispenses with the need for a manifestation of assent by the principal to the agent. In the case of artificial agents this avoids any need to invoke the agent‘s consent to such a manifestation. There is thus sufficient authority by reason of the conduct of the principal in establishing the agent and providing it with the means of entering into contracts with third parties. However, it might be convenient to establish a work53. RESTATEMENT (THIRD) OF AGENCY § 2.01 (2006). 54. RESTATEMENT (THIRD) OF AGENCY § 2.03 (2006). 55. F WILLIAM BOWSTEAD & FRANCIS M.B. REYNOLDS, BOWSTEAD AND REYNOLDS ON AGENCY Art. 13 (17th ed. 2001).

However, it might be convenient to establish a workable concept of actual authority for an artificial agent, to accompany the apparent authority that the agent has by virtue of the principal‘s conduct vis-à-vis third parties. One motivation for this could be to align the treatment in the common-law world with that in civil-law codes, where only a contract between the agent and the principal is sufficient to confer authority.56 In doing so the law could find an appropriate analog for the agreement between agent and principal that is the usual source of actual authority, either in the form of the agent‘s programmed instructions, or perhaps in the form of a mandate or charter of rules that specify the scope of the agent‘s activities and the breadth of its discretion. That mandate might even be expressed in the form of a contract between the agent and the principal; however, as a contract is a legally binding instrument, this would require the artificial agent to be a legal person, a possibility dealt with below.

Although the term ―slave‖ is obviously an emotive one, treating an artificial agent as a slave would merely entail treating an artificial agent as having authority to enter contracts on behalf of its operator, but without itself having legal personality. As Kerr notes, ―the problem of intermediaries in commercial transactions is by no means novel,‖ for Roman law, in dealing with slaves, had to deal with legal complexities akin to ours.57 Like artificial agents, Roman slaves were skilful, and often engaged in commercial tasks on the direction of their masters. They were not recognized as legal persons by the ius civile, or civil law, and therefore lacked the power to sue in their own names. But Roman slaves were enabled, by a variety of legal stipulations, to enter into contracts on behalf of their masters.58 These could only be enforced through their masters, but, nevertheless, slaves had the capacity to bind a third party on their master‘s behalf. From this, Kerr concludes that ―[i]f . . . electronic commerce falls mainly in the hands of intelligent agent technology, the electronic slave metaphor could turn out to be more instructive than typical metaphors . . . such as the ‗personal digital assistant.‘‖59 So, artificial agents, because of their possible sophistication and ―high levels of autonomy and intelligence,‖ might require treatment as ―intermediaries rather than as mere instruments.‖60

56. Sartor, supra note 34, at 20.
57. John Fischer, Computers as Agents: A Proposed Approach to Revised U.C.C. Article 2, 72 IND. L.J. 545, 557 (1997); Kerr, supra note 9, at 26; Stephen T. Middlebrook & John Muller, Thoughts on Bots: The Emerging Law of Electronic Agents, 56 BUS. LAW. 341 (2000).
58. Kerr, supra note 9. Cf. A. Leon Higginbotham, Jr. & Barbara K. Kopytoff, Property First, Humanity Second: The Recognition of the Slave’s Human Nature in Virginia Civil Law, 50 OHIO ST. L.J. 511, 517–18 (1989) (discussing the implicit recognition of slaves as legal agents of their owners by pre-Civil War Virginia state law). The relevant cases here include Gore v. Buzzard’s Adm’rs, 31 Va. (4 Leigh) 231 (1833), and Randolph v. Hill, 34 Va. (7 Leigh) 383 (1836). In commenting on Randolph, Higginbotham and Kopytoff note: ―The automatic acceptance of the slave‘s agency was a recognition of his peculiarly human qualities of expertise, judgment, and reliability, which allowed owners to undertake dangerous and difficult work with a labor force composed mainly of slaves. Far from conflicting with the owner‘s rights of property, such recognition of the humanity of the slave allowed owners to use their human property in the most profitable ways.‖
59. Kerr, supra note 9.
60. Id.


What motivates this suggestion by Kerr is not concern for the artificial agents, for ―[t]he aim of doing so is not to confer rights or duties upon those devices‖; it is, instead, ―the first step in the development of a more sophisticated and appropriate legal mechanism that would allow persons interacting through an intermediary to be absolved of liability under certain circumstances.‖61

But this power to bind both parties was not symmetrical in ancient Rome. A master was only bound to the third party if the slave had prior authority to enter into the contract on his behalf. If the master had not granted such authority to his slave, then he could escape liability.62 This situation was ripe for exploitation: masters could bind third parties without binding themselves by sending slaves to make contracts to which they had not given prior authority. Third parties were unjustly treated in such an arrangement.63 One possible amendment to the Roman law of slavery, then, would be to import or overlay the modern law of agency, with its sophisticated and flexible notions of real and apparent authority, onto the basic Roman-law idea of an intelligent non-person actor with legal capacity to bind its principal.

Modern law already contains a somewhat analogous treatment of children. While lacking contractual capacity to bind themselves to many categories of contract, children do have the capacity to make contracts binding on their parents or guardians.64 Full contractual capacity, therefore, is not a prerequisite for capacity as an agent.65

6. Artificial Agent as Legal Person

The sixth potential solution to the contracting problem would be to treat artificial agents as legal agents that have legal personality and contracting capacity in their own right.66 While a contracting agent need not, historically, be treated as a legal person to be effectual as such, treating artificial agents as legal persons in the contracting context would provide a complete parallel with the situation of human agents.67 Such a move has distinct advantages.

61. Id.
62. Id.
63. Id.
64. See RESTATEMENT (THIRD) OF AGENCY § 3.05 cmt. b, illus. 1 (2006) (demonstrating a minor‘s capacity to act as an agent for his or her parent); see also C. CIV. art. 1990 (establishing the capacity of a non-emancipated minor to act as an agent under the Civil Code of France).
65. See Thrifty-Tel, Inc. v. Bezenek, 54 Cal. Rptr. 2d 468, 474 (Cal. Ct. App. 1996) (holding that Thrifty-Tel‘s computerized network was an ―agent or legal equivalent‖ and that its reliance on a misrepresentation made to it could be attributed to its principal).
66. For discussions of the implementation of such a solution and legal issues involved, see Allen, supra note 9; Andrade, supra note 35; Woodrow Barfield, Issues of Law for Software Agents Within Virtual Environments, 14 PRESENCE: TELEOPERATORS AND VIRTUAL ENV‘TS 741–48 (2005); Federica De Miglio et al., Electronic Agents and the Law of Agency, 2002 PROC. OF THE WORKSHOP ON THE LAW OF ELECTRONIC AGENTS, available at http://www.lea-online.net/publications/DemiglioOnidaRomanoSantoro.pdf; Kerr, supra note 9; Steffen Wettig & Eberhard Zehendner, The Electronic Agent: A Legal Personality Under German Law?, 2003 PROC. OF THE LEA 2003: THE LAW AND ELECTRONIC AGENTS 97–113, available at http://www.lea-online.net/publications/Paper_8_Wettig.pdf.
67. Chopra, supra note 9, at 637.


Firstly, it would ―solve the question of consent and of validity of declarations and contracts enacted or concluded by electronic agents‖ with minimal impact on ―legal theories about consent and declaration, contractual freedom, and conclusion of contracts.‖68 Secondly, it would reassure the principals of agents because, via consideration of the agent‘s liability, it would limit their responsibility for the agent‘s behavior.69 Thirdly, if the agent acted beyond its actual or apparent authority in entering a contract, so that the principal was not bound by it, the disappointed third party would gain added protection: she could sue the agent for breach of the agent‘s so-called ―warranty of authority‖ (the implied promise by the agent to the third party dealing with the agent that the agent has the authority of the principal to enter the transaction).70 Where the agent is a non-person, no such legal action is open.

The type of legal personality involved, whether like that accorded to corporations or like that accorded to mature human beings, would depend on the abilities of the agent.71 In the former case, the agent would always need to act through the auspices of another person (presumably its operator) acting as a type of ―guardian.‖ This other person would make the necessary decisions with respect to any legal action brought by third parties against the agent. A full consideration of the question of legal personhood for artificial agents requires careful treatment of many fundamental legal questions; we defer such investigation for now.72 For present purposes, the solution is not necessary to solve the contracting problem.

7. Risk Allocation Implications of the Proposed Solutions

One motivator for a choice among the various possible solutions offered above would be the desire to reach the most efficient allocation of the risk of unpredictable, erratic or otherwise erroneous activity of artificial agents among the various parties involved. According to the neo-classical economic theory of law,73 this risk should generally fall on the person best able to control it, i.e., the person able to avoid the cost of an error on the part of the agent at the least cost:

The least-cost avoider principle . . . asks which party has the lower cost of avoiding harm, and assigns liability to that party. This reduces the incentive for the other party to take care . . . but the principle has wide application and is simple enough for judge and jury to use. . . . [N]ot only does it have desirable efficiency properties, encouraging the parties to take some but not excessive precautions ex ante, but it also accords with common ideas of fairness, putting the loss on the party that most people think deserves it.74

68. Andrade, supra note 35, at 366.
69. Id.; Sartor, supra note 34, at 7–8.
70. RESTATEMENT (THIRD) OF AGENCY § 6.10 (2006).
71. Andrade, supra note 35, at 362–63.
72. Some preliminary considerations are offered in Chopra, supra note 9, at 635–39; a more detailed treatment will be available in SAMIR CHOPRA & LAURENCE WHITE, ARTIFICIAL AGENTS AND THE LAW (University of Michigan Press) (forthcoming) (working title).
73. See generally RICHARD POSNER, ECONOMIC ANALYSIS OF LAW (Aspen Publishers 7th ed., 2007) (providing broad topical coverage of economic analysis of various legal concepts); Eric Rasmusen, Agency Law and Contract Formation, 6 AM. L. & ECON. REV. 369 (2004) (addressing through an economic approach the problem of an agent making mistaken contracts on behalf of his principal with a third party).


We focus here on potential losses resulting from the agent erroneously attempting to enter a contract that the principal would not have entered had the principal acted personally. The loss to the principal, assuming the legal system enforces the contract, can be measured as the (negative) net present value of the contract to the principal, i.e., the amount the principal would be prepared to pay to be relieved of the contract. If the legal system, on the other hand, refuses to enforce the contract, the loss to the third party can be measured as the net present value of the contract to the third party, i.e., the amount the third party would be prepared to pay to have the contract enforced.

We distinguish the risk of three kinds of error relevant to such potential losses:

Specification errors: the agent applies the rules the principal specifies, but the rules are not specified carefully enough;

Induction errors: a discretionary agent incorrectly inducts, from contracts to which the principal has no objection, to a contract the principal does object to;

Malfunction errors: software or hardware problems whereby the principal‘s rules or parameters for the agent do not result in the intended outcome.75

The risk allocation consequences of the various solutions considered will differ according to whether the transaction in question is one where the principal is the agent‘s operator, or one where the principal is a user of the agent; we deal with each of these cases in turn.

a. Operator as Principal (Amazon.com example)

An example of the first kind of transaction is where the principal is the operator of a shopping website (such as Amazon.com), the agent is the website interface and backend, and the third party is a user shopping on the website (FIGURE 1). The contract is formed between the principal and the third party. The operator relationship is shown by the shaded oval; arrowheads point to the parties to the contract.

74. Rasmusen, supra note 52, at 380.
75. See Malcolm Bain & Brian Subirana, Legalising Autonomous Shopping Agent Processes, 19(5) COMPUTER L. & SECURITY REP. 375 (2003) (comparing Bain and Subirana‘s similar quadripartite scheme: unexpected acts (through agent learning and autonomy); mistake (human or agent mistakes – either in programming or parameterisation); system failures (machine faults – power surges, etc.); exterior interference (viruses and other damaging acts)).

[Figure 1 diagram: Principal, Agent, Third Party]

FIGURE 1: OPERATOR AS PRINCIPAL (AMAZON.COM EXAMPLE)

Where the principal is the agent‘s operator, generally speaking, specification and induction errors will be less obvious to third parties than to principals, and therefore the principal will normally be the least-cost avoider of the loss. Where, for example, because of a specification or induction error, a book is advertised very cheaply, the third party may simply understand the price to be a ―loss leader‖ rather than the result of an error. In such cases the principal/operator is in a better position to monitor the ongoing conformity of the agent with her own intentions than the third party, simply because the third party has no privileged access to the principal‘s intentions. In the case of malfunction, it may be obvious to the third party, because of other indications, that a particular price is the result of error. Even leaving aside other indications, a malfunction will often result in a grosser and hence more obvious error (say, a price of $1.99 instead of $199.00) than a specification or induction error. Therefore, often, the least-cost avoider of malfunction errors will be the third party.

Under the ―agent as mere tool‖ solution, the principal would be liable for all three types of error in all cases, since the agent is understood to be merely a tool. This approach would not be efficient in those cases where the third party is the least-cost avoider of the risk, in particular many cases of malfunction error.

Under the ―unilateral offer‖ solution, the principal would bear the risk of all three types of error unless it could be shown the unilateral offer relating to the artificial agent contained an express or implied limitation as to liability for erroneous behavior of the agent. Generally, the conditions for implying a limitation into a contract are very strict,76 and such a limitation would likely not qualify for implication. Hence, absent any express limitation of liability, the solution would misallocate the risk of errors where the third party is the least-cost avoider, such as many malfunction errors in particular.

76. In U.S. law, courts will only read limitations into the contract if there is a clear demonstration that such terms could be supplied by reasonable inference. The standard for determining whether a term is implicit in a contractual offer is not whether the offeror would have been wise to make the term explicit, but rather whether the offeree should know under the circumstances the term is a part of the offer. E.g., In re Briggs, 143 B.R. 438 (Bankr. E.D. Mich. 1992). In English law, the contractual term needs, inter alia, to ―be so obvious as to go without saying.‖ Shirlaw v. Southern Foundries (1926), Ltd., (1939) 2 K.B. 206, 227. The doctrine of implied terms is part of a broader theoretical framework relevant to the interpretation and construction of contracts. Blum, supra note 36.
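The difference between a specification error and a malfunction error, and why the latter will often be the more obvious of the two to a third party, can be made concrete with a small sketch. The code below is purely illustrative: the function names, discount rule, and figures are invented for this example and are not drawn from any actual shopping site.

# Illustrative sketch only: two ways a hypothetical pricing agent can misprice
# a book that the principal intended to sell for $199.00.

INTENDED_PRICE = 199.00

def quote_with_specification_error(list_price: float) -> float:
    # Specification error: the operator's pricing rule is stated too loosely
    # (a blanket 20% discount), so the agent quotes a plausible-looking price
    # the principal would not in fact have chosen for this title.
    return round(list_price * 0.80, 2)

def quote_with_malfunction(list_price: float) -> float:
    # Malfunction error: a software fault (here, a misplaced decimal point)
    # produces a grossly wrong figure of the $1.99-instead-of-$199.00 kind.
    return round(list_price / 100, 2)

print(quote_with_specification_error(INTENDED_PRICE))  # 159.2 -- looks like a "loss leader"
print(quote_with_malfunction(INTENDED_PRICE))          # 1.99  -- plainly erroneous to the buyer

A buyer quoted 159.2 has little reason to suspect an error, which is why the text places that risk on the principal/operator; a buyer quoted 1.99 will often be the least-cost avoider of the resulting loss.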


Under the ―objective intention‖ solution, the principal would be attributed with the actions of the agent if it were reasonable for the user interacting with the agent to believe the principal was assenting to that behavior. Modifying the doctrine to refer to the principal‘s assent to contracts of the same kind as the particular contract entered into would limit the principal‘s liability to those contracts where it would be reasonable to suppose the principal would approve the terms, if aware of them. But this rule would unfairly give the principal an option to withdraw from certain contracts disadvantageous to her, even where the agent was authorized to enter such contracts. In other words, it would misallocate the risk of specification or induction errors in these cases to the user, not the principal on whom they should usually fall.77

Under the ―agent as legal agent without full personality‖ solution, the principal would only be bound by conduct of the agent within the scope of the agent‘s actual or apparent authority. In broad terms, this rule would generally correctly allocate the risks of specification or induction error to the principal/operator, and of malfunction to the principal or user respectively, depending on which was the least-cost avoider. We evaluate the agency law approach in greater detail in Section V below.

Under the ―agent as legal person‖ solution, similarly, the principal would only be liable for the agent‘s behavior within the scope of its actual or apparent authority. The liability consequences would be the same as in the previous solution, except that if the agent falsely represented to the third party that it had authority to enter a particular contract, the third party would be able to sue the agent for breach of its warranty of authority. The ultimate risk of this type of erroneous behavior would fall on the agent, not the third party, providing a complete parallel with the treatment of human agents.

b. User as Principal (ebay.com example)

An example of the second kind of transaction is where the agent is operated by an auction website (such as ebay.com), the agent‘s principal is a user of the website, and the user employs the agent to enter a contract with a third party, say by instructing the agent to bid on his or her behalf up to a specified maximum in an auction being conducted by the third party. In FIGURE 2, the operator relationship is shown by the shaded oval; arrowheads point to the parties to the contract.

77. It seems plausible that the risk that such contracts could be readily opted out of by principals would limit the extent to which users were willing to rely on artificial agents to make contracts. On the other hand, Amazon.com opts out of disadvantageous sales on the basis that a contract is not finalized until confirmed by Amazon. See Owen Gibson, Amazon Backtracks on Costly Mistake, MEDIAGUARDIAN, March 19, 2003, http://www.guardian.co.uk/technology/2003/mar/19/newmedia.media (last visited Sept. 22, 2009) (explaining how Amazon.com refused thousands of orders for goods which had accidentally been priced incorrectly). This does not appear to have unduly limited the operational success of Amazon.com in practice.

[Figure 2 diagram: Operator, Agent, User/Principal, Third Party]

FIGURE 2: USER AS PRINCIPAL (EBAY.COM EXAMPLE)

In this case, as in the operator-as-principal case, the risk of specification errors should normally fall on the principal, i.e., the user of the agent. However, the risk of induction errors should normally fall on the operator of the agent (who has control over the agent‘s design and operation) rather than the principal. The risk of malfunction errors will often most fairly fall on the third party, for the reasons given in discussing the operator-as-principal case.

Under the agent as mere tool solution, the user/principal would be primarily liable for all three errors, incorrectly allocating the risk of induction and malfunction errors in particular. This incorrect allocation would also obtain under the ―unilateral offer‖ solution, as the user would again be primarily liable for that behavior. Under the objective intention solution, as modified, the principal would be attributed with the actions of the agent if it were reasonable for the user interacting with the agent to believe the principal would approve the terms, if aware of them. But, as aforementioned, this rule would unfairly give the principal an option to withdraw from certain contracts disadvantageous to her, thus misallocating the risk of specification or induction errors in these cases to the user, not the principal or operator on whom they should usually fall. Under all three solutions, the user might be able to shift losses onto the operator under rules relating to the quality or fitness of the service provided; but the initial risk allocation would be on the wrong party, and in any event the operator could readily limit its liability to the user through the use of website conditions.

Under the ―agent without legal personality‖ solution, the risk of specification error would fall correctly on the principal. The risk of induction and malfunction error would fall in the first instance on the principal or third party, depending on whether the particular contract was authorized or not, i.e., within the scope of the agent‘s actual or apparent authority. As under the previous solutions, the operator would bear the risk if the principal or third party could recover against him under quality or fitness rules. However, under this solution, the principal or third party might also recover against the operator under the doctrine of respondeat superior. Under this doctrine, an employer is responsible for employee actions, including misrepresentations, performed within the course of the employment.


This doctrine could be combined with the doctrine of dual employment (or the ―borrowed employee‖)78: this would reflect the reality that although the agent is acting for the user, the agent is employed by the operator. Taking the agent metaphor seriously in contracting cases would naturally lead to its being taken seriously for liability purposes as well.79

The ―agent as legal person‖ solution would share these advantages, the difference being that the third party would be able to sue the agent directly where the agent misrepresented to the third party the limits of its authority. Where the agent was better able to control its own behavior than the operator, and could control assets that could be used to satisfy a judgment against it, this would be a correct risk allocation. The requisite levels of autonomy and discretion in artificial agents for this solution to be viable are currently unattainable and are likely to remain so in the near future.80

8. Conclusion

In the first scenario (operator as principal), the agency law approach clearly gives more equitable results than the other possible solutions to the contracting problem. In the second scenario (user as principal), the agency law approach holds out the promise of a broader basis for the principal or third party to recover his or her loss from the operator of the agent, and for that reason also is to be preferred. For this, and other doctrinal reasons, we prefer the agency law approach, and offer an extended defense below.

IV. LEGISLATIVE RESPONSES

We now consider various legislative responses to the phenomenon of online contracting that are relevant to the contracting problem; some of these instruments are consistent with one or more of the above doctrinal solutions. None of them is straightforwardly in contradiction with the agency law approach. Some of them merit criticism as leading to wrong results or inadequate risk allocation in particular cases.

A. International Law Instruments

A number of international instruments bear on contracting by artificial agents and, to some extent, limit the freedom of legislatures (and, where applicable, courts) in addressing this issue.

78. Under this doctrine, liability for acts of an agent employed by the agent‘s employer, but ―borrowed‖ by a principal for specific tasks, can be allocated among the agent‘s employers according to which one has most control over the operations in question. RESTATEMENT (THIRD) OF AGENCY § 7.03 cmt. d(2) (2006).
79. This topic is considered in greater detail in CHOPRA & WHITE, supra note 72.
80. The possibility of legal personhood for artificial agents is further explored in CHOPRA & WHITE, supra note 72.


1. UNCITRAL Model Law on Electronic Commerce

The first such instrument is the United Nations Commission on International Trade Law (UNCITRAL) Model Law on Electronic Commerce of 1996 (―Model Law‖).81 Three articles, 11 (Formation and Validity of Contracts), 12 (Recognition by Parties of Data Messages), and 13 (Attribution of Data Messages), are of particular relevance.82 While the Model Law does not solve all the doctrinal difficulties under discussion, it does address the issues in a number of ways. Firstly, by Article 11, it removes any objection to the validity of a contract based either on the fact that the offer and the acceptance were expressed by means of data messages or on the fact that a data message was used in the formation of the contract.83 Secondly, it removes objections to a declaration of will or other statement based on the fact that the means of communication was a data message.84 Thirdly, it proposes an attribution rule whereby manifestations of assent (and other data messages) made by an artificial agent (and other information systems) can be attributed to the person who programmed the agent, or on whose behalf the agent was programmed, to operate automatically.85

The Model Law is limited in its impact, however. Firstly, as a Model Law, it is not binding on States, and need not be adhered to closely when enacted;86 secondly, the Model Law already allows for indeterminately wide exclusions from its scope.87

2. UN Convention

Some aspects of the Model Law have subsequently been adopted in a UN Convention.

81. U.N. Comm‘n on Int‘l Trade Law [UNCITRAL], Model Law on Electronic Commerce of 1996, No. 17 (A/51/17) (1996) [hereinafter Model Law], available at http://www.uncitral.org/pdf/english/texts/electcom/05-89450_Ebook.pdf (last visited Sept. 22, 2009).
82. Id. at 8.
83. Id.
84. Id.
85. Id.
86. Id. at 2 (―[A]ll States [should] give favourable consideration to the Model Law when they enact or revise their laws, in view of the need for uniformity of the law applicable to alternatives to paper-based methods of communication and storage of information‖). The list of adopting states may be found at http://www.uncitral.org/uncitral/en/uncitral_texts/electronic_commerce/1996Model_status.html.
87. See Model Law, supra note 83, at 37 ¶ 52 (―[T]he matter of specifying exclusions should be left to enacting States . . . .‖).
88. United Nations Convention on the Use of Electronic Communications in International Contracts, Nov. 23, 2005, U.N. Doc. A/60/515 [hereinafter UN Convention], available at http://www.uncitral.org/pdf/english/texts/electcom/06-57452_Ebook.pdf (providing full text of the Convention together with an Explanatory Note, containing Article-by-Article remarks, encapsulating the discussions within the UNCITRAL Working Group IV on electronic commerce, which had prepared the Convention text (and the previous Model Law)). For commentary on the Convention, see Chong Kah Wei & Joyce Chao Suling, United Nations Convention on the Use of Electronic Communications in International Contracts—A New Global Standard, 18 SING. ACADEMY OF LAW JOURNAL 116–202 (2006) (detailing the provisions of the Convention, comparing it to the Model Law, and examining how it would be adopted into Singapore‘s existing electronic commerce legislation); Chris Connolly & Prashanti Ravindra, First UN Convention on Ecommerce Finalized, 22 COMPUTER LAW AND SECURITY REPORT 31–38 (2006) (explaining the purpose and principles of the Convention, individual provisions, and other related developments); José Angelo Estrella Faria, The United Nations Convention on the Use of Electronic Communications in International Contracts—An Introductory Note, 55 INT‘L & COMP. L.Q. 689–94 (2006) (outlining in brief the Convention, its background, and the contents of particular provisions).


The United Nations Convention on the Use of Electronic Communications in International Contracts (―UN Convention‖)88 aims to enhance legal certainty and commercial predictability where electronic communications are used in relation to international contracts.89 The UN Convention, while consistent with an agent as mere tool approach to solving the contracting problem, does not make it mandatory. Once in force,90 it would require Contracting States to cure any doctrinal difficulty associated with the lack of human intervention at the level of individual contracts. It provides that ―[a] communication or a contract shall not be denied validity or enforceability on the sole ground that it is in the form of an electronic communication,‖91 and that ―[a] contract formed by the interaction of an automated message system and a natural person, or by the interaction of automated message systems, shall not be denied validity or enforceability on the sole ground that no natural person reviewed or intervened in each of the individual actions carried out by the automated message systems or the resulting contract.‖92

The negative form of the relevant UN Convention articles (―shall not be denied validity or enforceability‖) means the UN Convention does not specify in positive terms how the contracting problem is to be solved. Furthermore, neither of these articles has direct bearing on the question of attribution of data messages, for which one must turn to the Explanatory Note. The relevant remarks from the Explanatory Note produce the following propositions:

1. ―[T]he person (whether a natural person or a legal entity) on whose behalf a computer was programmed should ultimately be responsible for any message generated by the machine.‖93

2. Electronic communications ―generated automatically by message systems or computers without direct human intervention should be regarded as ‗originating‘ from the legal entity on behalf of which the message system or computer is operated.‖94

3. Questions ―relevant to agency that might arise in that context are to be settled under rules outside the Convention.‖95

Proposition 3 directly supports our contention that the UN Convention is not inconsistent with an agency law approach to the contracting problem.

89. UN Convention, supra note 88, at 1.
90. As of September 10, 2009, while some States had notified UNCITRAL of ―signature‖, none were listed as having ratified or acceded. Thus, the Convention has yet to take effect. Status, 2005 - United Nations Convention on the Use of Electronic Communications in International Contracts, http://www.uncitral.org/uncitral/en/uncitral_texts/electronic_commerce/2005Convention_status.html (last visited Sept. 10, 2009).
91. UN Convention, supra note 90, art. 8(1), at 6.
92. Id. at art. 12, at 7.
93. Id. at 70.
94. Id.; Model Law, supra note 83, at 27.
95. UN Convention, supra note 90, at 70; Model Law, supra note 83, at 27.
96. UN Convention, supra note 90, at 14. The Convention ―applies to the use of electronic communications in connection with the formation or performance of a contract between parties whose places of business are in different States.‖ Id. at 2.


Importantly, the UN Convention is limited in scope: it applies only to certain specified ―international contracts,‖96 and it specifically excludes ―[c]ontracts concluded for personal, family or household purposes[.]‖97 Therefore, any doctrinal difficulties cured by the UN Convention might still need to be further ―cured‖ by national legislation to address contracts outside the UN Convention‘s scope. If, contrary to our understanding, the UN Convention requires a mere tool approach to covered contracts, Contracting States would be free to adopt a different approach for contracts outside its scope.

3. European Union: The Electronic Commerce Directive

The main relevant instrument in the European Union is Directive 2000/31/EC (―the Electronic Commerce Directive‖).98 While it has a number of provisions that touch on electronic contracting,99 the main provision relevant to present purposes is Article 9, which provides that ―Member States shall ensure that their legal system allows contracts to be concluded by electronic means . . . [and] in particular ensure that the legal requirements applicable to the contractual process neither create obstacles for the use of electronic contracts nor result in such contracts being deprived of legal effectiveness and validity on account of their having been made by electronic means.‖ Further than this the Electronic Commerce Directive does not go. It neither posits an attribution rule, nor does it specifically deal with the question of autonomous agents. To this extent, the Electronic Commerce Directive neither requires nor prohibits any particular solution to the contracting problem.

In its first report on the application of the Electronic Commerce Directive, the Commission commented:

[Article 9(1)], in effect, required Member States to screen their national legislation to eliminate provisions which might hinder the electronic conclusion of contracts. Many Member States have introduced into their legislation a horizontal provision stipulating that contracts concluded by electronic means have the same legal validity as contracts concluded by more ―traditional‖ means. [A]s regards requirements in national law according to which contracts have to be concluded ―in writing‖, Member States‘ transposition legislation clearly states that electronic contracts fulfil such requirement.100

97. UN Convention, supra note 90, at 2. This expression includes, but is not limited to, consumer contracts. Id. at 33.
98. Council Directive 2000/31, 2000 O.J. (L 178) 1 (EC) [hereinafter Electronic Commerce Directive].
99. See generally Symposium, Cyberpersons, Propertization, and Contract in the Information Culture: Diverging Perspectives on Electronic Contracting in the U.S. and EU, 54 CLEV. ST. L. REV. 175 passim (2006) (comparing EU and US approaches to electronic contracting).
100. Report from the Commission and the European Parliament, The Council and the European Economic and Social Committee, at 11, COM (2003) 702 final (Nov. 21, 2003), available at http://eurlex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2003:0702:FIN:EN:PDF. The Member States which have introduced horizontal provisions stipulating that contracts concluded by electronic means have the same legal validity as contracts concluded by more traditional means are Belgium, Germany, Spain, France, Luxemburg, and Finland. Id. at n.59. Additionally, the report also explains the Member States‘ requirements for such contracts. Id. at n.60 (―Moreover, the Directive has brought about changes in the national interpretation of ‗in writing‘ requirements, for instance in Germany as regards insurance contracts and the obligation that prior information be given in writing.‖). These directives to the Member States are described similarly elsewhere. See Electronic Commerce Directive, supra note 98, at art. 9(1) (ordering that Member States shall make sure their contractual process neither hinders electronic means nor results in invalidity due to the electronic means).


Interestingly, the United Kingdom has not sought to specifically implement Article 9 of the Electronic Commerce Directive into domestic law.101

B. U.S. Legislation

1. UETA

The U.S. Uniform Electronic Transactions Act (1999) (―U.S. UETA‖),102 which broadly applies to electronic records and electronic signatures relating to transactions, adopts a different attribution rule from the Model Law or the UN Convention commentary.103 Section 9 provides that ―[a]n electronic record or electronic signature is attributable to a person if it was the act of the person.‖104 Unlike the mere tool doctrine, this provision allows for the possibility that some errors of artificial agents would not be attributed to the operator or user. However, the precise boundaries of the provision, and what would be required to show that an error was not the ―act‖ of the user, are not clear.105 It will raise difficult questions where a system acts in a faulty manner, such that one would be tempted to say a new act (of the agent) intervened, by analogy with the tort law doctrine of novus actus interveniens (new intervening act), which ―breaks the chain of causation‖ between the defendant‘s conduct and the plaintiff‘s loss.106 Nor does it clarify the relative liabilities of the third party and the operator, if different from the user.107

2. UCITA

The U.S. UCITA,108 which applies to computer information transactions as defined,109 goes further in embracing the mere tool approach.110

101. This is because it was considered that an order under Section 8 of the pre-existing Electronic Communications Act of 2000 (―ECA‖) could be used to deal with any statutory form requirements which conflict (or which may conflict) with Article 9. See Electronic Communications Act, 2000, c. 7, § 8 (U.K.) (allowing modification of ―(a) any enactment or subordinate legislation‖).
102. U.S. UNIF. ELEC. TRANSACTIONS ACT §§ 1–21, 7A pt. I, U.L.A. 225 (2006).
103. Id. § 3(b)(3) (―The [Act] does not apply to a transaction to the extent it is governed by: . . . . (3) [the Uniform Computer Information Transactions Act.]‖).
104. U.S. UNIF. ELEC. TRANSACTIONS ACT § 9(a), 7A pt. 1, U.L.A. 261 (2006).
105. Id.
106. See RESTATEMENT (SECOND) OF TORTS § 441 (1997) (―An intervening force is one which actively operates in producing harm to another after the actor‘s negligent act or omission has been committed.‖).
107. See id. (referring to the fact that § 441 does not discuss third party and operator liabilities).
108. UNIF. COMPUTER INFO. TRANSACTIONS ACT §§ 1–905 (2002). See generally Brian D. McDonald, The Uniform Computer Information Transactions Act, 16 BERKELEY TECH. L.J. 461, 461–67 (2001) (reacting to UCITA state-wide).
109. The term ―computer information transactions‖ includes transfers (e.g., licenses, assignments, or sales of copies) of computer programs or multimedia products, software and multimedia development contracts, access contracts, and contracts to obtain information for use in a program, access contract, or multimedia product. UNIF. COMPUTER INFO. TRANSACTIONS ACT § 103 (2002).
110. See U.C.C. § 2–204 (2003), available at http://www.law.cornell.edu/ucc/2/article2.htm#s2-204 (allowing for formation of contracts by electronic agents though not widely accepted). But see Daniel Juanda Lowder, Electronic Contracting Under the 2003 Revisions to Article 2 of the Uniform Commercial Code: Clarification or Chaos?, 20 SANTA CLARA COMPUTER & HIGH TECH. L.J. 319, 333, 336–39 (2004) (criticizing the UCC for allowing contracting without human intervention because of a) the consequential difficulties in showing the parole evidence rule is satisfied with respect to waivers and modifications, and b) the fact that the usual doctrines of fraud or mistake may apply); Michael Froomkin, Article 2B as Legal Software for Electronic Contracting—Operating System or Trojan Horse?, 13 BERKELEY TECH. L.J. 1023, 1061 (1988) (criticizing Article 2B of the UCC for making ―a series of policy choices, especially those displacing consumer law for online transactions and enacting a national law on non-repudiation for digital signature-based e-commerce which do not seem to be required to achieve the end of rationalizing the law of information licenses.‖).


Section 107 provides:

(d) A person that uses an electronic agent that it has selected for making an authentication, performance, or agreement, including manifestation of assent, is bound by the operations of the electronic agent, even if no individual was aware of or reviewed the agent‘s operations or the results of the operations.111

This provision is close to the mere tool doctrine, since a mere ―selection‖ for a particular use is sufficient to bind the principal to all the actions of the agent. The agency doctrine is referred to, but not embraced, in the commentary to the provision:

The concept here embodies principles like those in agency law, but it does not depend on agency law. The electronic agent must be operating within its intended purpose. For human agents, this is often described as acting within the scope of authority. Here, the focus is on whether the agent was used for the relevant purpose.112

The commentary glosses the provision in referring to a ―relevant purpose‖; the provision itself speaks merely of the principal having ―selected‖ the agent ―for making an authentication, performance, or agreement.‖

3. E-SIGN Act

In 2000, the U.S. Congress passed the Electronic Signatures in Global and National Commerce Act, or ―E-SIGN Act,‖113 which provides, inter alia, that:

A contract . . . may not be denied legal effect, validity or enforceability solely because its formation, creation or delivery involved the action of one or more electronic agents so long as the action of any such electronic agent is legally attributable to the person to be bound.114

As E-SIGN requires an artificial agent‘s actions to be ―‗legally attributable‘ to its user, it appears to go no further than the common law in recognizing the enforceability of bot-made contracts.‖115 The legislation itself does not specify a particular attribution rule.

111. UNIF. COMPUTER INFO. TRANSACTIONS ACT § 107 (2002).
112. Id. at § 107 cmt. 5 (emphasis added).
113. 15 U.S.C. § 7001 (2006); see Gilbert F. Whittemore, Report to the House of Delegates, 2008 A.B.A. SEC. SCI. & TECH., http://meetings.abanet.org/webupload/commupload/CL320060/sitesofinterest_files/ABAResolutionreUNConvention2008.doc (commenting on E-Sign while urging ratification of the Convention); see also Michael J. Hays, Note, The E-SIGN Act of 2000: The Triumph of Function Over Form in American Contract Law, 76 NOTRE DAME L. REV. 1183 (2001) (explaining the impact of the E-Sign bill on modern commerce).
114. 15 U.S.C. § 7001(h).
115. Anthony Bellia Jr., Contracting with Electronic Agents, 50 EMORY L.J. 1047, 1070 (2001).


C. Australia

The relevant Australian legislation is the Electronic Transactions Act 1999 of the Commonwealth (―Australian ETA‖) and the legislation of the various Australian States designed to complement it.116 The legislation provides for the validity of electronic transactions through Article 8(1).117 The attribution rule in Article 15(1) is complex: ―[u]nless otherwise agreed between the purported originator and the addressee of an electronic communication, the purported originator of the electronic communication is bound by that communication only if the communication was sent by the purported originator or with the authority of the purported originator.‖118 This formulation leaves open the possibility of an argument that an artificial agent might send a communication with the authority of the purported originator. To that extent, the Australian legislation is consistent with an agency law approach.

D. Summary and Critique of the Legislative Position

There is a surprising variety of approaches to the question of attributing data messages sent by artificial agents to particular legal persons. Some of the provisions considered above embody substantive attribution rules for messages sent by electronic agents, while others are open to leaving this issue to statutory or judge-made authority. The attribution rules attribute messages or communications sent by artificial agents variously to:

the person who programs the artificial agent or on behalf of whom the artificial agent is programmed;119

the legal entity on behalf of which the artificial agent is operated;120

the person who uses the artificial agent, having selected it for the relevant purpose, so long as the agent is operating within its intended purpose;121

the person who sends the communication by means of the artificial agent or with whose authority the communication was sent.122

116. See Oznetlaw.net Fact Sheet on Electronic Transactions Act, available at http://www.oznetlaw.net/factsheets/tabid/69/FactSheets/ElectronicTransactionsAct/tabid/935/Default.aspx (last visited Sept. 22, 2009) (providing a list of Australian states that have enacted complementary legislation).
117. Electronic Transactions Act, 1999, § 8(1) (Austl.).
118. Id. at § 15(1).
119. UNIF. COMPUTER INFO. TRANSACTIONS ACT § 212 (2002); Model Law, supra note 83, art. 13(2)(b), at 8.
120. Model Law, supra note 83, art. 13(2)(a), at 8; UN Convention, supra note 90, art. 10, at 6.
121. UNIF. COMPUTER INFO. TRANSACTIONS ACT § 107 cmt. 5 (2002).


A number of provisions do not establish a substantive attribution rule but rather leave it to the general law to determine the issue:

the U.S. E-SIGN Act refers to the question whether a data message is ―legally attributable‖ to the sender, without explaining how that issue is to be determined;123

the U.S. UETA similarly refers to the question whether an electronic record or signature was the act of the person, without providing substantive guidance on when the acts of an artificial agent count as the acts of the user;124

the E.U. Electronic Commerce Directive does not contain attribution rules specific to electronic messages.125

Many of the substantive solutions embodied in the instruments and legislation discussed can be criticized, not only as reaching different results among themselves, but as insufficient to account for the variety of possible fact situations. The Model Law solution emphasizes the person who programs the artificial agent, or on whose behalf the artificial agent is programmed.126 The UN Convention commentary mentions this rule but also emphasizes the person on whose behalf the agent is operated.127 In our view, neither of these approaches is subtle enough to give correct results in even quite common cases.

An attribution rule which emphasizes the person on whose behalf the agent is operated does not give correct results where the user of the agent is not the operator of the agent. Such a rule would attribute, for example, bids made by an eBay bidding agent on behalf of users to eBay itself, clearly an absurdity, as eBay is not a bidder in auctions hosted on its website. Similarly, a rule which emphasizes who, or on whose behalf, an agent is ―programmed‖ risks giving wrong results unless it is heavily tweaked. Agents may be bought off-the-shelf or programmed by the operator. They may also be ―programmed‖ by the user, in the sense of having the user enter her instructions (for example, a maximum bid). An agent bought off the shelf is not programmed by the operator but by an irrelevant third party, to whom it would be incorrect to attribute data messages. Where the operator arranges for programming, the rule will give incorrect results in the user-as-principal case already mentioned. The rule would provide more correct results only if ―programming‖ is interpreted in a counter-intuitive way as relating to the user‘s entering her instructions. Such a tweaked rule amounts to a mere tool approach with respect to the user; such a rule is less fair than the agency law approach.

122. Electronic Transactions Act, 1999, § 15(1) (Austl.).
123. 15 U.S.C. § 7001(h) (2000).
124. U.S. UNIF. ELEC. TRANSACTIONS ACT § 9(a) (1999).
125. Electronic Commerce Directive, supra note 98.
126. Model Law, supra note 83, at 8.
127. UN Convention, supra note 90, at 70, 74.


While Article 14(1) of the UN Convention relieves persons interacting with electronic agents (i.e., users) of liability for errors in certain cases where no opportunity to correct errors is afforded by the operator of the agent, it is not sufficient to relieve operators of liability for errors (such as specification errors) which properly rest with principals/users in such cases.128

More defensible is the UCITA approach, which emphasizes the person who uses an artificial agent, having selected it for a relevant purpose (where the agent is operating within its intended purpose).129 In the eBay example, this would clearly be the person employing the bidding agent, rather than eBay itself. However, the UCITA approach still makes use of three separate but related concepts, and as such is more complex than necessary. As well as the elements of usage and selection, the comment to the UCITA provision also qualifies the rule by saying the agent must be operating ―within its intended purpose.‖130

The Australian ETA adopts the simplest and most robust approach, by emphasizing whether the person concerned sent the relevant communication himself, or whether the communication was made with the authority of the person concerned.131 Such a formulation leaves open the possibility of the legal agent approach. From this perspective, an approach (such as that embodied in the Australian legislation) which focuses on the authority of the agent has the advantages of explanatory power and conceptual simplicity. It also has the advantage of building on the jurisprudence surrounding the concept of agents‘ authority familiar from the law of human and corporate agents.

V. EVALUATING THE AGENCY LAW APPROACH TO ARTIFICIAL AGENTS

The agency law approach to artificial agents is our preferred solution to the contracting problem, for both intuitive and doctrinal reasons, and for its better liability consequences relative to other possible solutions. We will evaluate the agency law approach by considering further the intuitive rationales, the risk allocation effects, and a number of objections to it. We do not engage in an extended evaluation of the other approaches above, as we consider our initial objections to have shown them to be doctrinally untenable.

While our analysis will generally be applicable to both solutions within the agency law approach (artificial agent as legal agent without personality; artificial agent as legal person), we do not consider the latter solution to be necessary given present levels of artificial agent development. In our view, considering the artificial agent as a legal agent without legal personality gives ―good-enough‖ risk-allocation outcomes.

128. Id. at 7.
129. UNIF. COMPUTER INFO. TRANSACTIONS ACT § 107 cmt. 5 (amended 2002), 7 U.L.A. 262 (2009).
130. Id.
131. Electronic Transactions Act, 1999, § 15(1) (Austl.).


According artificial agents legal personality would solve one issue, that of agents misrepresenting to third parties the limits of their authority, but that issue can be solved via the doctrine of respondeat superior. Conferring personhood would also require a larger legal step, one that could well prove judicially or politically controversial. The personhood solution is thus not an incremental solution to the contracting problem. We do, however, consider in much greater depth the possibility of legal personhood for artificial agents in forthcoming work;132 the issue is highly relevant to their future capabilities.

A. Intuitive Rationales for the Agency Law Approach

The most cogent reason for adopting the agency law approach to artificial agents in the context of the contracting problem is to allow the law to distinguish in a principled way between those contracts entered into by an artificial agent that should bind a principal and those that should not. The doctrine of authority is of key importance in delimiting the field of the principal‘s contractual liability: if entering a given contract is within an agent‘s actual or apparent authority, then the agent‘s principal is bound, even if he or she had no knowledge of the particular contract referred to, and even if the agent exercises its discretion in a way different from how the principal would have exercised that discretion.133 But if the contract is outside the agent‘s actual or apparent authority, then the principal is not bound by the contract.134

As well as providing a clear path to the attribution of contractual acts to a principal without inviting potentially unlimited liability on the principal, embracing the agency law approach to artificial agents would permit the legal system to distinguish clearly between the operator of the agent (i.e., the person making the technical arrangements for the agent‘s operations) and the user of the agent (i.e., the principal on whose behalf the agent is operating in relation to a particular transaction). In some cases (such as the Amazon.com example) these are the same, but in many cases (such as the eBay.com example) these will be different.

Embracing the agency law approach would also allow a clear distinction to be drawn between the authority of the agent to bind the principal and the instructions given to the agent by its operator. In the auction website example, the agent is authorized by the user to bid up to the user‘s maximum bid: this maximum bid (and any other relevant parameters) defines the contractual authority of the artificial agent. This authority is not to be confused with the detailed instructions by which a software agent is programmed.

132. CHOPRA & WHITE, supra note 72.
133. See RESTATEMENT (THIRD) OF AGENCY §§ 6.01–6.03 (2006) (explaining how agents, principals, and third parties relate in contract transactions).
134. See id. §§ 2.01, 2.03 (2006) (defining actual and apparent authority).
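The distinction just drawn, between the contractual authority conferred by the user and the instructions embodied in the agent‘s programming, can be illustrated with a short sketch of a proxy-bidding agent of the kind the eBay example contemplates. The code is hypothetical (the class and parameter names are invented for illustration, and are not drawn from any actual system), but it shows how a user-supplied maximum bid can delimit what the agent may commit its principal to, while the bidding strategy itself remains a matter of the operator‘s programming.

# Illustrative sketch only: a hypothetical proxy-bidding agent.
# The user-supplied max_bid plays the role of the agent's authority;
# the bidding logic below plays the role of the operator's programmed instructions.

class ProxyBiddingAgent:
    def __init__(self, principal: str, max_bid: float, increment: float = 1.00):
        self.principal = principal   # the user on whose behalf bids are made
        self.max_bid = max_bid       # scope of authority, set by the user
        self.increment = increment   # a detail of the operator's instructions

    def next_bid(self, current_high: float, high_bidder: str):
        """Return the next bid to place, or None if bidding would exceed the agent's authority."""
        if high_bidder == self.principal:
            return None              # the principal is already winning; do nothing
        proposed = current_high + self.increment
        if proposed > self.max_bid:
            return None              # outside the authority conferred by the user
        return proposed


# Usage: the agent bids up to, but never beyond, the user's maximum.
agent = ProxyBiddingAgent(principal="user123", max_bid=50.00)
print(agent.next_bid(current_high=42.00, high_bidder="rival"))  # 43.0
print(agent.next_bid(current_high=49.50, high_bidder="rival"))  # None

On the agency analysis sketched in the text, a bid of 43.0 would fall within the authority defined by max_bid and so bind the user/principal, whereas nothing in the operator‘s next_bid routine can, by itself, enlarge that authority.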


The agency approach to artificial agents would also potentially allow the legal system to handle the operation of an agent on behalf of multiple principals. Even if the agent concerned were operated by one of the principals, determining which principal such an agent contracted for in the case of any particular transaction would be a relatively simple matter of consulting the agent‘s authority and how it related to the transaction in question.135 Again, a shopping website agent might often act on behalf of both the user and the operator for different purposes. The same agent might, for example, conduct a sale by way of auction on behalf of a shopping website, as well as allowing users to enter maximum bids into a simple bidding agent for that auction. Making sense of the behavior of such an agent without an agency analysis could prove difficult, if not impossible. But the law has long dealt with the role of fiduciaries, usually called brokers and sometimes dual agents, who represent, and owe duties to, both parties to a transaction for different aspects of the transaction.136 It is also able to deal with liabilities arising from agents that are ―lent‖ by a general employer to a special employer for particular purposes.137

Another doctrine of agency law potentially useful in the context of contracting is the doctrine of ratification of agents‘ contracts. This doctrine permits a principal to approve a contract purporting to be done on its behalf, even though there was no actual or apparent authority at the time the contract was entered, so long as the existence of the principal was disclosed.138 As Kerr points out,

The rule that precludes undisclosed principals from ratifying unauthorized transactions could have a useful application in electronic commerce. It could indirectly encourage those who initiate a device to make conspicuous the fact that the third party is transacting with a device and not a person.139

B. Economic Arguments for the Agency Law Approach

Recent work by Rasmusen140 shows that agency-law principles are economically efficient in the rigorous sense of correctly placing the risk of erroneous agent behavior on the least-cost avoider. The challenge for those who believe an agency law approach to artificial agents should be embraced in the contractual context is to show that similar considerations apply in the case of artificial agents as apply in the case of human agents, so that similar rules of apportioning liability between the principal and the third party should also apply, to the exclusion of other approaches such as the mere tool doctrine. Rasmusen‘s analysis shows that the six threads of agency liability known to U.S. law can each be explained in terms of the least-cost avoider principle.141

135. See generally Alessandro Pavan & Giacomo Calzolari, Sequential Contracting with Multiple Principals (Northwestern University, Center for Mathematical Studies in Econ. and Mgmt., Working Paper No. 1457, 2007), available at http://ssrn.com/abstract=1017747 (discussing multiple principals interacting with a single agent from an economic perspective).
136. See, e.g., RESTATEMENT (THIRD) OF AGENCY § 3.14 (2006) (discussing ―ambiguous relationships‖).
137. Id. at § 7.03 cmt. d(2) (discussing liability allocation for ―lent‖ agents).
138. BOWSTEAD & REYNOLDS, supra note 55 (noting that a principal may ratify acts done on his or her behalf by agents lacking proper authority).
139. Kerr, supra note 9.
140. Rasmusen, supra note 73, at 380.
141. Id. at 380–90.


Importantly, Rasmusen‘s analysis is mainly concerned with allocating the risk between the principal and the third party.142 This is principally because in this type of litigation the agent is typically relatively poor and therefore not worth suing.143 On his analysis, it is not necessary to the validity of the liability rules that the agent have any assets of its own. This approach validates the ―agent as legal agent without personality‖ solution.144

1. Actual express authority

The first case is that of actual express authority: where an agent has actual authority to enter a contract, the principal is liable, whether or not the principal would have entered that contract if he or she had had the opportunity to review it.145 Here, the principal is the least-cost avoider of the risk of entering an unwanted contract, since it is better placed than the third party to know its own preferences, and can easily instruct the agent as to its wishes. There is no material difference between human and artificial agents that would motivate a different conclusion in this class of cases. The principal should be liable for such contracts.

2. Actual implied authority

The second case is that of actual implied authority: where an agent, by virtue of being employed in a particular position, can reasonably infer it has authority to enter a particular category of contracts with third parties.146 Arguably, in the case of artificial agents, there is no actual implied authority. An artificial agent is normally limited by a set of rules, instructions or factors,147 and it is hard to see an analogue of the employment relationship applying to this category of cases. If it did arise, the following would be relevant:

The principal has a variety of means available to reduce the risk of agent mistakes. He hires the agent, and so can select an agent with the appropriate talents. He can negotiate a contract to give him incentive to use those talents properly. He can instruct the agent to a greater or lesser extent, choosing the level of detail in light of the costs of instruction and mistake. He can expressly instruct the agent not to take certain actions and can tell third parties about the restrictions. He can monitor the agent, asking for progress reports or randomly checking negotiations that are in progress. The principal‘s control over the agent . . . gives him many levers with which to reduce the probability of mistakes.148

142. Id. 143. Id. at 372. 144. Andrade, supra note 35, at 359. 145. Rasmusen, supra note 73, at 372. 146. Id. 147. Andrade, supra note 35, at 359. 148. Rasmusen, supra note 73, at 382.
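The means of control Rasmusen lists have straightforward computational counterparts. As a purely illustrative sketch (assuming a hypothetical purchasing agent; none of the names or parameters below come from any real system), a principal might ―instruct‖ an artificial agent by setting explicit parameters and ―ask for progress reports‖ by querying a log:

```python
from dataclasses import dataclass, field

@dataclass
class PurchasingAgent:
    """Hypothetical purchasing agent whose 'instructions' are explicit parameters."""
    principal: str
    allowed_categories: set            # analog of limiting the class of transactions
    max_price_per_item: float          # analog of an express price limit
    log: list = field(default_factory=list)

    def consider(self, item, category, price):
        """Decide whether to buy, applying the principal's stated limits."""
        authorized = category in self.allowed_categories and price <= self.max_price_per_item
        self.log.append(f"{'bought' if authorized else 'declined'}: {item} ({category}) at {price:.2f}")
        return authorized

    def progress_report(self):
        """Analog of asking a human agent how its negotiations are going."""
        return "\n".join(self.log) or "no activity yet"

agent = PurchasingAgent("operator-1", allowed_categories={"books"}, max_price_per_item=40.0)
agent.consider("Agency Law Treatise", "books", 35.0)    # within the stated authority: bought
agent.consider("Laptop", "electronics", 35.0)           # outside the authorized category: declined
print(agent.progress_report())
```

On this picture, selecting an agent with appropriate talents corresponds to choosing or customizing the software, and random checks correspond to sampling the agent‘s log.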


If there is such a thing as actual implied authority with respect to artificial agents, the same considerations would apply so as to make the principal liable for contracts where there was such authority. Analogs of most of the means of control available to the principal of a human agent are also available to principals of artificial agents. The analog of selecting an agent with appropriate talents is selecting the agent from a series of off-the-shelf agents, or customizing an agent‘s software; the analog of instructing the agent is programming the agent, or entering appropriate parameters to determine the agent‘s behavior; the analog of asking for progress reports is seeking reports from the artificial agent on its activities or conducting random checks on its status.

3. Apparent authority

The third case is that of apparent authority, where an agent appears, by reason of words or conduct of the principal, to have authority to bind the principal.149 The principal is liable on the agent‘s contracts, even where there is no actual authority.150 An important application of this principle is where a principal has withdrawn an actual letter of authority but has failed to notify third parties. Here, the law imposes liability on the principal, since it is less costly for the principal to notify third parties than for each third party to inquire of every principal whether a letter of authority has been withdrawn (a toy calculation below makes this comparison concrete).151 Another is where an agent has authority to enter a particular class of transactions, but ignores specific limitations on his authority, such as limits on the price he may offer or accept. Here, too, the law imposes liability on the principal, where the agent buys at the market price, since it is more costly to place on the third party the burden of inquiring of the principal whether each particular proposed transaction is authorized than to require the principal to properly enforce her instructions.152 Normally, too, the principal would be the least-cost avoider in the case of artificial agents, for reasons similar to those that apply to human agents. There are a number of analogs of such disobedience in the realm of artificial agents: firstly, an artificial agent that deliberately contravened its instructions; and secondly, an agent that neglected to obey its instructions, perhaps because its principal‘s instructions were contradictory, or because of a supervening priority, or because of a fault in its programming.

4. Agency by estoppel

The fourth case is that of agency by estoppel, ―based on the ease with which someone could have prevented harm to himself. Having failed to prevent the harm, she is ‗estopped‘ from asserting what would otherwise be a valid claim.‖153

149. Id. at 373. 150. Id. 151. Id. at 383–84. 152. Id. at 384. 153. Id. at 378.
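The cost comparison driving the withdrawn-authority rule discussed above can be made concrete with a toy calculation; all of the figures below are invented for illustration. One notification by the revoking principal is far cheaper than universal inquiry by every third party.

```python
# Toy comparison of avoidance costs behind the withdrawn-authority rule.
# All figures are invented for illustration.
third_parties = 1_000        # potential counterparties in the market
principals = 200             # principals whose agents' authority might have been revoked
cost_notify = 1.00           # principal's cost to notify one third party of a revocation
cost_inquiry = 2.00          # a third party's cost to inquire of one principal before dealing

principal_avoids = third_parties * cost_notify                      # one notice to everyone
third_parties_avoid = third_parties * principals * cost_inquiry     # everyone checks everyone

print(f"principal notifies all third parties:          {principal_avoids:,.0f}")
print(f"every third party inquires of every principal: {third_parties_avoid:,.0f}")
# 1,000 versus 400,000: the principal is the least-cost avoider, so liability rests with her.
```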


In order to prove agency by estoppel, the following elements must be established: (1) intentional or negligent acts of commission or omission by the alleged principal which created the appearance of authority in an agent; (2) reasonable and good faith reliance on the appearance of authority in the putative agent by the third party; and (3) a detrimental change in position by the third party due to its reliance on the agent‘s apparent authority.154 According to Rasmusen, the doctrine is ―clearly justified by the least-cost avoider principle. . . . [The principal] is the least-cost avoider because she can prevent the mistake more cheaply than the [third party] can. He is therefore liable for the resulting harm.‖155 Nothing about human agents, as opposed to artificial agents, merits a different approach. If a principal allows a third party mistakenly to assume an artificial agent is dealing on her behalf, and fails to prevent the mistake, she should be liable.

5. Agency by ratification

The fifth case is that of ratification: the principal‘s ratification creates an additional source of liability, which occurs when ―the principal assents to an agreement after it is made by someone who lacks authority. Ratification is similar to actual express authority . . . [because the principal‘s ratification effectively demonstrates that he sees no mistake] worth the cost of renegotiation.‖156 ―The principal will be the least-cost avoider for the same reason as that for when the agent has actual express authority;‖157 this is as true of artificial agents as it is of human ones.

6. Inherent agency power

The sixth case is that of ―inherent agency power‖: where the third party does not know the agent is an agent, and the agent goes beyond his actual authority.158 In such cases, there is neither actual nor apparent authority, but the principal is still made liable, since he can easily control his agent.159 Here, the same principles apply to artificial agents as to human agents.

7. Exceptions to principal liability

Rasmusen goes on to cite four categories of cases where the third party is the least-cost avoider, and where the law consequently places the risk of agent error on the third party for various reasons (whether as falling outside one of the six categories set out above, or as falling within a free-standing exception).160

154. E.g., Minskoff v. Am. Express Travel Related Servs. Co., 98 F.3d 703, 708 (2d Cir. 1996). 155. Rasmusen, supra note 73, at 388. 156. Id. at 389. 157. Id. (explaining that the principal may be deemed to have ratified the agreement even if he was silent and did not object to the actions of the agent). 158. See id. at 389–90 (demonstrating this concept by use of an example). 159. RESTATEMENT (SECOND) OF AGENCY § 8A (1958).


Each of these has an analogue in the case of artificial agents. The first category is that of over-reliance by the third party on statements by the agent that it has authority to conduct the transaction in question.161 If it is easy for the third party to check whether this is the case, or if the transaction is inherently suspicious, the law will not bind a principal in cases where the third party has not checked.162 It is easy to think of an analogous category for artificial agents. An agent could engage in misleading behavior without it being deliberate; such cases would not require premeditated deceptive behavior on the part of the agent concerned.

The second category relates to the incapacity of the agent.163 Where a third party can readily observe the agent‘s incapacity (for example, through drunkenness), the third party will be the least-cost avoider and the principal will not be liable for his contracts.164 This is not due to a quirk of agency theory but simply because in such situations the agent lacks capacity to enter the contract. However, Rasmusen speculates that where the principal has engaged a habitual drunkard, the principal should bear the risk, even though in such cases too the agent‘s capacity would be impaired.165 This is because the principal in such cases has more evidence of the agent‘s lack of capacity than any one third party and is therefore the least-cost avoider.166 This category is a very promising one in terms of the analogy with artificial agents. When an artificial agent is temporarily acting in a manner that is obviously defective or faulty, the third party may well be the least-cost avoider of harm, since the principal may be unable to monitor all the activities of the agent constantly at reasonable cost (a sketch of such a third-party check appears below). However, if the artificial agent habitually acts in such a manner, the principal should bear the risk.

160. See Rasmusen, supra note 73, at 392–98 (including, inter alia, care to check authority, the third party has immediate information about the transaction, collusion between agent and third party, and obvious agent malfeasance). 161. Id. at 392. 162. See id. (noting that the third party would be the least-cost avoider in this situation). 163. Id. at 395. 164. Id. 165. Id. 166. Rasmusen, supra note 73, at 395 (indicating that if the principal stands behind the agent even though the latter is a habitual drunkard, the principal should bear the loss from the agent‘s frivolous agreements).
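To see how cheaply a counterparty might screen for the kind of obvious malfunction just described, consider the following hypothetical sketch of a third party‘s sanity check on incoming offers; the thresholds, data structures, and figures are invented for exposition only.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    item: str
    quantity: int
    unit_price: float

def obviously_defective(offer, typical_price):
    """Third party's cheap screen for offers no properly functioning agent would make,
    e.g., nonsensical quantities or prices wildly out of line with the market."""
    if offer.quantity <= 0 or offer.quantity > 10_000:
        return True
    if offer.unit_price <= 0 or offer.unit_price < 0.01 * typical_price:
        return True
    return False

# A malfunctioning agent offers 5,000 units at one cent each against a $200 market price.
suspect = Offer("widget", 5_000, 0.01)
print(obviously_defective(suspect, typical_price=200.0))    # True: the third party should not rely on it
```

Where the screen is this cheap, placing the loss on a third party who ignores an obviously defective offer mirrors the treatment of the visibly incapacitated human agent; where the defect is habitual rather than episodic, the principal, who sees the whole pattern of behavior, remains the cheaper monitor.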


The third category of cases relates to collusion between the third party and the agent against the principal‘s interests, and other cases where the third party knows the agent is acting contrary to the principal‘s interests.167 In such cases, the principal is relieved of liability for the contract, and the third party is thereby made responsible for monitoring the agent, given that his cost of doing so is zero (since he already knows the agent is being unfaithful to the principal).168 This might seem an unlikely category of cases for the near future, while artificial agents remain incapable of deception and other such sophisticated behaviors. However, deliberate hacking of an agent to induce disloyal behavior is quite conceivable, whether for commercial gain or simply as a means of vandalism. If agents exhibiting such behaviors were to operate, nothing about the artificial nature of such agents requires liability for their contracts to be visited on their principals any more than in the case of unfaithful human agents.

The fourth category also promises to be a fecund area for the treatment of artificial agents: obvious agent malfeasance, as when it is obvious to a third party that the agent lacks authority to enter a particular contract. Here the principal is released from liability if the agent‘s malfeasance is obvious to the third party.169 In the case of artificial agents, this category and the second category blend into each other, while for human agents a lack of capacity will be only one form of obvious misbehavior.

C. Objections

A number of objections can be raised to the possibility of treating artificial agents as true agents in the legal sense. These objections are not insurmountable.

1. Lack of Personality

A first possible objection holds that artificial agents necessarily lack the legal power to act as agents because they are not persons. The Restatement (Third) of Agency states categorically that:

To be capable of acting as a principal or an agent, it is necessary to be a person, which in this respect requires capacity to be the holder of legal rights and the object of legal duties. Accordingly, it is not possible for an inanimate object or a nonhuman animal to be a principal or an agent under the common-law definition of agency. However, an animal or an inanimate object may serve as a person‘s instrumentality in a manner that may be legally consequential for the person.170

Even assuming this is a correct statement of extant law, it has not been true of all legal systems at all times. We have already encountered the example of the Roman law of slavery, whereby slaves, although non-persons, were able to effect contracts on behalf of their masters and thereby act as, or in a role akin to, their legal agents.171 Similarly, in the next section we will see that the capacity to act as an agent does not depend on the agent‘s legal capacity in his own right. Consequently, we can coherently postulate changes to the law, whether in the form of statutory reform or judicial precedent, that remove the requirement that agents be legal persons and thereby enable the adoption of the agency law approach to artificial agents.

167. Id. at 395–97. 168. Id. at 396. 169. Id. at 397–98 (indicating that the principal may be liable for small mistakes but not for large ones because in the latter case it is usually obvious for the third party that the offer is too good to be true). 170. RESTATEMENT (THIRD) OF AGENCY § 1.04 cmt. e (2006) (internal citations omitted). 171. See Kerr, supra note 9, at 30–31.


2. Lack of Capacity to Act as Agent

The legal capacity to act as an agent depends only on the agent‘s abilities and not, for example, on whether the agent has full capacity to hold legal rights or be subject to liabilities in his own right.172 Thus, an agent need not have the contractual capacity of an adult legal person: children who cannot contract for themselves can contract on behalf of adults.173 In English law, for the agency to begin or to continue, an agent must be of sound mind, in that the agent must understand the nature of the act being performed.174 In U.S. law, it appears that mental competence is not required, and mere retention of volition is sufficient.175

The English requirement that the agent be of sound mind appears to be motivated by a desire to protect the principal from the consequences of irrational action.176 The requirement could be adapted to the case of artificial agents by requiring, not that the agent understand the nature of the act being performed, but that the agent be functioning appropriately, based on our ability to successfully and consistently predict its behavior in taking actions contingent upon such an understanding. However, this threshold is too high: it would impose the risk of malfunction errors on the third party in all cases. While that allocation is usually correct, as we have seen, there will be some cases where the principal is the least-cost avoider of such loss, just as the agent‘s unsoundness of mind will sometimes be more obvious to the principal than to the third party. In order to deal with this possibility, the U.S. rule, retaining only a requirement of ―volition,‖ would appear preferable.177 Applying such a requirement to artificial agents would involve requiring the agent to be functioning intentionally, even if not correctly. How to apply such concepts to artificial agents is dealt with in greater detail in further work of ours.178

172. RESTATEMENT (THIRD) OF AGENCY § 3.05 cmt. b (2006). 173. Id.; see also C. CIV. art. 1990 (stating that under the French Civil Code children can in fact be agents but placing the caveat that they can only be held accountable for acts expected of a child). 174. BOWSTEAD, supra note 55, at 54. 175. ―One whom a court has adjudged mentally incompetent but who retains volition . . . has power to affect the principal as fully as if he had complete capacity.‖ RESTATEMENT (SECOND) OF AGENCY § 21 cmt. a (1958). 176. BOWSTEAD, supra note 55, at 54. 177. RESTATEMENT (SECOND) OF AGENCY § 21 cmt. a (1958). 178. See CHOPRA, supra note 9, at 637 (reflecting on several solutions for applying rules of agency to artificial constructs) and CHOPRA, supra note 72, in Chapter 1, passim. 179. See C. CIV. art. 1984 (suggesting that an agency relationship requires an offer and acceptance, express terms required for establishment of a contract); Sartor, supra note 34 (noting that rights acquired by an agent remain with the agent until transferred by some means to the principal user).


3. Requirement for a Contract of Agency

A third possible objection relies on the fact that in some legal systems (specifically, civil law systems) the establishment of an agency relationship requires a contract between the agent and the principal,179 and, as artificial agents are not persons, they cannot enter contracts in their own name.180 By contrast, in Anglo-American law a contract between principal and agent is not strictly necessary for the establishment of an agent‘s actual authority to act on behalf of a principal; all that is necessary is that the principal manifest her willingness for the agent to bind her as regards third parties.181 The doctrine of apparent authority, which can also be relied on to bind the principal to contracts concluded by the agent, similarly does not depend on any contract between principal and agent.182 In some civil law systems, the ―[a]cceptance of an agency may be only tacit and result from the performance of it effected by the agent.‖183 Nevertheless, in such systems an agent would require legal personality in order to enter the contract of agency on its own behalf.184 In legal systems where a contract of agency is necessary, it would therefore be necessary either to accord personality to artificial agents in order to deal with contracts made by them, or to adopt the Anglo-American model, which does not require a contract of agency in order to establish an agent‘s authority.

4. No duty of obedience

A further objection asserts that, unlike a human agent, an artificial agent owes no duty of obedience and cannot be sued.185 In this respect, artificial agents are somewhat different from minors, who can be agents even though they lack contractual capacity in their own right, but who are nevertheless subject to other obligations arising out of the fact of their agency.186 This objection should not be fatal if, for the intuitive and economic reasons adduced above, the agency law approach is the correct one. However important an agent‘s obligations may be, it is clearly in their capacity to bind principals to third parties that agents do their legal ―heavy lifting.‖ The fact that many agents are economically insignificant compared with their principals means they are typically by-passed in litigation in favor of their principals.187

180. RESTATEMENT (THIRD) OF AGENCY § 1.04 cmt. e (2006). 181. E.g., id. § 3.01 (―Actual authority, as defined in § 2.01, is created by a principal‘s manifestation to an agent that, as reasonably understood by the agent, expresses the principal‘s assent that the agent take action on the principal‘s behalf.‖). The reference to the understanding of the agent does not imply that there is necessarily a contract between the principal and the agent, though, of course, in many cases there will be such a contract. Id.; see also RESTATEMENT (SECOND) OF AGENCY § 21 (1958) (―[A] person without capacity to bind himself because lacking in mentality may have a power to bind one who appoints him to act, but he is not subject to the duties or liabilities which result from a fiduciary relationship.‖) (describing a situation in which a party may not be able to contract, but may be able to bind a principal). 182. See RESTATEMENT (THIRD) OF AGENCY § 3.03 (2006) (―[A]pparent authority, as defined in § 2.03, is created by a person‘s manifestation that another has authority to act with legal consequences for the person who makes the manifestation, when a third party reasonably believes the actor to be authorized and the belief is traceable to the manifestation‖) (explaining that apparent authority based solely on a manifestation of authority may bind). 183. C. CIV. art. 1985 (Fr.). 184. See C. CIV. art. 1984 (artificial agents and contracts in civil law systems).
185. Joseph H. Sommer, Against Cyberlaw, 15 BERKELEY TECH. L.J. 1145, 1177–78 (2000). 186. RESTATEMENT (THIRD) OF AGENCY § 3.05 cmt. c (2006). 187. See, e.g., MDY Industries, LLC v. Blizzard Entertainment, Inc., 616 F. Supp. 2d 958 (D. Ariz. 2009) (holding a software developer liable to a video game maker for selling a bot that allowed users to automate certain aspects of game play); see also Seth Schiesel, Grounding Autopilot Players of World of Warcraft, N.Y. TIMES, Feb. 7, 2009, at C5 (discussing MDY v. Blizzard); Randall Stross, Hannah Montana Tickets on Sale! Oops, They’re Gone, N.Y. TIMES, Dec. 16, 2007, § 3, at 4 (discussing Ticketmaster v. RMG, wherein the plaintiff sued the software developer of a bot that afforded ticket brokers the ability to buy more tickets, more quickly, than an individual consumer would be able to do).


In any event, if the legal person solution were adopted, the objection would no longer be salient.

5. Inability to Keep Principal Informed

A further objection asserts an artificial agent ―cannot keep its user informed of the transactions it is processing, or problems that might be developing.‖188 This depends on the design and functional capacity of the artificial agent. Artificial agents can be less able than humans to answer natural-language queries as to the progress of transactions and to give unstructured answers relating to novel situations. However, to the extent artificial agents are able to make highly cogent and reliable routine reports about the status of transactions, this difference in abilities is not fatal. And this is leaving aside any possible development in artificial agents‘ capabilities.

6. Inability to appear to be a principal

Another related objection is that an artificial agent ―cannot appear to be a principal thereby triggering the law of undisclosed principals.‖189 At the current state of development, artificial agents are indeed not capable of appearing to be principals, as a principal is an entity able to enter contracts in its own right: i.e., a legal person. But the doctrine of undisclosed principals should be treated as severable from the rest of the doctrine of agency. It is unknown to civil law jurisdictions, which nevertheless have a doctrine of agency. Its inapplicability should not lead to the conclusion that the rules that constitute the doctrine of agency should not apply to artificial agents.

VI. CONCLUSION

The ―contracting problem,‖ the formal legal doctrinal problem of accounting for contracts entered into by electronic agents acting on behalf of principals, has led to various suggested solutions, two of which involve treating electronic agents as legal agents of their principals and which we refer to collectively as the ―agency law approach.‖ Our examination of the other solutions to the ―contracting problem‖ led us to argue that its most satisfying resolution—along the legal and economic dimensions—lies in granting artificial agents a limited form of legal agency. Such a move is prompted not only by the ever-increasing autonomy and technical sophistication of today‘s artificial agents but also by the better liability protection it enables for the human and corporate principals of artificial agents.

188. Sommer, supra note 187, at 1178. 189. Id.


Furthermore, while a number of the existing legislative responses to electronic contracting appear to embrace a ―mere tool‖ doctrine of electronic agents, the most important international texts—the Model Law and the Convention—are consistent with the agency approach. In particular, the Australian legislative response refers to a concept of authority amenable to an agency law solution to the contracting problem. The law of agency offers intuitive rationales as well as economic ones: broadly, it allocates the risk of error on the part of the agent correctly in the preponderance of cases. While a number of objections can be raised against the agency law approach, none of them is insurmountable. We conclude that the application of the common-law doctrine of agency to the contracting problem involving artificial agents is cogent, viable, and doctrinally satisfying.