Please cite as Wall, D.S. (2005) 'Digital Realism and the Governance of Spam as Cybercrime', European Journal on Criminal Policy and Research, 10(4): 309 – 335. [Draft Version]
DIGITAL REALISM AND THE GOVERNANCE OF SPAM AS CYBERCRIME

DAVID S. WALL1

ABSTRACT. Spamming is a major threat to the formation of public trust in the internet and discourages broader civil participation in the emerging information society. To the individual, spams are usually little more than a nuisance, but collectively they expose internet users to a panoply of new risks whilst threatening the communications and commercial infrastructure. Spamming also raises important questions of criminological interest. On the one hand it is an example of a pure cybercrime - a harmful behaviour mediated by the internet that is the subject of criminal law - while on the other hand, it is a behaviour that has in practice been most effectively contained technologically by the manipulation of ‘code’. But at what cost? There is no agreed meaning as to what constitutes ‘online order’ that would render it simply and uncritically reducible to a set of formulae and algorithms to be imposed (surreptitiously) by technological process. The imposition of order online, as offline, needs to be subject to critical discussion, and also to checks and balances that have their origins in the authority of law. This article deconstructs and analyses spamming behaviour before exploring the boundaries between law and code (technology) as governance, in order to inform and stimulate the debate over the embedding of cybercrime prevention policy within the code itself.

Introduction2

Without trust in the internet the goal of encouraging broader civil participation in the European Information Society (IPTS, 2003: 71; Levi and Wall, 2004: 211) is unlikely to be achieved. Spamming is a major threat to the formation of public trust in that it affects all internet users in one way or another.
Individually, spams represent little more than a nuisance, but collectively they expose internet users to a panoply of new risks whilst threatening the communications and commercial infrastructure. Yet spamming also raises important questions of academic interest because, on the one hand, it is an example of a pure cybercrime - a harmful behaviour mediated by the internet that is the subject of criminal law - while on the other hand it is a behaviour that has in practice been most effectively contained technologically by the manipulation of ‘code’. This article explores the boundaries between law and code (technology) as governance in order to inform and stimulate an emerging debate over the embedding of policy within the code itself. Should we, for example, actively preserve the ‘end-to-end’ architecture of the Internet in order to give internet users free choice over whether or not they introduce security measures? Or should law and cyber-crime prevention policy be embedded within the code that structures the architecture of communications? Part one deconstructs spamming to reconfigure it as a cybercrime. Not only does the process of spam list building represent information theft, but it also facilitates the commission of further offences by enabling offenders to engage with victims. It also generates de minimis offences - apparently trifling, small-impact bulk victimisations that contrast with a cybercrime debate that has too often been focused upon dramatic and sensational events with clearly perceived groups of victims and offenders. Part two looks at the governance of spam and reviews the tensions between the legal and technological determinist approaches. It will show that although law appears to have failed to stem the prevalence of spam while the technological fix seems to have been successful, the latter does not provide a satisfactory solution to the problem; nor does the binary turn out to be so simple. The digital realism of spamming, as a cybercrime, is far more complex than either the legal or the technological position allows. By drawing upon the socio-legal debates over ‘code as law’ inspired by Lessig’s Code (1999) and its subsequent application by others such as Greenleaf (1998) and Katyal (2003), and also the criminological debates over crime prevention, it is argued that a multi-disciplinary ‘digital realist’ approach is required towards the governance of undesirable behaviour and the maintenance of order online - an approach which engages with the transformative impacts of the internet.

1 Contact: David S. Wall, Professor of Criminal Justice and Information Technology, Head of School, School of Law, University of Leeds, Leeds LS2 9JT.
2 The first part of this article is based upon research originally funded by a Home Office Innovative Research Award (see further Wall, 2003: 123-130) and subsequently updated in a presentation to the POLCYB conference in Vancouver in Nov. 2003. The second part is drawn from research into ‘The regulation of deviant behaviour on the Internet: the roles of law and “policing” as governance’ funded by the AHRB and was presented at a LEFIS workshop entitled ‘Lessig’s code: lessons for legal education from the frontiers of IT law’, Queens University, Belfast, 24th-25th, 2004.

Deconstructing spamming as a cybercrime

Few would ordinarily regard spamming as a cybercrime. The reasons for this are twofold. Firstly, there is an overall lack of knowledge and understanding about spam, which is not assisted by its rather demeaning and simplistic name. Secondly, although there appears to be general agreement that they exist, there is much confusion about what cybercrimes actually are (Wall, 2005b; 2006), and not having a clear idea of what cybercrimes are makes them particularly difficult to police. Particularly confusing is the unreflective general tendency by commentators to call just about any offence involving a computer a ‘cybercrime’. It is far more constructive to see cybercrimes as criminal acts transformed by networked technologies. In so doing, the transformations can be exposed by considering what would remain if the internet were to be removed from the equation. The application of this ‘elimination test’ (Wall, 2005b: 82; 2006) causes three types of cybercrime to surface.
First, there are “traditional” crimes in which the internet is used, usually as a method of communication, to assist with the organisation of a crime (for example, by paedophiles, drug dealers etc.). Remove the internet and the criminal behaviour persists because the offenders will revert to other forms of communication. Then there are hybrid cyber-crimes: “traditional” crimes for which the internet has opened up entirely new opportunities (for example, frauds and deceptions, as well as the global trade in pornographic materials). Take away the internet and the behaviour will continue by other means, but not in such great numbers or with so large a scope. Finally, there are the true cybercrimes. These are solely the product of the internet and can only be perpetrated within cyberspace. Take away the internet and spamming vanishes. Spams belong to this latter group, along with intellectual property thefts (acquisition) and the many forms of internet-assisted “social engineering”3. As the spawn of the internet, spamming therefore embodies all of its transformative characteristics - its global reach, networking capabilities, empowerment of the single agent through the (re)organisation of criminal labour, use of surveillant technologies and asymmetry in actions (see Wall, 2005b; 2006; see also Savona and Mignone, 2004: 4) - creating small-impact bulk victimisations. Furthermore, spamming not only constitutes an illegal criminal act in many jurisdictions, but it is also frequently a precursor to further offending by providing the means for offenders to engage with their victims. This differential is problematic because spams tend not to be initially regarded by victims as serious, but can subsequently lead to more significant forms of victimisation. For example, spam payloads that compromise the integrity of networks pave the way for more serious offending. Information harvesting becomes more serious when the information is eventually used against the owner.
Similarly, Trojans delivered by spams may be used to install ‘back doors’ which are later used to commit computer-related cybercrimes (BBC, 2003), such as internet scams which can be minor in outcome but serious by nature of their sheer volume. ‘Computer content’ crimes delivered by offensive spams may start out as merely offensive, then subsequently contribute to the incitement of hatred or violence towards others. For the handful of souls still unaware, spamming is the distribution of unsolicited bulk emails that contain invitations to participate in ways to earn money; obtain free products and services; win prizes; spy upon others; or obtain improvements to health or well-being, replace lost hair, increase one’s sexual prowess or cure cancer. The term ‘spam’ is derived from a Monty Python sketch in which the word ‘spam’ was repeated to the point of absurdity in a restaurant menu (CompuServe Inc. v. Cyber Promotions). There are a number of arguments in favour of spam based upon the need to promote legitimate commercial activity and also to uphold rights to free expression. There is also an argument that spammers, by continually challenging the system, contribute to overall improvements in internet security and in so doing support an entirely new anti-spamming wing of the cyber-security industry, creating new occupations and occupational spaces. But the demerits far outweigh the merits, as spams degrade and depress the quality of virtual life. Unlike terrestrial junk mail shots, which financially support postal services, spamming creates considerable impediments to the efficiency of email systems, choking up Internet bandwidth and access rates, reducing efficiency and costing internet service providers and individual users lost time through their having to manage spams and remedy the problems that they give rise to, such as infections by viruses (Wood, 2004: 29-31). And then there is the fact that they rarely live up to their promises. They introduce new risks in the form of unpleasant payloads, potent deceptions or harmful computer viruses and worms, and their relentlessness generally dispirits internet users. Research by Pew (2004) found that levels of distress caused by spam had increased and that growing numbers of Internet users were becoming disillusioned with email. Rather worrying is the prediction, supported by empirical observations (Wall, 2002; 2003) and other commentaries (Yaukey, 2001), that the problem of indiscriminate spams is likely to continue to increase during the coming years. Currently, between half and three quarters of our emails are spams (Leyden, 2003), and along with pop-ups and web ads, unsolicited messages constitute a major obstacle to effective internet usage and its further development. These trends are all the more worrying given the likelihood that spam numbers will continue to double each year (Wall, 2003), despite concerted attempts to stem their flow by recent anti-spam legislation and technology.

3 “Social engineering” is a term frequently used by crackers to denote the practice of tricking people into giving out personal information, such as passwords to secure systems. It denotes the exploitation of weaknesses in people rather than in software. It was made popular by Kevin Mitnick in his 2002 book, The Art of Deception. It has a prior history in Skinnerian behavioural psychology.
Spammers continually search for new ways to circumvent technological counter-measures, for example by installing Trojans that turn recipients’ PCs into ‘zombies’ and allow remote access, or by employing practices used by virus writers to get ‘unwanted messages into circulation’ (BBC, 2003). Research into the reflexivity of spammers reveals that the spam problem is far more complex than commonly assumed. The spamming industry is, in fact, two quite different sets of enterprises: the compilation and production of bulk email lists, which are then sold on to spammers, who then use those lists to spam recipients with a variety of offers.

Bulk email list compilation

The current legal method of compiling email lists in EU countries (under Directive 2002/58/EC) is to require voluntary opt-in to email lists through subscription. More commonplace, however, is the illegal compilation of email address lists (now illegal to use in the EU) by automated ‘spider-bots’ that scour the world wide web (Wall, 2005a). The economics are simple. Email addresses have no perceivable individual worth, but when collated with 10, 20, 40 or 80 million others they accumulate value. Spammers tend to use email addresses from lists sold to them in CD-ROM format by bulk email compilers. The following two advertisements for CD-ROMs containing lists of emails will be familiar to many readers.

“MULTILEVEL MARKETING OPPORTUNITIES: Email Addresses 407 MILLION in a 4-disk set ** Complete package only $99.95!! **”

“WE WILL SEND Successfully Emails 1.Million ADDRESSES =Only $99.95!! Nowhere else on the Internet is it possible to deliver your email ad to so many People at such a low cost.100% DELIVERABLE Want to give it a try? Fill out the Form below and fax it back:”

But few spams from these CD-ROMs will ever reach a recipient because most addresses will be inactive; then again, the economy of spamming is such that only a few responses are needed to recoup costs and make a profit.
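The ‘spider-bot’ harvesting described above can be sketched in a few lines of code. This is a deliberately naive, hypothetical illustration of why any address published in plain text on a public web page ends up on such lists; the pattern and function names are assumptions made for this sketch, not taken from any actual harvesting tool.

```python
import re

# A naive pattern of the kind an address-harvesting 'spider-bot' might use
# (illustrative only; real harvesters are considerably more sophisticated).
EMAIL_PATTERN = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest_addresses(html: str) -> set:
    """Return the set of strings in the page source that look like email addresses."""
    return set(EMAIL_PATTERN.findall(html))

page = '<p>Contact <a href="mailto:j.smith@example.ac.uk">Dr Smith</a> or info@example.com</p>'
print(sorted(harvest_addresses(page)))
```

Any page that publishes an address in plain text - as the test account discussed later in this article did - is trivially readable by such a script, which is why obfuscations such as ‘name at domain dot com’ became common.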
Ironically, some of the major victims of the spam list compilation industry are themselves intending spammers who have been duped into buying expensive CD-ROMs of unvalidated and useless email addresses.4 Active email addresses have a much higher value, which increases further when they are profiled by owner characteristics. In common with advertisements, spams containing information relevant to the recipient are the most likely to obtain a positive response and result in a successful transaction.

4 For an interesting, yet amusing, description of the spamming process see Anderson (2004).
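The claim that only a few responses are needed to recoup costs can be checked with some simple arithmetic. The figures below are illustrative assumptions only: the $99.95 price comes from the adverts quoted above, while the deliverability rate and profit per response are invented for the sketch.

```python
# Illustrative break-even arithmetic for a bulk-mail list (figures assumed).
list_cost = 99.95           # price of the CD-ROM list, as quoted in the adverts
addresses = 1_000_000       # addresses on the list
deliverable_rate = 0.10     # assume only 10% of addresses are still active
profit_per_response = 20.0  # assumed net profit per successful transaction

delivered = addresses * deliverable_rate
breakeven_responses = list_cost / profit_per_response
breakeven_rate = breakeven_responses / delivered

print(f"Responses needed to break even: {breakeven_responses:.0f}")
print(f"Required response rate among delivered mail: {breakeven_rate:.4%}")
```

Even on these cautious assumptions, roughly five successful responses per million addresses mailed cover the cost of the list, which is why CD-ROMs full of largely dead addresses still find buyers.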
“We specialize in successful, targeted opt-in email marketing. MORE THAN 164 CATEGORIES UNDUPLICATED Email Addresses!! ** business Opportunity seekers, MLM, Gambling, Adult, Auctions, Golf, Auto, Fitness Health, Investments, Sports, Phsychics, Opt-in Etc..”

A common, yet effective, strategy for confirming that an email address is active, and for yielding important information about the recipient, is to send out ‘spoof spams’ using one of three tactics. First, a blank email may be sent which requests an automatic response from the recipient’s computer upon opening. Second, the email may include offensive subject content or make preposterous claims that incite the recipient to ‘flame’ the sender. Third, the spoof may include the option to ‘deregister’ from the mail list, providing the spammer with important information about the recipient whilst also embroiling them in a ‘remove.com’ scam whereby they end up paying recurrent ‘administration charges’, apparently for an email preference service, without any proof that the service they are buying works. In each case, the spammer obtains confirmation that the email address is valid and may also receive some important information about the recipient, such as personal, occupational or commercial details. Email replies frequently reveal much about the sender, such as where they work: ‘ac.uk’, for example, denotes a UK university, ‘gov.uk’ a UK government office, and ‘nameofbusiness.com’ a business. Email signatures reveal even more specific personal information. An ongoing survey of spams received between 2000 and 2004 (Wall, 2003; 2005a) found that only a relatively small proportion, possibly just over ten per cent of all spams, were genuine attempts to inform recipients about products or services.
The remaining 90 per cent lacked plausibility, suggesting that spammers were either short on business acumen, were victims of unscrupulous list builders, or deliberately intended to deceive the recipient (Wall, 2003). Approximately one third of all received spams were ‘spoof spams’.

The contents of unsolicited bulk emails

An analysis of spam content lends weight to the earlier implausibility argument and outlines clearly the types of risk that recipients might face. Table 1 provides a proportional breakdown of the contents of spams received at one account during the first two years of the longitudinal study into spamming (mentioned earlier). It is followed by a more detailed description of their content, which illustrates the complex and multiple information flows that spams generate. Though not a precise match, the following categories find a resonance in Brightmail’s Slamming Spam (2002)5 and other spam surveys, such as Clearswift’s monthly spam index. Their prevalence changes over time, as the Clearswift index illustrates6.

Table 1: Unsolicited Bulk Email contents

Income generating claims: 28%
Pornography and materials with sexual content: 16%
Offers of free or discounted products, goods and services: 15%
Product adverts / information: 11%
Health cures / snake oil remedies: 11%
Loans, credit options or repair credit ratings: 9%
Surveillance / scares / urban legends: 5%
Opportunities to win something, on-line gambling options: 3%
Other: 2%
Total: 100%

• Income generating claims contain invitations to the recipient, supported by unsubstantiated claims, to take up or participate in lucrative business opportunities. Examples include the following: a) investment reports and schemes; b) lucrative business opportunities such as pyramid selling schemes, inc. ostrich farming schemes; c) earning money by working at home; d) ‘pump and dump’ investment scams; e) emailed Nigerian advanced fee scams; f) invitations to develop WWW sites and traffic for financial gain; g) phishing emails that purport to be from a legitimate bank requesting confirmation of personal details. Phishers rely upon the recipient’s inability to distinguish the bogus email from a real one and use the personal details to defraud the recipient (see Toyne, 2003).
5 Brightmail’s Probe Network Findings (Brightmail, 2002) used the following categories: Adult (8%), Financial (19%), Products (24%), Internet (14%), Other (35%).
6 See further .
• Pornography and materials with sexual content. Examples include the following: a) straight-forward invitations to gain access to a WWW site containing sexually explicit materials; b) invitations to join a group which is involved in sharing images and pictures about specific sexual activities; c) invitations to webmasters to increase their business traffic by including invitations to obtain access to sexually oriented materials on their sites. Many of these spams contain entrapment marketing scams.
• Offers of free or discounted products, goods and services (including free vacations). For recipients to be eligible for these offers, they usually have to provide something in return, such as money, a pledge (via a credit card) or information about themselves, their family, their work or their lifestyle. Enticements include the following: a) free trial periods for products or services, for example mobile phones, pagers or satellite TV, as long as the recipient first signs up to the service (it is up to the recipient to withdraw from the service); b) free products, such as mobile phones, pagers or satellite TV, if the recipient signs up to the service for a specified period of time; c) cheap grey market goods which exploit import tax or VAT differences between jurisdictions, such as cheap cigarettes, alcohol or fuel; d) spams which sell goods across borders, from jurisdictions in which the goods are legal to those where they are either illicit or restricted, as is the case with prescription medicines, body parts, sexual services, rare stones and antiquities.
• Advertisements / information about products and services. Some of these advertisements are genuine, others are blatantly deceptive. Examples include advertisements for the following: a) office supplies, especially print cartridges; b) greatly discounted computing and other equipment; c) medical supplies and equipment; d) branded goods at greatly discounted prices; e) educational qualifications; f) Internet auction scams, whereby an advertisement containing information about the auction is spammed; g) bulk email lists.
• Health cures / snake oil remedies. Spammers who advertise health cures or snake oil remedies seek to prey on vulnerable groups such as the sick, the elderly, the poor and the inadequate. Examples include the following offers: a) miracle diets; b) anti-ageing lotions and potions; c) the illegal provision of prescription medicines; d) expensive non-prescription medicines at greatly discounted prices (such as Viagra); e) hair loss remedies; f) various body enhancement lotions or potions to effect breast, penis or muscle enlargement, or fat reduction; g) operations to effect the above; h) cures for cancer and other serious illnesses.
• Loans, credit options or repair of credit ratings. Examples include the following propositions: a) instant and unlimited loans or credit facilities, or instant mortgages, often without the need for credit checks or security; b) the repair of bad credit ratings; c) credit cards with zero or very low interest; d) offers which purport to target and engage with people whose financial life, for various reasons, exists outside the centrally run, credit-rating driven banking system.
• Surveillance information, software and devices. This category is hard to disaggregate from the mischief section below - the two are included together in Table 1 because it is hard to tell whether the information and products are genuine or not. Examples include: a) scare stories about the ability of others to surveil recipients’ Internet use, designed to coerce recipients into buying materials - books, software etc. - about how to combat Internet surveillance and find out what other people know about them; b) encouraging recipients to submit their online access details, purportedly to find out what others know about them; c) recommending a www-based service for testing the recipient’s own computer security; d) encouraging recipients to purchase spyware that allegedly equips them to undertake Internet surveillance upon others (see earlier descriptions).
• Hoaxes / urban legends / mischief collections. Examples include: a) spams that appear to be informative and tell stories that perpetuate various urban legends; b) hoax virus announcements or ‘gullibility viruses’ which seek to convince recipients that they have accidentally received a virus and then provide instructions on how to remedy the problem, deceiving them into removing a system file from their computer; c) messages which appear to be from friends, colleagues or other plausible sources and deceive the recipients into opening an attachment which contains a virus or a worm; d) chain letters which sometimes suggest severe consequences to the recipient if they do not comply, or the letters may
engage the recipient’s sympathy with a particular minority group or cause, for example single parent women, or women in general (the Sisterhood Chain Letter scam); e) email-based victim-donation scams that emerged on the Internet soon after the events of 11 September 2001; f) invitations to donate funds to obscure religious-based activities or organisations; g) links to hoax WWW sites, such as Convict.Net, which originally started as a spoof site but was so heavily subscribed by former convicts that it eventually became a reality.

• Opportunities to win something, on-line gambling options. Examples include: a) notification that the recipient has won a competition and must contact the sender so that the prize can be claimed - or they might have to provide some personal information before the prize can be received; b) offers to enter a competition if information or money is provided; c) free lines of credit on new trial gambling www sites.
Victims and offenders

Many of the spams mentioned above are disguised forms of entrapment marketing from which victims subsequently find it hard to disengage. The victims of malicious spam content are very hard to identify because they are a heterogeneous group of internet users - heterogeneity is possibly their only common characteristic. There is also the more general problem of under-reporting. Victims, for example, often do not know to whom to report, may be too embarrassed, or, where individual losses are small, may simply tolerate the loss. Indeed, an analysis of ISP complaints statistics over 18 months found that the overall threat to the majority of individuals is reasonably small, and that they gradually tend to find their own ways of dealing with spams, which suggests patterns of personal risk assessment and avoidance similar to those found offline (Wall, 2002). Even if reported, spam-assisted crime will tend to be recorded by the principal offence committed and not the spam. Furthermore, if reported to the police, the victim is likely to fall into the de minimis trap - the ‘law does not deal with trifles’ (de minimis non curat lex). The de minimis characteristic of small-impact bulk victimisations found in cybercrime, combined with the globalised, cross-jurisdictional reach of most cybercrimes, places spamming outside the traditional Peelian framework of policing the dangerous classes which frames the police/public mandate (Manning, 1978; Critchley, 1978; Reiner, 2000: Ch 2; Wall, 1998: 23). As a consequence, there exist many operational, organisational and legal obstacles to the allocation of police resources for investigation and prosecution (Wall, 2005c). Either it is not deemed to be in the public interest to investigate individual cybercrimes such as spam because they are too minor in nature, or they are too complex technically or cross-jurisdictionally to make conviction likely.
So, unless the impact of the crime is considerable, it is unlikely that investigative resources will be released. Finally, even if those resources were released to launch an investigation, the agency to whom the victimisation is reported may not have the knowledge, skill-sets or experience to respond (Wall, 2005c). The greatest danger posed by spams is to the more vulnerable communities in society: the poor with financial problems; the terminally sick, ever hopeful of some relief from their pain; the poor single parent who sends off their last $200 for a ‘work at home’ scheme; the youths who seek out ‘cheats’ for their computer games. One particularly vulnerable group are the newly retired, who possess all of the characteristics of the likely online fraud victim - spare time, a lack of computer knowledge and savvy, and large sums of money to invest. Novel forms of spamming, particularly those employing tactics based upon ‘social engineering’, such as phishing and ‘gullibility viruses’, frequently catch many internet users unaware until the nature of the new risk is understood - whether through word of mouth, local IT support, the www or media reporting. Just as the victims of spam-related crime are a heterogeneous group, so too are the spammers. At one end of the spectrum are the honest brokers who genuinely seek to advertise products and services, but at the other end are the dishonest brokers whose aim is to entrap and defraud. It is important to remember that many spammers are themselves victims of spam list-building frauds. Somewhere in the middle are the misguided brokers, protesters, pranksters, smugglers, artists and list builders (Wall, 2002). These findings show that not all spammers are rational actors who seek to maximise benefits while reducing their costs (Savona and Mignone, 2004: 4) - only some are - which has implications for the simple blanket application of the rational choice theory based spam solutions discussed later in this article.
The characteristic that spammers and their
victims each appear to have in common is the internet7; therefore it is in the governance of behaviour on the internet that the solution to spam must lie.

‘Canning the Spam!’

In spamming we find a pure cybercrime: if the internet could be removed, then spamming would disappear. But the problem is that we cannot take away the internet! What, therefore, do we do about the spam problem?

Legal versus technological determinism

There are currently two major schools of thought dominating the debate over spamming: legal and technological determinism (in addition to the libertarian and laissez-faire models). Underpinning the former is the belief that the norms embodied in legislation condition social change; underpinning the latter is the belief that technology performs this role. The traditional legal determinist solution to an undesirable behavioural problem is to introduce new laws to curtail it, and there has of late been no shortage of laws to deal with spam. On 11 December 2003, the UK introduced compulsory opt-in legislation in the form of the Privacy and Electronic Communications (EC Directive) Regulations 2003 (SI/2003/2426), which brought into effect Article 13 of EU Directive 2002/58/EC8 on privacy and electronic communications, passed in July of the previous year. Prior to December 2003, the UK had adopted a self-regulatory model in which spammers were supposed to provide those on their mail lists with the facility to opt out. The new EU law outlawed spamming unless consent had previously been obtained from the recipient. The main problem for EU law, as Table 2 illustrates, is that most of the spam received in the UK - over 90 per cent - originates outside the EU, with about half from the USA and a quarter from Far Eastern countries.9

Table 2: The ‘Dirty Dozen’ spam producing countries (Sources: Sophos, 2004a and 2004b) (N.B. Feb. 2004 positions in brackets)

Ranking | Country | Aug. 2004 | Feb. 2004 | Change over 6 months
1 (1) | United States | 42.5% | (56.7%) | -14.2%
2 (4) | South Korea | 15.4% | (5.8%) | +9.7%
3 (3) | China (& Hong Kong) | 11.6% | (6.2%) | +5.4%
4 (6) | Brazil | 6.2% | (2.0%) | +4.2%
5 (2) | Canada | 2.9% | (6.8%) | -3.9%
6 (-) | Japan | 2.9% | (1.8%) | -0.6%
7 (7) | Germany | 1.3% | (1.5%) | -0.3%
8 (8) | France | 1.2% | (1.1%) | +0.1%
9 (12) | Spain | 1.2% | (1.3%) | -0.2%
10 (9) | United Kingdom | 1.2% | (1.2%) | -0.2%
11 (11) | Mexico | 1.0% | (2.1%) | -0.5%
12 (-) | Taiwan | 0.9% | (1.2%) | 0.0%
- (5) | Netherlands | - | - | -
- (10) | Australia | - | - | -
Others | | 11.8% | (12.2%) |
Total | | 100.0% | (100.0%) |
In response to spamming, the US Federal legislation ‘Controlling the Assault of Non-Solicited Pornography and Marketing Act of 2003’, or the ‘CAN-SPAM Act of 2003’ (S. 877), was passed by Congress in early December 2003 and came into effect on 1 January 2004. It imposed ‘limitations and penalties on the transmission of unsolicited commercial electronic mail via the Internet’. The US legislation sits on top of a variegated patchwork of state legislation, some of which is very strong, for example in California, and some of which is weak or non-existent. Unlike the EU legislation, which requires recipients to have previously opted in to spam lists, the CAN-SPAM Act introduced the compulsory requirement for spammers to provide opt-outs (the UK’s previous position), immediately creating a discordance between the EU/UK and US approaches. However, despite the discord and criticisms of their respective shortcomings, bodies of legislation in the EU and US were brought into effect from early 2004 where neither previously existed, but the impacts were not precisely those expected by the legal determinists. Six months after the legislation came into effect, there was a noticeable decrease in the percentage of spam from North America and also a slight decrease from the EU, against an increase from Asia and South America - but the majorities remain. Upon reflection, however, the changes in the distribution of spams are probably more the product of changes in spam technology, for example the use of CSS (Cascading Style Sheets), the exploitation of non-delivery reports and the deployment of remotely controlled ‘zombie’ machines. Figure 1 illustrates a considerable increase in the overall number of spams in circulation.

7 See Smith, Grabosky and Urbas (2004) for a more general overview of cyber-criminals. The case studies in their Appendix illustrate heterogeneity in the broader population of cyber-criminals.
8 See paragraphs 40, 41, 42, 43, 44, 45. The full text of the EU Directive on privacy and electronic communications (concerning the processing of personal data and the protection of privacy in the electronic communications sector) is at: europa.eu.int/eurlex/pri/en/oj/dat/2002/l_201/l_20120020731en00370047.pdf
9 Others, such as Kurt Einzinger, general secretary of Internet Service Providers Austria, have claimed that the percentage from the US is even higher, at around 80 per cent (Ermert, 2004).

Figure 1: The relationship between law and spam distribution
Graph line A illustrates the relationship between spam distribution and the process of introducing anti-spamming law. The increase in the number of spams received each month (Graph A) clearly suggests the failure of law to control spamming. The exponential rise in spamming appears to begin in mid-2002 - around the same time as the EU Directive was passed - and was not hindered by the EU and US legislation of December 2003. Indeed, the cynical reader might be forgiven for believing that the legislative process actually made the spamming problem worse. Then again, the graph line also shows a marked drop-off in the number of spams received after April 2004, so perhaps this is evidence of a delayed impact of law. Looking more closely at the trends, Figure 1 represents a data stream based upon a longitudinal study of monthly spam receipts by an account whose email address was published on a public www site and was therefore vulnerable to address-gathering ‘spider-bots’. The account remained unchanged for 4.5 years and therefore reflects any overall changes in spamming activity during that period. In May 2004 the data stream was interrupted when the account was moved to another server which ran anti-spam software that made various checks on incoming emails and intercepted ‘confirmed’ unsolicited bulk emails before they reached the account’s mailbox. Previously, the software had only flagged potential spams before placing them in the mailbox, thus giving the account owner the choice over what to do with them. This ‘technological’ event, not the law, explains the dramatic fall in spams received by the test account since April/May 2004 and appears to give weight to the technological determinist position - that technology can be used to reduce opportunities for crime by suppressing, or ‘designing out’, the opportunities that encourage the offending behaviour.
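The distinction between flagging and interception can be made concrete with a small sketch. The rules, weights and threshold below are purely illustrative assumptions (production filters such as SpamAssassin combine hundreds of weighted tests and statistical models); the point is that the policy choice - tag and deliver, or silently remove en route - is itself written into the code:

```python
from typing import Optional

# Illustrative keyword weights - an assumption for this sketch,
# not any real filter's rule set.
RULES = {
    "viagra": 3.0,
    "opt-in": 1.5,
    "click here": 1.0,
    "100% free": 2.0,
}
THRESHOLD = 3.0  # scores at or above this are treated as spam


def score(message: str) -> float:
    """Sum the weights of every rule that matches the message text."""
    text = message.lower()
    return sum(weight for keyword, weight in RULES.items() if keyword in text)


def deliver(message: str, mode: str = "flag") -> Optional[str]:
    """Return what lands in the mailbox, or None if removed en route.

    mode="flag":      tag suspected spam but still deliver it (user decides).
    mode="intercept": drop suspected spam before it reaches the mailbox.
    """
    if score(message) < THRESHOLD:
        return message
    if mode == "flag":
        return "[SPAM?] " + message
    return None  # interception: the recipient never sees the message


spam = "Click here for 100% free offers"
print(deliver(spam, mode="flag"))       # tagged but delivered
print(deliver(spam, mode="intercept"))  # None - removed en route
```

In ‘flag’ mode the recipient retains the choice described above; in ‘intercept’ mode that choice is exercised on their behalf, before the message ever reaches the mailbox.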
Without the intervention of anti-spam technology in May 2004 the receipt of spams would have continued to rise, and research by Commtouch Software Ltd. independently confirms this observation. It found that the number of spams originating in the US increased by 43 per cent in the six months following the introduction of the US anti-spam legislation (Gaudin, 2004). This increase, observable on Graph A in Figure 1, was largely due to the rise in the number of 'botnets'. Botnets are networks of virus-infected computers ('zombies') that can be remotely controlled by their 'infectors' to become platforms for launching 'denial of service' attacks or 'phishing' expeditions and, importantly, further spam distribution through spamming programmes known as 'ratware'. Botnets (constituted by a list of the IP addresses of infected computers) are potentially valuable commodities because of the levels of illegal access that they provide, and they are traded. They indicate the emergence of a new generation of cybercrimes being brought about by collaboration between, and the convergence of the cybercrime skill sets of, hackers, virus writers and spammers. They are not discussed here, but see further Wall (2004; 2006). These observations appear to support Stewart Room's argument that the real weapons against spam may not actually be found in Directive 2002/58 or SI 2003/2426 or in the CAN-SPAM Act (Room, 2003: 1780), but in technology. What has actually happened is that spams are now being identified and deleted en route to the recipient. Although this interception brings internet users some respite from the never-ending spam tsunami, there is no reason to believe that the 'technological' solutions have stemmed spamming behaviour at all; they have only reduced the number of spams received by the account, and thereby the risk to the recipient. The solution to spamming is therefore more complex than a technological fix (the subject of the subsequent discussion).
More worrying is that the decision to intercept spams appears to be based upon scientific considerations made in the absence of any critical debate - considerations which wrongly assume that spammers are homogeneous and act rationally. This practice of 'ubiquitous crime prevention' contravenes the established principle of 'end-to-end' architecture, which values free communication while leaving the choice over what and what not to receive with the user (Lessig, 2001: 35-37, 173).
Towards a digital realism
The 'digital realism' of spamming is more than a simple set of binaries - to spam or not to spam, or law v. technology. We therefore need to explore recent developments in thinking about the governance of online behaviour from a multi-disciplinary perspective that identifies its relative merits and demerits. Some inspiration for such an approach can be found articulated in Lessig's early 'New Chicago School' arguments (Lessig, 1999: 235), which draw upon Mitchell's City of Bits (1995), whereby behaviour is shaped by the codes that create the architecture of the internet. It is a thesis that draws inspiration from Foucault's analysis of Bentham's Panopticon (Foucault, 1983: 223), in which the physical design of the prison shapes the power relationship between prisoner and guard in favour of the latter. Within Bentham's Panopticon, only a few guards were required to control a large number of prisoners. It worked on the principle of 'natural surveillance', with prisoners obeying the prison rules of conduct because they feared punishment and never knew when they were being watched. The disciplinary theory behind the Panopticon was that, under the panoptic gaze, offenders would adhere to the prison regime in order to avoid punishment, eventually modifying their behaviour. Although the 'architectural' arguments of Lessig et al.
provide the basis for analysing the role of law in governance of harmful online behaviour, it was Greenleaf (1998) who made out the case for a ‘digital realist’ approach which takes into account the fact that the law in isolation only has a limited direct impact upon behaviour, but does have the capacity to shape the environment in which behaviour takes place. Like Lessig, Greenleaf explores the relationship between the law, the ‘codes’ which create the architecture of the Internet, social norms and the opportunities created by the market. Of specific interest is the capacity of these "four modalities of constraint" (Lessig, 1999) to shape criminal or deviant behaviour. The broader themes expressed within this digital realist perspective are not so far removed from existing crime prevention theories that advocate the use of technological means to change the physical environment of criminal opportunity in order to reduce it. Where it departs from these theories is that digital realism considers the specific nature and implications of the architecture of the Internet as ‘codes’ – technology creates the environment. It is therefore useful to bring together the debates over internet governance and the ‘criminologies of everyday life’ (including situational crime prevention) which also use ‘technologies of control’ (Marx, 2001) to effect the governance of online behaviour.
Reading between the lines, the principal strength of the digital realist argument is that neither law nor technology alone can be considered the sole driver of social action. This observation is important if we are going to effect a realistic policy to shape undesirable internet behaviour such as spamming. The problem, however, is that although Lessig points us in the right direction, he doesn't really show us where to go, because his primary objective is to contribute to debates over freedom of expression on the internet rather than to develop a theory of internet governance. As a consequence, two main weaknesses emerge. The first is his over-simplification of software as 'code' (Greenleaf, 1998) and his under-conceptualisation of its many facets. Although Lessig implies that code has many intentions, what is not fully explored are the different functions of code(s). Some codes, for example, create and facilitate very desirable behaviours that celebrate what is good about the internet; others, as indicated above, create new opportunities for undesirable behaviour. Some codes interrogate, or allow the investigation of, other codes; others provide stealth and protection against intrusion. Some codes attack and destroy other codes; others guard against such attack. Some codes facilitate behaviours; others restrict them. Some codes are easily changed; others are embedded in the hardware. This spectrum of usage and purpose makes Lessig's statement that 'code is law' all the more confusing. While code can do the work that satisfies law's desire, surely it is of paramount importance that the law remains the authority. To say 'code is the law' is rather like saying that police officers are the law, when in fact their role is to apply the law. The second weakness is Lessig's under-conceptualisation of the four 'constraints', each of which also has a powerful enabling function.
His focus upon their constraining qualities is restrictive and generally understates their value as instruments of behavioural governance. It also over-simplifies the relationship between them, which (despite the many caveats) Lessig tends to portray in functional rather than relational terms, reducing them at times to a simple flow chart (Lessig, 1999: 88) and thereby downplaying the complexities of any relationship that exists between them - in some ways rather reminiscent of Talcott Parsons's The Social System: top down, too neat and tidy. Of these 'constraints', his use of the concept of architecture is the hardest to pin down because it develops as the book progresses, varying from being one of the four modalities of constraint to being the meta-structure in which all constraints are located. In short, Lessig does not adequately acknowledge or accommodate the quite different 'spaces' that each modality represents. Since Lessig clearly stated at the outset that his mission was to contribute to the broader debate over freedom of expression, such a critique is in many ways a little unfair. However, a critical overview enables the ideas to be advanced. Particularly useful is the notion, identified earlier, of the 'four modalities' as spaces, or sub-architectures, that empower as well as constrain within the overall architecture of cyberspace. Perceiving them as 'spaces' reveals the specific and different nature of the power relationships that each imposes to shape behaviour. Legal spaces are, for example, quite different to social spaces, which are different from market spaces and also from the broader architectural structure in which they take place. Consequently, the new digital realist approach reconciles the desires of law with the digital realism of cybercrime: legislative intentions, social attitudes towards spam, market forces and the technologies of control.
It is very important not simply to view the 'modalities' as parts of a whole and assume that a functional relationship exists between each. While they may appear to 'work' together as a functional entity, displaying a broad common purpose, they may not necessarily have any other unity, being shaped as a complex latticework of influences rather than by a single driver (of social action). The concept of 'assemblage' (Haggerty and Ericson, 2000: 605) is much more useful in explaining the relationship between them. While Haggerty and Ericson, after Deleuze and Guattari, were adapting 'assemblage' to explain the lack of causal relationship in the application of different surveillance technologies, the analogy remains usefully applicable to strategies of governance. The detailed debate over the value of applying 'configuration' and 'assemblage' to governance is for another paper, but for this discussion the concepts accommodate overlap between the spaces, any confluence that occurs and also any unintentional cross-influences. This revision of Lessig's modalities as different types of spaces, each with its own architecture, enables us to demonstrate the complexity of the relationship between law and social action and also to understand the ways that law, as an expression of norms, 'casts its shadow' over the field and can exercise a 'chilling effect' upon spamming behaviour.
Using legislation to change the legal and regulative architecture
Introducing new legislation is the most obvious starting point in dealing with spamming. However, while the law spells out what is right and wrong, various challenges to law enforcement arise, as do issues of unequal access to justice for the individual, because the spam problem is trans-jurisdictional. As highlighted earlier, over 90 per cent of the spams received in the EU come from outside its boundaries. Yet law is an important source of authority: it clarifies the formal (state) position on the issue, sends a clear message to spammers and legitimises the use of a range of legal techniques. Such techniques range from 'cease and desist' notices ('letters before action' in the UK) to privately brought criminal actions, or civil actions brought with a view to developing the common law position (in common law countries). One interesting and very pertinent example of the chilling effects of law is found in the Institute for Spam and Internet Public Policy's 'death by 100 paper cuts' strategy, which encourages victims of spam (mostly domain name owners) to 'sue a spoofer': with enough domain owners standing up, saying "[w]e're not going to take it" and fighting back, spammers will have to stop spoofing, if not stop spamming altogether. Law 'casts its shadow' widely, and under that shadow fall a range of other actions.
Shaping social values to change the normative architecture
This can take place in two different ways. The first is passive education: educating internet users to understand the nature of the beast - what is and what is not spam, and what the risks are - so that they make up their own minds and express their own choices, driven by their own experiences and supported by information sources made available by coalitions of interested parties, NGOs and government organisations.
Especially informative are Spamhaus.org, the Coalition Against Unsolicited Commercial Email (cauce.org) and David Sorkin's spamlaws.com site. The most common outcome of increased user awareness is that users begin to deal with their own spams. This is clearly illustrated in the overall drop in the percentage (and number) of complaints to ISPs about unsolicited bulk emails (Wall, 2003). Over an 18-month period the percentage of quarterly complaints about spams fell from 38 per cent to 14 per cent (Wall, 2002). The second way is by actively encouraging the building, or galvanising, of online communities of users to counter spam. A number of counter-spam communities exist that individuals can consult to find out how to remove their own addresses from existing spam lists: see, for example, Spamhaus.org, Junkbusters.com and Spambusters.com. Of course, these active groups also provide passive information which is educational. Alternatively, spam groups may go further and actively push the political process for change, for example by lobbying politicians to bring about a more co-ordinated international response. The Parliamentary All Party Internet Group (apig.org.uk) has been very active in this endeavour, as has CAUCE (the Coalition Against Unsolicited Commercial Email), resulting in legislative responses by the EU Parliament and US Congress. In both examples, law provides both a reference point and empowerment.
Shaping markets to change the architectures of consumption
Because spams present internet users with a major disincentive to use the internet, one which exacts a commercial toll, internet service providers (including organisations providing networked services to employees and clients) are forced by market pressures to introduce ever more robust policies towards spams. Although levels of responsiveness vary, anti-spam policies have their origins in user action in so far as it shapes the market for a product.
Simply put, the more spam received, the greater the need to act. But the policies also take authority from law. Although the anti-spam law does not specifically prescribe ISP action, its 'shadow' does strengthen the hand of the ISP, and others, when introducing anti-spam measures. In addition to introducing anti-spam technology into their servers, ISPs also want to be seen to act (legally) against those who bring their servers into disrepute by distributing spams - to disincentivise spammers and create a further 'chilling effect'. A good anti-spam policy and a track record of legal action make for improved business, deter spammers, and also enable ISPs to offset potential liabilities.
Shaping the code to change the architectures of communication
Following on from the above, spams and their payloads have become such a prominent issue in recent years that the major ISPs are now employing sophisticated filtering software to identify known spams and remove them from their systems. There are also many new and quite effective technological methods of hardening potential target computers with spam
filters, email preference services, email filtering facilities and other security software. Indeed, Graph B on Figure 1 tells a completely different story to Graph A, as it represents the spams that got past the spam filters and ended up in the mailbox. Only 6 per cent of the spams received by the research account were not identified as spams by the filtering system. Clearly, technology has had a greater impact than law; as Lilian Edwards has observed, 'code trumps the law' (Edwards, 2004). However, it is the anti-spam law that provides the authority for the use of anti-spam code. So, in this sense, code is not the law; rather, this reaffirms the view that law is an authority behind the code it empowers - an important distinction, else we begin to believe that the use of technology is 'natural' and benign rather than the product of human action. What this 'digital realist' analysis illustrates is that while the direct impact of law upon behaviour may be limited, law has nevertheless played an important, though not exclusive, role in the governance of spams. Under the 'shadow of law' technology is effective in shaping the architecture(s) to reduce spam receipts, but law's shadow also strengthens social values against spammers and shapes the market against them. The fact is that the receipt of spams has declined considerably, possibly by more than 95 per cent - which is good news.11 The problem is that the (mainly technological) interventions have not been the product of a coherent policy formation process, with the consequence that they represent a shift away from 'end-to-end' policy towards the embedding of policy in the codes that facilitate communications.
Embedding policy into code
With policy embedded into the code, users can no longer exercise a free choice to mediate their own communications.
While it is unlikely that many people would wish for more spams, the uncritical adoption of technological interventions has much broader implications for the flow of information on the internet. This brings us back to Lessig, the role of government in the origination of cybercrime (prevention) policy, and also the role of law. He shows that the confluence of codes as regulators and codes as architecture makes the internet vulnerable. This is because his primary concern has been to 'build a world of liberty' into an internet whose future looks increasingly controlled by technologies of commerce backed by the rule of law (Lessig, 1999: x), but also unnervingly controlled by distributed, rather than centralised, sources of authority: “The challenge of our generation is to reconcile these two forces. How do we protect liberty when the architectures of control are managed as much by government as by the private sector? How do we assure privacy when the ether perpetually spies? How do we guarantee free thought when the push is to propertize every idea? How do we guarantee self-determination when the architectures of control are perpetually determined elsewhere?” (p. x-xi). It is Katyal (2003) who has made some inroads into these conundrums with regard to the cybercrime debate. He takes forward Lessig's lead in isolating the concept of 'architecture' as a constraint upon online behaviour to inform the regulation of computer crime (Katyal, 2003: 2261). He draws upon ideas of crime prevention that employ the manipulation of real-space architecture to show that four digital design principles can be employed to prevent cybercrime: natural surveillance, territoriality, community building and the protection of targets.
Katyal states that cyberspace solutions to cybercrime must therefore try to capture the root benefits of the internet's potential to impose natural surveillance, territoriality (stewardship of a virtual area), capacity for building online communities and protection of targets, without damaging its principal design innovation - its openness (Katyal, 2003: 2268). Yet he observes the merits of balancing the advantages against the disadvantages, because this 'openness' can be "both a blessing and a curse". On the one hand, it helps software, particularly open source code, adapt when vulnerabilities are found, but on the other hand, "… the ease with which architecture is changed can also facilitate exit and network fragmentation" (Katyal, 2003: 2267). Similarly, closed or hidden code, especially that which creates the higher-end internet architecture, in the hands of the private sector can lead to comparable fragmentation. Lessig himself has long argued that private ordering can "pose dangers as severe as those levied by the state" (Katyal, 2003: 2283; Lessig, 1999: 208), which he similarly fears because of its lack of transparency. But, contra Lessig, Katyal claims that since architecture is such an important tool of control, this lack of transparency is precisely the reason why
11 This was the collective capture rate from three spam-filters employed at the user end. New research by IBM, which employs new methods of identification, promises to capture upwards of 97 per cent of spams (Leyden, 2004).
the government “should regulate architecture, and why such regulation is not as dire a solution as Lessig portrays” (Katyal, 2003: 2283). Government regulation, for Katyal, is the lesser of the two evils because it works within more transparent frameworks of accountability. But this brings us back to the implications of the uncritical application of anti-spam technology. The apparent success story of the taming of spams also resonates with the traditional debates over situational crime prevention found within criminology - debates that can further inform the formation of cybercrime prevention policies. We know from the mainstream criminological literature that victims - in this case of spamming and associated crimes - clearly feel let down by the criminal justice system and have begun to tolerate spam by adopting a range of coping strategies. These strategies may involve spending time manually deleting spurious messages, regularly changing email addresses, installing anti-spam filter technology and so on. In Garland's words, they adopt “a stoical adaptation that prompts new habits of avoidance and aggravation at the cumulative nuisance that crime represents for daily life” (Garland, 2000). In so doing, however, they also become less sympathetic towards offenders - as the social and psychic investment of individuals in crime expands, we end up with more punitive language and a harder response. In other words, spam recipients feel less and less well disposed towards spammers, particularly as spamming becomes more relentless, and are therefore more ready to accept the imposition, backed by law, of anti-spam technology. But to what effect? Although this debate has hitherto largely been about freedom of expression, there is an even broader and equally serious consideration about self-determination - one of the characteristic benefits of the internet.
The technological fix is becoming an increasingly easy option, especially when it is being carried out automatically on our behalf by technology. The problem is that the embedding of policy in the code is not just about freedom of expression, because there are very few who would actively support spamming either in principle or in practice. Rather, it is about the loss of, and lack of, choice, and about knowing at what point the code ceases to be a help and becomes a hindrance, whether consciously recognised or not. We may do well here to take heed of the concerns expressed by the Frankfurt School about using technology to solve problems, because without checks and balances the technology becomes “aware of everything but itself and its own blind spots and biases” (Agger, 2004: 147). “Once we assume that scientific methodology can solve all intellectual problems, science becomes mythology, aware of everything but itself and its own blind spots and biases. This results in authoritarianism, especially where science is harnessed to industrial-age technology and nature is conceptualised as a sheer utility for the human species” (ibid). This authoritarianism often rears its ugly head in the arguments underpinning the introduction of technological restraint in the application of what have become known as the ‘criminologies of the other’ and the ‘criminologies of everyday life’ (Garland, 2000; 2001), which include rational choice theory, routine activities theory and lifestyle theory. Embodying a Hobbesian view of the individual, these anti-social criminologies (Hughes et al., 2001) purport to reduce crime by reducing criminal opportunities through the wholesale restriction of movement, regardless of individual intent. Situational crime prevention, for example, draws upon the criminologies of everyday life (Garland, 2001: 127) to focus upon the reduction of opportunity by increasing the effort needed to commit a crime, increasing the risks to the offender, or reducing the rewards of crime.
Whereas a utilitarian case can be argued to support situational crime prevention policy in the physical world, it begins to fall down in the virtual world because the manipulation of codes can have such an overwhelming and absolute impact on users’ ability to exercise a rational choice. There are many continuities in the use of technologies of social control and power, especially in terms of the rationale behind the use of technological devices - namely to rationalise capital and make commercial processes more efficient, economic and effective. However, the discontinuities begin to occur in the sheer magnitude of distributed networked technologies and the impact of code upon choice, ‘movement’, global reach and ‘equidistance’ (Geer, 2004). It is important at this point to draw a fine line in the distinction made earlier between pure cybercrimes and traditional/hybrid cybercrimes. Newman and Clarke (2003), for example, make a plausible - though contestable - argument for applying situational crime prevention to ‘e-commerce crime’, which they define as economic crime conducted over the internet. For the purposes of this article, this is ‘hybrid cybercrime’ - traditional criminal behaviour for which new (networked) opportunities arise, such as fraudulent retailing,
financial services fraud, medical services/products fraud, e-auction fraud and peer-to-peer transactions (Wall, 2005b: 82). Take away the internet and these forms of behaviour would still exist, using alternative forms of communication and probably on a smaller and much more local scale. While some of these economic crimes will inevitably be the secondary impact of spam - because the spam brought about the engagement of the spammer with their victims - the crime prevention solution is not dissimilar to its off-line origins: mainly ‘end-to-end’ solutions which seek to engender trust between parties engaged in a commercial transaction by verifying the true intentions of each, not policy embedded within the main communications architecture. Spam, as a pure cybercrime, is on the other hand a very different issue. The technology introduced to remove spam does not discriminate on the basis of intent; it simply blocks all messages that contravene a set of criteria represented in a scientific formula embedded in the codes that control the communication process. This is problematic because spammers are, as outlined earlier, a heterogeneous group and not all spams necessarily contain criminal intent. There is also the problem of applying different anti-spam legislation across a range of jurisdictions and cultures, increasing the need to ‘think globally, but act locally’ (Kuchinskas, 2004). The debate over whether law or technology is the best way to solve the problem of spam has recently taxed organisations such as the Anti-Spam Research Group of the Internet Research Task Force. Interestingly, there is a growing acceptance that stronger and harder legislation is less desirable than “a mix of private legal action and technology”, using “technology to make laws more enforceable” (Kuchinskas, 2004) - not so far, conceptually, from a digital realist approach.
Conclusions
Spams introduce a range of new risks delivered through the conduit of networked technologies.
Although law has been introduced to outlaw spamming, it has been trumped by the ‘technological fix’, which has had by far the greater impact on reducing risk to the recipient. However, the literature from Karl Marx through to Gary Marx (2001), via many others, has illustrated quite conclusively that the application of technologies of control alone tends to inscribe distrust into the process, leading to the breakdown of trust relationships, because such technologies fail to reassure, create fresh demands for novel forms of trust and then institutionalise them (paraphrasing Crawford, 2000). In fact, history generally shows that technologies of control can often make the very problems they were designed to solve much worse, especially if, in the case of spam, they end up hardening the spammer’s resolve. Already there is evidence to show that the means of circumventing anti-spam measures are becoming more sophisticated (BBC, 2003) and that there are also marked signs of resistance in the form of hacktivism, denial of service attacks and the various forms of ethical and non-ethical hacking. This is in addition to the increasing convergence of the (cyber)criminal skill sets belonging to fraudsters, hackers, virus writers and spammers (Wall, 2004). But this does not mean that law has failed; rather, the preceding analysis reveals that the law works in a number of different ways, and at a number of different levels, to achieve its desire. In many ways the discussion over spamming is fairly unproblematic because of the overwhelming support for anti-spam measures, so it is harder to make some of the arguments posed earlier stick. However, what the discussion has exposed is the necessity of ensuring that there is legal authority in place which justifies and legitimises action, provides some transparency and also allows recourse in cases where injustice occurs.
Because there is no agreed meaning of what constitutes ‘online order’ that is common enough to render it simply and uncritically reducible to a set of formulae and algorithms that can subsequently be imposed (surreptitiously) by technological imperatives, the imposition of order online, as offline, needs to be subject to critical discussion and also to checks and balances that have their origins in the authority of law rather than in technological capabilities. When the discussion shifts to the contravention of the ‘end-to-end’ principle through the use of wholesale technological interventions, the important role to be played by law becomes most visible. Some measure of legal support for this principle can, interestingly, be found in the US case of Metro-Goldwyn-Mayer Studios, Inc. v. Grokster, Ltd., in which various facets of the music industry attempted to curtail the use of peer-to-peer technology (in the distribution of MP3s). The courts found against the plaintiffs - a decision upheld by the 9th Circuit on appeal - on the basis that ‘significant non-infringing uses’ would be inconvenienced were the decision to have been made in favour of MGM et al. The decision effectively preserves the principle of ‘end-to-end’, because technological interventions also ensnare legitimate users of the technology. It will be interesting to see whether a similar argument will be launched against the design of anti-spam filters at some point in the future.
We need to wise up to the fact that spamming is here to stay in one form or another. Not only will it continue to increase in volume, but spammers and their software will continue to be inventive and reflexive in overcoming security measures (Wood, 2004: 31-32). Unfortunately, this enduring reflexivity means that we have some hard decisions to make if we are to maintain current internet freedoms and openness and not become strangled by security. Either we continue to allow decisions to be made on scientific grounds and allow the ‘control creep’ (Innes, 2001), endemic to the present climate of post-9/11 security consciousness (Levi and Wall, 2004), to continue to the point where the pessimistic authoritarian predictions of the Frankfurt School come true and networked technologies become what we all fear: the insidious meeting point of crime science and governance at a distance. Or we work towards developing workable, law-driven frameworks in which a range of considerations – including technology, but also social and market values – is employed to reduce harmful online behaviour. There is a hard message here: if criminologists do not take on this pragmatic role, then they will quickly be eclipsed by the crime scientists, who will (Clarke, 2004: 55). It is inevitable that, in practice, ideals will become moulded by the politics of compromise, but if this means that we have to tolerate spam to a small degree in order to preserve what is good about the internet, then it will be a small price to pay. Given that purely technological solutions are problematic for the reasons outlined earlier, a digital realist approach could inform (social) policy formation so as to constitute the most viable and effective attack upon what has quickly become ‘the white noise’ of the internet.

Cases

CompuServe Inc. v. Cyber Promotions, 962 F.Supp. 1015.
Metro-Goldwyn-Mayer Studios, Inc. v. Grokster, Ltd., 9th Circuit Court of Appeals, 19 August 2004.

References

Agger, B. (2004). The Virtual Self: A Contemporary Sociology. Oxford: Blackwell.
Anderson, M. (2004). Spamming for Dummies. The Register, 27 July.
BBC (2003). Spammers and virus writers unite. BBC News Online, 30 April.
Brightmail (2002). Slamming Spam.
Clarke, R. (2004). Technology, Criminology and Crime Science. European Journal on Criminal Policy and Research, 10(1), 55-63.
Crawford, A. (2000). Situational Crime Prevention, Urban Governance and Trust Relations. In A. von Hirsch, D. Garland and A. Wakefield (Eds), Ethical and Social Perspectives on Situational Crime Prevention, 193-213. Oxford: Hart Publishing.
Critchley, T.A. (1978). A History of the Police in England and Wales. London: Constable.
Edwards, L. (2004). Code and the law: the next generation. Paper given at the LEFIS workshop ‘Lessig’s code: lessons for legal education from the frontiers of IT law’, Queen’s University, Belfast, 24-25 July. LEFIS.
Ermert, M. (2004). Good Spam: Bad Spam. The Register, 5 February.
Foucault, M. (1983). Afterword: The Subject and Power. In H. Dreyfus and P. Rabinow (Eds), Michel Foucault: Beyond Structuralism and Hermeneutics, 2nd edition, 208-226. Chicago: University of Chicago Press.
Garland, D. (2000). The Culture of High Crime Societies: Some Preconditions of Recent “Law and Order” Policies. British Journal of Criminology, 40(3), 347-375.
Garland, D. (2001). The Culture of Control. Oxford: Oxford University Press.
Gaudin, S. (2004). U.S. Sending More Than Half of All Spam. Internetnews.com, 1 July.
Geer, D. (2004). The Physics of Digital Law. Plenary speech at the Digital Cops in a Virtual Environment conference, Yale Law School, 26-28 March. Information Society Project.
Greenleaf, G. (1998). An endnote on regulating cyberspace: architecture vs. law?. University of New South Wales Law Journal, 21(2) (reproduced in D.S. Wall (Ed), Cyberspace Crime, 89-120. Aldershot: Ashgate/Dartmouth, 2003).
Haggerty, K. and Ericson, R. (2000). The Surveillant Assemblage. British Journal of Sociology, 51(4), 605-622.
Hughes, G., McLaughlin, E. and Muncie, J. (2001). Teetering on the edge: the futures of crime control and community safety. In G. Hughes, E. McLaughlin and J. Muncie (Eds), Crime Prevention and Community Safety: Future Directions. London: Sage.
Innes, M. (2001). Control Creep. Sociological Research Online, 6(3).
IPTS (2003). Security and Privacy for the Citizen in the Post-September 11 Digital Age: A Prospective Overview (EUR 20823).
Katyal, N.K. (2003). Digital Architecture as Crime Control. Yale Law Journal, 112, 2261-2289.
Kuchinskas, S. (2004). Think Globally, Block Locally. Internetnews.com, 29 July.
Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.
Lessig, L. (2001). The Future of Ideas: The Fate of the Commons in a Connected World. New York: First Vintage.
Levi, M. and Wall, D.S. (2004). Technologies, Security and Privacy in the post-9/11 European Information Society. Journal of Law and Society, 31(2), 194-220.
Leyden, J. (2003). Spam epidemic gets worse. The Register, 3 December.
Leyden, J. (2004). IBM dissects the DNA of spam. The Register, 23 August.
Manning, P.K. (1978). The Police: Mandate, Strategies, and Appearances. In P. Manning and J. Van Maanen (Eds), Policing: A View from the Street, 7-32. New York: Random House.
Marx, G.T. (2001). Technology and Social Control: The Search for the Illusive Silver Bullet. International Encyclopaedia of the Social and Behavioral Sciences. Amsterdam: Elsevier.
Mitchell, W. (1995). City of Bits: Space, Place, and the Infobahn. Cambridge, MA: MIT Press.
Mitnick, K. and Simon, W.L. (2002). The Art of Deception: Controlling the Human Element of Security. New York: John Wiley and Sons.
Newman, G.R. and Clarke, R.V. (2003). Superhighway Robbery: Preventing E-commerce Crime. Cullompton: Willan Publishing.
Pew (2004). The Can-Spam Act has not helped most email users so far. Pew Internet Project data memo, March.
Reiner, R. (2000). The Politics of the Police, 3rd edition. Oxford: Oxford University Press.
Room, S. (2003). Hard-core spammers beware?. New Law Journal, 28 November, 1780.
Savona, E. and Mignone, M. (2004). The Fox and the Hunters: How IC Technologies Change the Crime Race. European Journal on Criminal Policy and Research, 10(1), 3-26.
Smith, R.G., Grabosky, P.N. and Urbas, G. (2004). Cyber Criminals on Trial. Cambridge: Cambridge University Press.
Sophos (2004a). Sophos outs ‘dirty dozen’ spam producing countries: Anti-spam specialist maps the spam world. Sophos press release, 26 February.
Sophos (2004b). Sophos reveals latest ‘Dirty Dozen’ spam producing countries: Anti-spam specialist reveals the biggest exporters of junk email. Sophos press release, 24 August.
Toyne, S. (2003). Scam targets NatWest customers. BBC News Online, 24 October.
Wall, D.S. (1998). The Chief Constables of England and Wales: The Socio-legal History of a Criminal Justice Elite. Aldershot: Dartmouth.
Wall, D.S. (2002). DOT.CONS: Internet Related Frauds and Deceptions upon Individuals within the UK. Final report to the Home Office (unpublished), March.
Wall, D.S. (2003). Mapping out Cybercrimes in a Cyberspatial Surveillant Assemblage. In F. Webster and K. Ball (Eds), The Intensification of Surveillance: Crime, Terrorism and Warfare in the Information Age, 112-136. London: Pluto Press.
Wall, D.S. (2004). Son of Spam: Crime Convergence in the Information Age. Paper presented at the annual conference of the American Society of Criminology, Nashville, Tennessee, 17-20 November.
Wall, D.S. (2005a). Surveillant Internet technologies and the growth in information capitalism: Spams and public trust in the information society. In R. Ericson and K. Haggerty (Eds), The New Politics of Surveillance and Visibility. Toronto: University of Toronto Press.
Wall, D.S. (2005b). The Internet as a Conduit for Criminals. In A. Pattavina (Ed), Information Technology and the Criminal Justice System, 77-98. Thousand Oaks, CA: Sage.
Wall, D.S. (2005c). Policing Cybercrime: Situating the Public Police in Networks of Security in Cyberspace. Police Practice and Research: An International Journal (forthcoming).
Wall, D.S. (2006). Cybercrimes. Cambridge: Polity Press (forthcoming).
Wood, P. (2004). A Spammer in the Works: Everything you need to know about protecting yourself and your business from the rising tide of unsolicited ‘spam’ email. MessageLabs white paper, April.
Yaukey, J. (2001). Common sense can help you cope with spam. USA Today, 19 December.