
Technology Analysis & Strategic Management, 2015 http://dx.doi.org/10.1080/09537325.2015.1012057

Computer capacity utilisation – a multilevel perspective on decades of decline Arne Martin Fevolden†∗


Nordic Institute for Studies in Innovation, Research and Education, PB 5183 Majorstuen, Oslo NO-0302, Norway

∗Emails: [email protected]; [email protected]
†Current address: Centre for Technology, Innovation and Culture, PO Box 1108, Blindern, Oslo N-0317, Norway

This article uses a multilevel perspective to explain why there has been a dramatic decline in the utilisation of computer-processing power during the last two to three decades. It identifies that computers have been used in two different ways – either as single-user systems, where each user received his or her own computer, or as time-sharing systems, where each computer was shared by several users. It finds that the time-sharing systems made considerably better use of the computers’ resources than the single-user systems and that the utilisation of computer-processing power began to decline as single-user microcomputers began to replace time-shared minicomputers and mainframes. The article argues that this development contradicts much of the recent research on capacity utilisation, but that this contradiction can be explained by analysing the development as a transition process with multiple levels of interaction.

Keywords: multilevel perspective; capacity utilisation; computer industry; systems architecture

1. Introduction

Companies in capital-intensive industries need to maintain a high level of capacity utilisation to remain viable (Chandler and Hikino 1990; Hughes 1983). Only by dividing their high fixed costs of capital investments over a substantial amount of output can these companies keep their unit costs low and their prices competitive. As a consequence, they have ensured that capital goods as varied as factories and elevators, airport runways and telecommunications networks are used in a way that makes the most of their available capacity (Nightingale et al. 2003). Nevertheless, there is one capital good that has evolved the opposite way – namely computers. The utilisation of computer-processing power has in the period from the 1960s until the 2000s fallen from around 60% to less than 5%.1 This decline contradicts what most existing research on capacity utilisation would predict (Davies 1996; Nightingale et al. 2003) and represents a tremendous waste of capital. By simply increasing computer capacity utilisation from the current 5% to the 1960s average of 60%, we would have multiplied the available processing power by 12 and made 11 out of 12 computers redundant. Although processing power is growing rapidly, our low level of computer capacity utilisation means that we invest a lot of resources in maintaining a stock of computers that is much larger than what is necessary. Considering that many companies today are trying to improve the utilisation of their computers by reintroducing old technologies under new labels such as ‘virtual machines’ and ‘cloud computing’, the decline in computer capacity utilisation also appears as an historical paradox. This paper seeks to explain why this decline happened.

This article will try to answer why computer capacity utilisation has declined by applying a ‘multilevel perspective’ (MLP; Geels 2002; Rip and Kemp 1998). It will argue that the decline in capacity utilisation can best be understood as a ‘transition process’, where an information-processing regime based on centralised architectures (e.g. time-shared minicomputers and mainframes) was replaced by a regime based on decentralised architectures (e.g. personal computers). It will also maintain that, although the decline in computer capacity utilisation first gained momentum during the late 1980s and early 1990s, this transition process was largely brought about by socio-technical events that took place in the 1960s and 1970s.

The MLP was chosen as this article’s theoretical perspective because it is especially suited to explaining long-term technological changes (Geels 2002) and has already been successfully applied to transition processes of a similar magnitude to the one we will study in the computer industry – such as the transitions from sailing ships to steamships and from horse-drawn carriages to automobiles (Geels 2002, 2005). Nevertheless, this article does not only want to apply the MLP; it also wants to contribute to the perspective, by showing how it can be used to explain not only technological progress but also technological fragility and regression. The article will argue that transitions are violent processes that bring about progress on a broad front, but that in the ensuing chaos valuable features of a technology can also be lost. The article follows a tradition that Nelson and Winter (1982, 46) called ‘appreciative theorising’ – of exploring a subject through detailed accounts rather than formal representations.

The article will be organised in the following way: in Section 2, the article will discuss how theories of capacity utilisation can be understood in the context of computing. In Section 3, it will present a case study on the emergence, evolution and eventual demise of time-sharing. In Section 4, the article will provide an answer to why computer capacity utilisation has declined by applying the MLP to the case study, and Section 5 will discuss some important theoretical findings that can be derived from this exercise.

2. Computers, architectures and capacity utilisation

Research on capacity utilisation has mostly been conducted on industries making use of capital goods quite different from computers (see, for example, Chandler and Hikino 1990; Davies 1996; Hughes 1983; Nightingale 2000; Nightingale et al. 2003; Tether and Metcalfe 2003). It is, therefore, important to point out how computers differ from ‘conventional capital goods’ – such as factories, airports and power plants – and how these differences influence an analysis of their capacity utilisation.

First, computer capacity utilisation cannot be measured directly. Unlike a car factory, for instance – where capacity utilisation can be calculated as the difference between the actual and the maximum number of cars produced – computers provide computational services that vary constantly with the users’ various requests and offer no consistent unit of measurement. A common solution to this problem has been to measure capacity utilisation at the component level – as the difference between the actual and the maximum number of instructions that the central-processing unit (CPU) carries out (usually measured in microinstructions per second). The conventional wisdom has been that the utilisation of the CPU works as a good proxy for the system’s overall utilisation (Datamation 1966c; Newsweek 2002).

Second, computers differ from conventional capital goods by providing services that can deteriorate as utilisation increases. When the processing load increases, computers need to schedule their tasks and their response time increases. For computer users, this means that the computer ‘slows down’ and becomes less responsive. Nevertheless, it is possible to increase utilisation without reducing the responsiveness of the computer, by exploiting what the computer pioneer, Licklider ([1960] 1988, 135), in his influential essay ‘Man–Computer Symbiosis’, referred to as the ‘speed mismatch between men and computers’. Licklider found that a person’s leisurely interaction with the computer and need for sleep and pauses left the computer standing idle for many longer and shorter time intervals and that this unused capacity could be used for other purposes, without affecting the responsiveness of the computer.

Third, computers differ from many conventional capital goods by having their potential level of capacity utilisation largely determined by their architecture. A computer system’s primary source of unused capacity is the ‘speed mismatch between men and computers’, which leaves the computer system standing idle waiting for input from the users. Nevertheless, this unused capacity can only be meaningfully used if the computer system is able to transfer the computer power from users that squander it to users that can make good use of it, and to do that the computer system needs a ‘systems architecture’ that enables it to effectively distribute its resources between multiple users.

There is a range of different systems architectures, and they vary in their potential level of capacity utilisation (Englander 2000). Systems architectures can be thought of as a continuum, running from ‘true’ decentralised architectures – where all the computer resources are dedicated to a single user – to ‘true’ centralised architectures – where all of the computer resources are shared between vast numbers of users. The ‘true’ decentralised architectures offer no means of transferring unused computer power between users and will, therefore, have a low potential level of capacity utilisation, while the ‘true’ centralised architectures have the potential to redistribute most unused capacity and, therefore, have a high potential level of capacity utilisation. As we can see from Figure 1, several of the familiar architectures can be placed between these extremes. To the left on the decentralised side, we find single-user systems (personal computing) and client/server architectures, where most or almost all of the computer systems’ resources are dedicated to individual users. The main difference between single-user and client/server architectures is that the client/server architectures allow the single-user computers to share a computer server, where files and applications can be stored and accessed. Nevertheless, the client/server systems only have a slightly higher level of capacity utilisation than single-user systems because the servers control only a small part of the system’s resources and most of the processing is carried out by the front-end computers.
To the right on the centralised side, we find what we today call virtual machine and cloud computing architectures – and what was previously called time-sharing and computer utility architectures – where most or almost all of the computer systems’ resources can be shared between multiple users. The main difference between virtual machines (time-sharing) and cloud computing (computer utilities) is size. Cloud computing (computer utilities) usually involves computers that are remotely located, with a large number of users accessing them over the telephone lines or the Internet, and has a larger potential for distributing computer power than virtual machines, which frequently serve a smaller group of users. (See, for instance, Englander (2000) for more information about computer architectures.)


Figure 1. Four different systems architectures: single-user systems (desktop computers), distributed processing (client/server systems), time-sharing systems/virtual machines and computer utilities/cloud computing. Note: The shading indicates where most of the system’s resources (processing power, memory and storage) are located – either in the front row as terminals or desktop computers (that are completely dedicated to each user) or the back row as centralised computers (that users must share with other users).

In reality, it is difficult to make exact measurements of computer capacity utilisation, since capacity utilisation is as much a social as a technical issue and it is hard to find out exactly how much use people make of their computers. Nevertheless, there are reasons to believe that there is a strong correlation between computer architecture and computer capacity utilisation. Not only is it evident from the conceptual discussion above that users of centralised architectures will make more effective use of their computer resources than users of decentralised architectures do, but the few empirical studies that have been conducted also seem to corroborate this view.2
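As a rough illustration of this correlation – a toy model with assumed numbers, not a calculation taken from the article or its sources – the sketch below contrasts the CPU utilisation of dedicated single-user machines with that of a single time-shared machine serving the same users. It rests on Licklider’s ‘speed mismatch’: each user is assumed to keep the CPU busy only a small fraction of the time, and a centralised architecture can hand the idle cycles left by one user to another.

```python
# Illustrative toy model only: the 5% per-user activity figure and the user count
# are assumptions chosen to echo the utilisation figures cited in this article,
# not measurements.

def single_user_utilisation(active_fraction):
    """Dedicated machine per user: utilisation equals that user's own activity."""
    return active_fraction

def time_shared_utilisation(n_users, active_fraction):
    """One machine shared by n_users: idle cycles left by one user can be taken
    up by another, so utilisation approaches the summed demand (capped at 100%)."""
    return min(1.0, n_users * active_fraction)

if __name__ == "__main__":
    active = 0.05   # assumed: each user keeps the CPU busy 5% of the time
    users = 12      # assumed number of users sharing one time-shared machine
    print(f"Single-user architecture: {single_user_utilisation(active):.0%} per machine")
    print(f"Time-shared architecture ({users} users): {time_shared_utilisation(users, active):.0%}")
```

On these assumptions, pooling twelve lightly loaded users on one machine lifts utilisation from about 5% to about 60%, which is roughly the gap between the modern single-user figure and the 1960s time-sharing figure cited in the introduction.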

3. Case study

When interactive computing – the type of computing we engage in when we are playing a computer game, writing a report in a word-processing programme or browsing for information on the Internet – first emerged as a feasible form of computing in the early 1960s, it came bundled with a system that ensured a high level of capacity utilisation, called time-sharing.3 The time-sharing system was an architecture that allowed for a high level of capacity utilisation by enabling several users to simultaneously share a single computer and ensured that the processing power that one user might have squandered was made available and put to good use by another. The system became so successful that by the late 1970s it had become an indispensable part of the information-processing environment of large corporations and government institutions and had given rise to so-called ‘computer utility’ companies that began to provide time-sharing services to households and small businesses over the telephone lines.

Nevertheless, in the 1980s and 1990s, two events conspired to bring down computer capacity utilisation: (i) the microcomputer emerged and adopted the single-user system – an extremely wasteful form of architecture where an entire computer was dedicated to each user; and (ii) the single-user microcomputers began to replace the time-shared minicomputers and mainframes and thereby brought down the economy-wide utilisation of computers. Both of these events seem at first sight paradoxical, and this case study will explain them as a three-part ‘transition’ process. In the first part, we will look at how the time-sharing system became a ‘socio-technical regime’, and how this regime succeeded in providing information-processing services to large corporations and government institutions through time-shared minicomputers and mainframes, but failed to provide the same level of services to households and small businesses through computer utilities. In the second part, we will see how the pent-up demand among households and small businesses for information-processing services created a ‘niche’ in which the microcomputers could evolve, but also how this niche again shaped the microcomputers’ hardware and software in a way that forced the microcomputer companies to abandon the time-sharing system in favour of an architecture based on single-user computing. Finally, in the third part, we will see that the same ‘niche’ enabled the microcomputer companies to adopt common hardware and software standards, which in turn allowed them to benefit from economies of scale, which would eventually prove so powerful that the microcomputers were able to supplant the minicomputers and mainframes – despite their lower level of capacity utilisation.

3.1. Time-sharing on minicomputers and mainframes (1960s–1980s)

In an article examining the use of the term ‘time-sharing’ that appeared in Datamation, March 1966, Robert A. Colilla concluded that:

[f]or a group interested in advancing the art of communication between men and machine and for one concerned with the accurate retrieval of information, we have not performed well in communicating among ourselves. Indeed, we almost seem to have invested more effort in developing catchy, semiabsurd names than we have in developing clear, meaningful ones. (Datamation 1966b, 51)

What Colilla addressed in his article was a conceptual crisis within the computer community that originated with the new time-sharing systems. By the early 1960s, a number of major time-sharing projects had been completed and several more were underway (the SAGE, MIT and Dartmouth College time-sharing projects discussed below). These projects brought with them a host of new technologies and captivating visions of what computing was about to become and, perhaps, an even greater sense of confusion. Even today, with the benefit of hindsight, it is easy to find these early developments bewildering. Time-sharing did not appear as a single, homogeneous entity; rather it appeared as three related but distinct versions of the same system – ‘the online transaction-processing system’, ‘special-purpose time-sharing’ and ‘general-purpose time-sharing’.

The first version of time-sharing – ‘the online transaction-processing system’ – emerged out of SAGE, a colossal computer and radar-based air defence system, developed during the 1950s and 1960s by the US military to protect the country against a potential nuclear bomb attack by the Soviet Union (Pugh 1996, ch. 15). The other two versions of the system – the ‘special-purpose’ and ‘general-purpose’ time-sharing systems – emerged out of computer projects at MIT and Dartmouth College. What separated these three versions of time-sharing was the degree of flexibility that the systems offered. The online transaction-processing system allowed its users access to only a single programme (or group of related programmes); special-purpose time-sharing, to a single programming language; and general-purpose time-sharing, to whatever programme or programming languages they might desire. Nevertheless, flexibility came at a price – increased flexibility caused technical instabilities, which caused the system to crash or perform poorly, which again limited the number of simultaneous users that the system could accommodate (Norberg, Freedman, and O’Neill 1996, ch. 2).


The trade-off between flexibility and size within the time-sharing system influenced the commercialisation of the system during the 1960s and 1970s. The mainframe companies with their large computers offered the online transaction-processing system to large corporations and government institutions, and brought the system into varied applications such as banking, ticket reservation, air-traffic control and stock market quotations. The minicomputer (and some mainframe) companies with their smaller computers were responsible for introducing the general- and the special-purpose time-sharing systems in the scientific and engineering markets, providing machines for simulation, statistics, programming and product design (O’Neill 1992).

Both the minicomputer and mainframe companies embarked on an effort to provide large-scale special- and general-purpose time-sharing. They wanted to provide large-scale special- and general-purpose time-sharing not only because larger systems provide a higher level of capacity utilisation, but also because larger computers, at the time, benefitted from considerable economies of scale. These scale economies were known as Grosch’s law, a law that held that a computer twice the size of another (costing twice as much) could marshal four times its computer power (Ceruzzi 1998, 177–179). The minicomputer and mainframe companies went about providing large-scale special- and general-purpose time-sharing in two different ways. The minicomputer companies (e.g. DEC, Data General and Wang Laboratories) with their small machines supported special- and general-purpose time-sharing while they tried to expand the size of their systems; and the mainframe manufacturers (IBM, General Electric and Honeywell) with their large machines supported the online transaction-processing system while they tried to introduce systems that were less constrained. The minicomputer and mainframe companies managed during the late 1960s and early 1970s to remove many of the impediments that held back the time-sharing system. The minicomputer companies gradually improved the size of their systems, increasing the average number of supported users from 16–24 in the beginning to well over 260 in the late 1970s.4 The mainframe manufacturers managed in the early 1970s after several spectacular failures – most notably, GE’s model 645 and IBM’s 360/67 – to introduce machines that supported special- and general-purpose time-sharing (Ceruzzi 1998, 154–158).

There was also another type of company that helped commercialise the time-sharing system, called ‘computer utilities’ (Campbell-Kelly and Garcia-Swartz 2008; Norberg, Freedman, and O’Neill 1996, ch. 2). The computer utilities were time-sharing service providers that bought or rented computers from the mainframe and minicomputer companies and sold computer services by enabling companies, for a fixed fee, to connect their terminals to their computers over the telephone lines. They started to appear in the second half of the 1960s and provided at first access to special-purpose time-sharing services and later began to provide online transaction-processing services, such as online library searches and stock market quotations. The computer utilities planned to use large computers to solve the computational problems of small firms that relied on relatively weak computers and profit from the efficiency differentials that Grosch’s law imposed between computers of different sizes.
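In formula terms, Grosch’s law is commonly summarised as computing power growing with the square of cost; the rendering below is a standard textbook formulation rather than one taken from this article’s sources, with symbols introduced purely for illustration.

```latex
% Grosch's law (standard formulation; symbols introduced here for illustration):
% P = computing power, C = cost of the machine.
P \propto C^{2}
\quad\Longrightarrow\quad
\frac{P_{2}}{P_{1}} = \left(\frac{C_{2}}{C_{1}}\right)^{2},
\qquad\text{so } C_{2} = 2C_{1} \;\Rightarrow\; P_{2} = 4P_{1}.
```

Under such a relation, consolidating many users on one large machine was cheaper per unit of computing than giving each user a small machine, which is why both the systems vendors and the computer utilities pushed towards ever larger installations.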
By exploiting differences in working hours across time zones and providing services both to companies during the day and households during the evening, they could ensure that their computers were not standing idle over long periods of time (Baran 1971).

The computer utilities never quite lived up to the expectation that some computer pioneers had of them turning into something akin to computational power plants, which would cover the computational needs of towns and cities. The computer utilities relied on the national carriers to relay their services over the telephone lines, but the national carriers complicated their transactions by charging steep prices for long-distance calls and making light of their attempts to improve data-transfer standards. The national carriers did not see the computer utilities as potentially important customers and, therefore, found no reason to adopt any special treatment for them, and they were afraid to enter the business themselves because of potential anti-trust litigation (Datamation 1966a, 1967, 1968, 1969). The computer utilities also found that their business got squeezed between the evolution of the time-sharing system and the diffusion of integrated circuits: when Grosch’s law held sway, the time-sharing system did not allow for large enough machines to support the computer utilities, and when the time-sharing system allowed for large enough machines, Grosch’s law no longer held sway (more on this in Section 3.3) (Campbell-Kelly and Aspray 1996, 215–219). Furthermore, the computer utilities had great difficulties handling load cycles (most likely due to the complexity of their operations, which involved having customers log on to their machines at irregular intervals). Their systems frequently clogged up and broke down when too many clients used their services at the same time, causing more than a little frustration among their clients. They also received many complaints from their clients that their services were both exceedingly difficult to use and terribly crude (Datamation 1977).

Without the aid of Grosch’s law, the computer utilities were forced either to provide query-based services using online transaction processing or to compete head-to-head with minis and mainframes by providing special- and general-purpose time-sharing services. They thrived in the query-based market because economics favoured maintaining large databases on a single machine. But against the minis and mainframes, they did not fare well. The computer utilities’ special- and general-purpose time-sharing services proved so poor, unreliable and expensive that most firms found that they were better off buying or renting a minicomputer or mainframe rather than renting time on the computer utilities’ machines. The computer utilities might, if they had been given more time, have been able to solve their load cycle problems, improve their services, resolve their disputes with the national carriers and, then, perhaps, extend their services across time zones and into people’s homes. But they were given no more time. They had, by 1977, when Apple introduced its ground-breaking Apple II, only a meagre 20,000 mostly discontented customers (Datamation 1977). They had done too little, too late to prevent the microcomputers from superseding them.

3.2. Time-shared versus single-user microcomputers (1970s–1980s)

In 1971, Intel laid the foundations for the microcomputer revolution when it invented the microprocessor. Intel invented the microprocessor to serve as the logic unit in a calculator that was under development at the Japanese company, Nippon, but it soon realised that the microprocessor had wider applications and could be embedded in other electronics products, among them, computers (Jackson 1997, ch. 8). In the mid-1970s, several companies – such as R2E and MITS – began to build computers based on Intel’s microprocessors. These systems were initially more computers in theory than in practice, but they would in the next two years give way to successors that resembled the personal computer as we know it today, complete with an operating system, programming languages, connection ports for the video monitor, keyboard and printer, marketed and sold through consumer electronics stores (Ceruzzi 1998, 226–241).

Nevertheless, the microprocessor did not only bring about the personal computer (single-user microcomputer); it also gave rise to another system that shared many traits with the minicomputers and mainframes – the time-shared microcomputers. When the Indiana-based computer company, Technical Systems Consultants, declared in an advertisement in Byte, August 1977, that it had created a system that allowed for ‘[f]our simultaneous users, all running BASIC, all independent, and all on the same Micro’ (Byte 1977), it was only the first in a long line of companies to announce new microprocessor-based time-sharing systems. Although companies such as Tandy, Apple and Commodore experienced great success selling single-user microcomputers to hobbyists and households, other companies such as Cromemco, Ohio Scientific and Altos Computer Systems experienced similar success during the late 1970s and early 1980s by providing time-shared microcomputers to small companies and institutions. The time-shared microcomputer companies sold machines that competed against low-end minicomputers and mainframes (Fevolden 2013). The time-shared microcomputer companies managed to some extent to combine the ‘best of both worlds’. They took advantage of the capacity utilisation afforded by the time-sharing system and the inexpensive processing power offered by the microprocessors (more about this in the next section). Nevertheless, by the mid-1980s, virtually all the time-shared microcomputers had disappeared.

There were several reasons why the time-shared microcomputers failed. One reason was that the time-shared microcomputer companies met strong competition from the minicomputer and mainframe companies, which provided machines with similar performance and software applications, while the single-user microcomputer companies only met moderate competition from the computer utilities, which struggled to provide reliable time-sharing services. This enabled the single-user microcomputer companies to expand effortlessly in the hobbyist and household market, while the time-shared microcomputer companies had to fight their way into the small business market. Over time, this contributed to making the single-user microcomputers the dominant platform in the microcomputer segment.

Another reason was a temporary imbalance in hardware costs, which was caused by the rampant rate of innovation in the semiconductor industry that made microprocessors and memory cheap while hard disks and printers remained expensive. Industry observers, such as Tom Williams, argued during the early 1980s that time-sharing made less sense after the semiconductor companies had made microprocessors and memory cheap and readily available. They claimed that the most feasible systems were now those where the users shared only what had become the most expensive parts of the system – hard drives and printers (InfoWorld 1980). This imbalance made networked/client–server-based single-user microcomputer systems a competitive alternative to time-shared microcomputers in the small business market.

A third reason was that IBM locked the time-shared microcomputer out of the dominant operating system standard when it entered the microcomputer industry. When IBM introduced its first single-user microcomputer, with an extensive marketing campaign, in August 1981, it received so many orders that it took almost two years before it had managed to ramp up production sufficiently to meet demand. Although IBM planned to introduce its microcomputer with an upgraded version of the popular operating system CP/M, it ended up – through a set of unfortunate circumstances – asking Microsoft to write a new operating system for its computer (Cringely 1996, 128–130, 164). Had IBM chosen CP/M, it would have taken part in a well-established operating system standard that was compatible with a wide range of single-user and time-shared microcomputers.
Instead, it was left with a brand new operating system – called PC-DOS – that was at first only compatible with IBM’s own single-user microcomputer. Although Microsoft did rewrite the operating system so it could run on other single-user microcomputers (MS-DOS), it never introduced a time-sharing version, because it had for some time been developing another operating system specifically designed for the time-shared microcomputers, called Xenix. Microsoft had a long-term goal of merging Xenix and DOS, but developments in the microcomputer industry would prevent it from accomplishing this task (Campbell-Kelly 2003, 240–242). In 1983, the microcomputer software industry began to mature, which brought about shakeouts and a high level of company mortality (Campbell-Kelly 2003, ch. 8). Microsoft with its IBM-backed operating system standard did fine. It experienced such success with DOS that Digital Research – its main rival and the supplier of the dominant multiuser operating system, MP/M – was in terminal decline by the summer of 1984 (Campbell-Kelly 2003, ch. 8). With its demise (and probably that of a whole host of other multiuser operating system suppliers), the time-shared microcomputer companies stood without a viable operating system. As a result, the microcomputer became a single-user system.


3.3. Single-user microcomputers versus time-shared minicomputers and mainframes

Sales figures before 1987 seemed to indicate that microcomputers, minicomputers and mainframes would live peacefully side by side. Nevertheless, beginning in 1987, even the largest corporations changed their policies; they no longer bought large, centralised systems; they found that they got both better computers and better software by purchasing single-user microcomputers. They used these personal computers as replacements for the terminals that connected users to minis and mainframes, transferring, in the process, most of the corporations’ computational workload to them (Cortada 1996, 176–180). By the late 1990s, personal computers dominated computing everywhere from households to small businesses, from large corporations to government institutions. They had not, however, eliminated the time-sharing system; the system still existed, though, under a new name, client/server architecture. Nevertheless, the single-user microcomputers had taken over the majority of the computational workload and brought computer capacity utilisation down to about 5%.

The reason why the microcomputer companies were able to supplant the minicomputer and mainframe companies had to do with changes in the economies of software development and integrated circuit production. During the 1970s and 1980s, the economics governing both hardware production and software development changed markedly. Whereas the production of CPUs and memory had traditionally relied extensively on workers inserting and wiring up the circuitry by hand, it would later rely on ever larger chips that had the circuitry etched into them with photolithography. As a result, computers no longer improved their price/performance ratio by growing larger (Grosch’s law); they improved it by using standardised components that were produced in large quantities (Braun and Macdonald 1982; Ceruzzi 1998, 177–179). In a similar way, software developed its own scale economies. As computers grew more powerful and programming languages became more advanced, software became increasingly complex and costly to develop, which in turn made it feasible to divide development costs over ever larger user-groups.5

Both these developments would benefit the microcomputer companies. The microcomputers used from the very beginning the same processors and memory chips as other electronic products. Although they adopted the newest and most expensive components, they still benefited from the prolonged amortisation of circuit designs and equipment that came from letting components trickle down into products that were less sophisticated, such as calculators, traffic lights and microwave ovens (Ceruzzi 2002). Nevertheless, the microcomputer companies made their greatest stride towards exploiting these scale economies in the mid-1980s when they adopted the IBM-compatible PC with the Intel iAPX86-series processor and MS-DOS operating system as an almost industry-wide standard. The new standard allowed Intel and Microsoft to exploit the extraordinary scale economies that existed in semiconductor production and software development and to provide the microcomputer companies with far cheaper processors and a far more sophisticated operating system. The new standard also created a common platform for software applications and led to a greater concentration of software companies (Campbell-Kelly 2003, ch. 5). The result was that the microcomputers came from the outset with processors that had at least a five to six times better price/performance ratio than the processors used by the minicomputers and mainframes. They would continue to increase their lead until their processors had, by the early 1990s, a price/performance ratio that exceeded that of the processors used by minicomputers and mainframes by an astonishing 20–40 times (McKenney, Mason, and Copeland 1995, 16–17). The microcomputers also benefited from comparable advantages in memory as Japanese semiconductor companies began in the early 1980s to flood the market with cheap DRAM chips (Langlois and Steinmueller 1999). Nevertheless, the microcomputers would perhaps gain their greatest advantage over the minicomputers and mainframes in software. The microcomputers came from the mid-1980s on with operating systems that featured easy-to-use graphical user interfaces (Campbell-Kelly 2003, 246–251) and much anecdotal evidence indicates that their applications were already from the early 1980s more capable and much easier to use than the applications that came with the minicomputers and mainframes (Cringely 1996, 76).

Why did the microcomputer companies manage to exploit these economies and not the minicomputer and mainframe companies? The answer was that the minicomputer and mainframe companies sold a capital good, whereas the microcomputer companies sold a consumer product (Campbell-Kelly 2003, ch. 1). This resulted in low switching costs in the microcomputer markets and high switching costs in the minicomputer and mainframe markets. The microcomputer companies initially sold most of their computers in the household market. They responded to these market conditions by providing easy-to-use computers and packaged software that required little support and maintenance. This meant that microcomputer users could easily switch between computers, operating systems and applications software whenever they upgraded their computer system. The minicomputer and mainframe companies, on the other hand, sold most of their computers to large corporations and government institutions. They responded to their needs by providing custom-written programmes and sophisticated software products, which were difficult to use and required a substantial amount of training. This meant that minicomputer and mainframe users found it difficult to switch from one supplier to another without retraining their staff and rewriting their software. The result was that it was much easier to establish common standards in the microcomputer market than in the minicomputer and mainframe markets, and the microcomputer companies began to benefit from a virtuous cycle, where improvements in software led to improvements in hardware and improvements in hardware led to improvements in software.
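The two scale economies described in this section can be put in simple per-unit terms; the expression below is an illustrative sketch of the reasoning, with symbols introduced here rather than taken from the article.

```latex
% Illustrative sketch (symbols introduced here, not taken from the article):
% F = fixed development or design cost, N = number of users or units sold,
% c = (near-)constant marginal cost per software copy or per chip.
\text{average cost per user or unit} \;=\; \frac{F}{N} + c ,
\qquad\text{which falls towards } c \text{ as } N \text{ grows.}
```

Because the single-user microcomputers standardised on common hardware and software and sold into a mass market, their N was far larger than that of the minicomputers and mainframes, which is the sense in which these scale economies could outweigh the microcomputers’ poor capacity utilisation.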

4. Discussion

We will in this section return to our research question – why computer capacity utilisation has declined – and adopt the MLP to understand the developments outlined in Section 3. The MLP is a theoretical approach that draws on evolutionary economics and socio-technical systems and that has been developed to analyse processes of technological transition (Geels 2002; Rip and Kemp 1998). The MLP portrays transition processes as an interaction between three levels of analysis. At the heart, we find the ‘socio-technical regime’ (meso-level), which consists of the embedded practices and routines that make up the part of the economy relevant for the analysis (e.g. petrol-based vehicles); above, we find the ‘socio-technical landscape’ (macro-level), which consists of broader societal and economic structures or trends that can put pressure on the regime (e.g. climate change). Below, we find ‘niche innovations’ (micro-level), which are protected spaces where radically new innovations can find shelter from competition and potentially develop and replace or reconfigure the regime (e.g. electric-powered vehicles). Since many excellent accounts of the MLP already exist, among others in this journal (Geels 2002, 2005), we will not go into the details here, but rather focus on applying the perspective.

Since the MLP is a heuristic framework, its levels are not predefined and need to be specified and adapted to the phenomena that are being investigated. In our analysis, we will define the core of the regime as fully assembled and functional computer systems and the primary commercial actors as the dominant computer systems providers (initially the minicomputer and mainframe companies; later the microcomputer companies). As part of the regime, we also find other actors that take on more of a ‘supporting role’ in our analysis – such as policy-makers that serve and regulate the industry and other commercial actors that work within related fields, such as telecommunications, integrated circuits and software development. In terms of the other levels, we will follow the same line of thinking and define the niche level as sheltered spaces where experimentation with radically new computer systems design can take place and the landscape level as broader developments that can affect the demand for information-processing services (e.g. economic growth). In the following analysis, we will follow the organisation of the case study and divide the analysis into three phases, where Phase 1 explains how the time-sharing regime emerged, Phase 2, why the microcomputer companies adopted the single-user architecture and Phase 3, why computer users replaced their time-shared minicomputers and mainframes with single-user microcomputers. In Figure 2, we see a depiction of the three phases and the main interactions between the various socio-technical levels.

Phase 1: Development of the time-sharing regime. The case study showed that the time-sharing system emerged out of niche-level military (SAGE) and computer science (MIT and Dartmouth College) projects and quickly became an important information-processing regime alongside batch processing. The time-sharing system was initially plagued by instabilities, but after the minicomputer and mainframe companies managed to introduce a range of incremental innovations, the system became stable and suitable for wide-scale implementation by private companies and government institutions. Nevertheless, the case study also showed that the time-sharing system was developed unevenly due to ‘landscape’ and ‘regime’ effects. The landscape effect was the growing demand for computer services among smaller companies and households (which emerged as a result of economic growth and falling prices of computer components). The regime effect was the unsupportive behaviour of the telecommunications sector (charging steep prices for long-distance calls and neglecting to improve data-transfer standards) that prevented the computer utilities from expanding their computer services to small companies and private citizens. This pent-up demand, again, opened up a niche for the microcomputers.

Phase 2: Microcomputers and choice of architecture. The case study showed how the microcomputer companies initially provided both single-user and time-shared microcomputers. Nevertheless, it revealed that the microcomputer soon became exclusively a single-user system due to ‘regime-level’ influences.
The most important of these regime-level influences was the uneven development of the time-sharing regime, where the time-shared minicomputers and mainframes thrived but the computer utilities struggled. This created a competitive environment for the time-shared microcomputers, which had to compete with minicomputers and mainframes, while it created a growing and relatively sheltered market for the single-user microcomputers. Nevertheless, it is important to emphasise that the single-user microcomputer segment itself was fiercely competitive – but this ‘internal’ competition seemed to propel the diffusion of single-user microcomputers, in contrast to the ‘external’ competition in the time-sharing segment, which seemed to limit it.


Figure 2. Multilevel analysis of computer architecture (layout inspired by Geels 2002). The figure maps the three phases onto the three levels – socio-technical landscape (exogenous context), socio-technical regime and niche innovations: (i) development of the time-sharing regime, in which the failure of the computer utilities created a pent-up demand for information-processing services among hobbyists and households; (ii) microcomputers and choice of architecture, in which the relative importance of the hobbyist and household market led the microcomputer companies to adopt the single-user architecture; and (iii) microcomputers versus minis and mainframes, in which standardisation on a common operating system and hardware led to a virtuous cycle for the microcomputer companies. Source: Geels (2002).

This regime-level influence was further accentuated when IBM entered the microcomputer niche with a single-user microcomputer and chose an operating system – (PC and MS) DOS – that did not support time-sharing. The result was a lock-in between the single-user system and the dominant operating system that prevented the time-shared microcomputer companies from taking advantage of much of the software development that happened in the software industry.

Phase 3: Microcomputers versus minicomputers and mainframes. The case study showed that the microcomputers began to replace the minicomputers and mainframes during the latter part of the 1980s. This transition pathway of ‘technological substitution’ (Geels and Schot 2007) was due to the same regime-level effects on the niche. The uneven development of the time-sharing regime forced the microcomputer into the household and small business market, a market that was much more dynamic and allowed the microcomputer companies to settle on common standards for software and hardware. These common standards allowed the single-user microcomputers to take advantage of scale economies, which were not only powerful enough to compensate them for their low level of capacity utilisation, but also enabled them to take on the minicomputers and mainframes.

In conclusion, we might say that the article has shown how niches can have mixed effects on niche innovations. Smith and Raven (2012) found that niches offer three types of protection – shielding, nurturing and empowerment. By ‘shielding’, Smith and Raven meant that the niches can protect the niche innovation from competitive pressures; by ‘nurturing’, that they can help the niche innovations become competitive in the marketplace; and by ‘empowerment’, that they can enable the niche innovations to challenge the existing regime. In our case study, we saw that the microcomputers benefitted from all three of these types of protection. Nevertheless, we also saw that these three types of protection also facilitated the adoption of less-than-ideal technological solutions that became an enduring part of the microcomputer through lock-in mechanisms. We saw that the same niche in which the microcomputers found protection from the minicomputers and mainframes also forced them to adopt the single-user system, and we saw that when the niche nurtured and empowered the microcomputers through standardisation and scale economies, it also created a lock-in between the microcomputer and the single-user system.


5. Conclusion

In this article, we have provided an explanation for why there has been a dramatic decline in the utilisation of computer-processing power during the last two to three decades. We have seen that computer capacity utilisation remained high up until the late 1970s, when large corporations and government institutions made use of time-shared minicomputers and mainframes and computer utility companies used the same types of computers to provide time-sharing services to households and small businesses over the telephone lines. Nevertheless, we identified two events, which took place during the 1980s and 1990s, that brought down computer capacity utilisation: (i) the microcomputer emerged and adopted the wasteful single-user system; and (ii) the single-user microcomputers began to replace the time-shared minicomputers and mainframes. The main argument in this article has been that these two events were contingent on a series of unintended and accidental developments in the computer industry. Had the computer industry evolved in a slightly different way, these two events might not have taken place and computer capacity utilisation might have remained high. Had, for instance, market conditions been more favourable for the computer utilities or the time-shared microcomputers, today’s information-processing environment might have been based on centralised architectures such as cloud computing and virtual machines.

This article has also raised some questions that might be worth pursuing in future research. On the empirical side, this paper focused on the period up until the early 1990s, and it would be interesting to see further research on later attempts at raising capacity utilisation, such as ‘Grid computing’ and ‘Internet-based applications’. On the theoretical side, this paper has shown that the MLP can also be used to explain technological fragility and regression, and it would be interesting to see further research on how transition processes might favour some features of a technology while other features are discarded.

Acknowledgements

The author acknowledges the help he received from Edward Steinmueller, Andy Davies, Terje Grønning, Antje Klitkou and two anonymous referees.

Disclosure statement

No potential conflict of interest was reported by the author.


Notes

1. Measurements of computer capacity utilization are scarce and hard to find. The estimate from the 1960s is based on measurements conducted on MIT’s time-shared IBM 7094 in 1966 (Datamation 1966c); the estimate from today is based on more general assessments reported in Newsweek (2002). It is difficult to assess the economy-wide utilization of computers since computing happens on a wide range of platforms (from cell phones to computer servers). Nevertheless, personal computing devices (e.g. cell phones, tablets and laptops) still carry out the bulk of today’s processing jobs and it is, therefore, reasonable to assume that the economy-wide utilization of computers can only be slightly higher than 5%.
2. Measurements made in the 1960s on a time-shared IBM 7094 suggest that centralized architectures enabled users to make use of as much as 60% of a computer’s processing power (Datamation 1966c), while similar measurements reported in Newsweek suggest that decentralized architectures such as single-user computers prevent users from using much more than 5% of the computer’s resources (Newsweek 2002). In this article, we will, therefore, focus on the transition from centralized to decentralized architectures to explain why computer capacity utilization has declined.
3. This paper will focus exclusively on interactive computing – the most common and familiar form of computing today – and will have little to say about its counterpart, batch processing – a form of computing that was central early in the computer’s history, but that plays only a minor role in today’s information-processing environment. Interactive computing is defined by users interacting directly with the computer during program execution and covers activities as diverse as playing a computer game, writing a report in a word-processing program, and browsing for information on the Internet. It accounts for the vast majority of today’s computer use and is responsible for almost the entire decline in computer capacity utilization. Batch processing, on the other hand, is a form of computing where users first submit their jobs to a computer, then wait until the computer has stacked, scheduled and run through all of the jobs, before they set off to retrieve their results. Although batch processing makes good use of the computer’s capacity, it is limited to the type of jobs that can be both clearly defined in advance and subjected to lengthy response cycles, limitations that leave it unfit for all but a small set of today’s computational chores.
4. Based on advertisements appearing in Datamation during this period.
5. These economies of scale are not of the traditional sort that resides within a single company; rather they are of the sort that a whole industry or industry segment can benefit from. Therefore, they have in the literature been given names such as ‘external economies’ (Langlois 1992) and ‘increasing returns’ (Arthur 1989).

Notes on contributor

Arne Martin Fevolden is a Senior Researcher at the Nordic Institute for Studies in Innovation, Research and Education (NIFU) and a postdoctoral researcher at the TIK Centre at the University of Oslo. He holds a Ph.D. in innovation studies/economics of technological change from the University of Oslo. His main research interest is the economics of high-technology industries, and in recent years he has focused on the information and communications industry and the defence industry.

References

Arthur, W. B. 1989. “Competing Technologies, Increasing Returns, and Lock-in by Historical Events.” Economic Journal 99 (394): 116–131.
Baran, P. 1971. Potential Market Demand for Two-way Information Services to the Home 1970–1990. Menlo Park, CA: Institute for the Future.
Braun, E., and S. Macdonald. 1982. Revolution in Miniature: The History and Impact of Semiconductor Electronics Re-explored in an Updated and Revised Second Edition. Cambridge: Cambridge University Press.
Byte. 1977. “Advertisement by Technical Systems Consultants.” Byte 25.
Campbell-Kelly, M. 2003. From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. Cambridge: MIT Press.
Campbell-Kelly, M., and W. Aspray. 1996. Computer: A History of the Information Machine. New York: Basic Books.
Campbell-Kelly, M., and D. D. Garcia-Swartz. 2008. “Economic Perspectives on the History of the Computer Timesharing Industry, 1965–1985.” IEEE Annals of the History of Computing 30 (1): 16–36.
Ceruzzi, P. E. 1998. A History of Modern Computing. Cambridge: MIT Press.
Ceruzzi, P. E. 2002. “Personal Computer Software.” In From 0 to 1: An Authoritative History of Modern Computing, edited by A. Akera and F. Nebeker. New York: Oxford University Press.
Chandler, A. D., and T. Hikino. 1990. Scale and Scope: The Dynamics of Industrial Capitalism. Cambridge, MA: Belknap Press.


Cortada, J. W. 1996. Information Technology as Business History: Issues in the History and Management of Computers. Westport, CT: Greenwood Press.
Cringely, R. X. 1996. Accidental Empires: How the Boys of Silicon Valley Make their Millions, Battle Foreign Competition, and Still Can’t Get a Date. London: Penguin.
Datamation. 1966a. “The Computer Utility – A Public Policy Overview.” Datamation 22–39.
Datamation. 1966b. “Time-sharing and Multiprocessing Terminology – Toward Standardized Usage.” Datamation 49–51.
Datamation. 1966c. “Time-sharing Measurement – System & User Characteristics.” Datamation 22–26.
Datamation. 1967. “Ama’s Briefing on the Computer Utility – Conference Report.” Datamation 65–67.
Datamation. 1968. “Government Policy Implications in Data Management – What to Do.” Datamation 37–40.
Datamation. 1969. “The ‘69 Time-sharing Gold Rush – Some Must Fall.” Datamation.
Datamation. 1977. “Time-sharing – Standards for Time-sharing.” Datamation 139.
Davies, A. 1996. “Innovation in Large Technical Systems: The Case of Telecommunications.” Industrial and Corporate Change 5 (4): 1143–1180.
Englander, I. 2000. The Architecture of Computer Hardware and Systems Software: An Information Technology Approach. New York: Wiley.
Fevolden, A. M. 2013. “The Best of Both Worlds? A History of Time-shared Microcomputers, 1977–1983.” IEEE Annals of the History of Computing 35 (1): 23–34.
Geels, F. W. 2002. “Technological Transitions as Evolutionary Reconfiguration Processes: A Multi-level Perspective and a Case-study.” Research Policy 31 (8–9): 1257–1274.
Geels, F. W. 2005. “The Dynamics of Transitions in Socio-technical Systems: A Multi-level Analysis of the Transition Pathway from Horse-drawn Carriages to Automobiles (1860–1930).” Technology Analysis & Strategic Management 17 (4): 445–476.
Geels, F. W., and J. Schot. 2007. “Typology of Sociotechnical Transition Pathways.” Research Policy 36 (3): 399–417.
Hughes, T. P. 1983. Networks of Power: Electrification in Western Society, 1880–1930. Baltimore: Johns Hopkins University Press.
InfoWorld. 1980. “Two Current Approaches – Taking Stock of Multiuser Systems.” InfoWorld 15 and 23.
Jackson, T. 1997. Inside Intel: Andy Grove and the Rise of the World’s Most Powerful Chip Company. New York: Penguin Putnam.
Langlois, R. N. 1992. “External Economies and Economic Progress – The Case of the Microcomputer Industry.” Business History Review 66 (1): 1–50.
Langlois, R. N., and W. E. Steinmueller. 1999. “The Evolution of Competitive Advantage in the Worldwide Semiconductor Industry, 1947–1996.” In Sources of Industrial Leadership: Studies of Seven Industries, edited by R. R. Nelson and D. C. Mowery, 19–78. Cambridge: Cambridge University Press.
Licklider, J. C. R. 1988. “Man-computer Symbiosis.” In A History of Personal Workstations, edited by A. Goldberg, 131–140. New York: ACM Press.
McKenney, J. L., R. O. Mason, and D. C. Copeland. 1995. Waves of Change: Business Evolution through Information Technology. Boston, MA: Harvard Business School Press.
Nelson, R. R., and S. G. Winter. 1982. An Evolutionary Theory of Economic Change. Cambridge, MA: Belknap Press.
Newsweek. 2002. “Life in the Grid.” Newsweek, 64–67.
Nightingale, P. 2000. “Economies of Scale in Experimentation: Knowledge and Technology in Pharmaceutical R&D.” Industrial and Corporate Change 9 (2): 315–359.
Nightingale, P., T. Brady, A. Davies, and J. Hall. 2003. “Capacity Utilization Revisited: Software, Control and the Growth of Large Technical Systems.” Industrial and Corporate Change 12 (3): 477–517.
Norberg, A. L., K. J. Freedman, and J. E. O’Neill. 1996. Transforming Computer Technology: Information Processing for the Pentagon, 1962–1986. Baltimore: Johns Hopkins University Press.
O’Neill, J. E. 1992. “The Evolution of Interactive Computing through Time-sharing and Networking.” Ph.D. diss., University of Minnesota.
Pugh, E. W. 1996. Building IBM: Shaping an Industry and Its Technology. Cambridge: MIT Press.
Rip, A., and R. Kemp. 1998. “Technological Change.” In Human Choice and Climate Change, edited by S. Rayner and E. L. Malone, 327–399. Columbus, OH: Battelle Press.
Smith, A., and R. Raven. 2012. “What is Protective Space? Reconsidering Niches in Transitions to Sustainability.” Research Policy 41 (6): 1025–1036.
Tether, B. S., and J. S. Metcalfe. 2003. “Horndal at Heathrow? Capacity Creation through Co-operation and System Evolution.” Industrial and Corporate Change 12 (3): 437–476.