J Cult Econ (2013) 37:271–307 DOI 10.1007/s10824-012-9183-5 ORIGINAL ARTICLE

The pricing of soft and hard information: economic lessons from screenplay sales

William N. Goetzmann • S. Abraham Ravid • Ronald Sverdlove



Received: 4 July 2011 / Accepted: 9 August 2012 / Published online: 12 September 2012
© Springer Science+Business Media New York 2012

Abstract This paper uses a unique data set on screenplay sales to learn how the information content of a sales pitch affects sale prices. This is one of the few studies that analyze ''soft information'' outside the banking industry. We find that ''soft information'' proxies, such as the descriptive complexity of a pitch, depress prices, in particular for less experienced writers, supporting the common industry view that high concept (short and simple) screenplays sell better. ''Hard information'' (measurable experience) variables are priced as well. We also find that large studios shun ''soft information'', whereas small companies handle it better, as predicted by most theories. In the last part of the paper, we find that, surprisingly, buyers seem to be able to forecast the eventual success of a project based upon the purchased script, paying more for screenplays which will eventually culminate in more successful movies. In other words, perhaps ''somebody knows something''.

Keywords Soft information · Screenplays · Movies · Contracts

W. N. Goetzmann
Edwin J. Beinecke Professor of Finance and Management Studies, Yale School of Management, Yale University, New Haven, CT 06520, USA
e-mail: [email protected]

S. A. Ravid (corresponding author)
Sy Syms Professor of Finance, Syms School of Business, Yeshiva University, New York, NY 10033, USA
e-mail: [email protected]

R. Sverdlove
School of Management, New Jersey Institute of Technology, Newark, NJ 07102, USA
e-mail: [email protected]


1 Introduction

This paper uses a unique data set of screenplay sales to show how ''soft'' and hard information interact in determining the price paid for a product, in this case, screenplays. We analyze the elements of the pitch that sells a screenplay as well as the other important ingredients that determine transaction viability and pricing. We also provide some insights into the types of contracts used, and finally we test whether the prices paid for screenplays are rational, based on the completed product.

Practically all assets, enterprises, and transactions are evaluated using both ''soft'' and ''hard'' information. Considerable research, as well as the experience of market participants, suggests that numbers do not completely capture the nuances of such things as quality, counterparty risk, manager reputation, and the potential for success. A major challenge for the economist hoping to study the importance of ''soft'' information is its unquantifiable nature. Most of the literature on the impact of soft information has focused on the financial services industry, a setting in which there is considerable numerical information about assets and where one might expect only limited deviation from reliance on quantitative models (see Petersen 2004 for an insightful survey). As a consequence, studies of soft information treat it as the residual in an estimation model, or capture its effects through indirect proxies such as distance.

In this paper we use our unique data set to provide more direct evidence of how soft information and hard information interact and how they affect the sale price of an asset. Importantly, since different types of organizations purchase scripts, we can also shed some light on how different types of organizations process information. Specifically, we begin by analyzing the description provided at the time of sale of screenplays (the sales pitch) to predict pricing and contract design. Our proxies assess the complexity of the script, and include the number of words in the pitch, the number of genres used in the description, and whether it is possible to summarize the new idea with references to movies that have already been released. We use these ''soft measures'' as well as ''hard measures'' of screenplay writers' experience to estimate pricing and contract design. Our findings support the industry-accepted wisdom that ''high concept'' (short and sweet) pitches sell better.

We also complement a new and innovative study by Eliashberg et al. (2007) by correlating the success of movies with the prices paid for the original screenplays. We show that high-priced scripts lead to more profitable movies, supporting the idea that markets are rational and casting some doubt on the famous movie industry observation that ''nobody knows anything''.

The rest of the paper is organized as follows. The next section discusses related literature. We then focus on soft information in the film industry and make our empirical predictions. After discussing the institutional background, we describe our data and research design. Finally, we proceed with our results and conclusions.

1.1 Related literature

Our paper is related to several strands of literature. The marketing literature often uses advertising expenditures as an independent variable determining various aspects of the economic success of products.


Specifically, in the context of the motion pictures industry, advertising expenditures appear in various types of revenue equations (see Elberse and Eliashberg 2003; Basuroy et al. 2003; or Palia et al. 2008). In contrast, this paper analyzes in detail the descriptive elements that are salient in the purchase decision. In particular, we focus on ''soft information'' elements in the sales pitch.

''Soft information'' has been a topic of study in economics, finance, organizational behavior, and marketing. The theoretical literature in this area focuses on the impact of soft information on organizational structure, and most of the empirical testing considers the financial services sector. Aghion and Tirole (1997) initiated the idea of the link between the type of information available, the ability to communicate, and organizational structure. The notion of soft information as being (perhaps infinitely) costly to transmit is formalized in Stein (2002), who shows that in the presence of soft information, decentralization can allow good projects to be funded by providing the correct incentives to division managers who can engage in research. He defines ''soft information'' as information that can be verified only by the agent who produces it, and cannot be unambiguously documented. Rajan et al. (2010) similarly suggest that soft information is ''information that cannot be communicated to a third party'' (ibid. p. 5). Impossibility is an approximation; almost everything can be done at a cost, and thus we will classify soft information as information that is very costly to communicate to third parties. Numerical information, such as a credit score or years of experience (in our paper), is easy to communicate. Whether someone is reliable or a screenplay is worthwhile is more difficult to convey.

Petersen (2004) suggests that soft information has been in the background, but not necessarily the foreground, of various theoretical papers. However, Petersen (2004) admits: ''I do not have a simple definition of what information is hard and what is soft,'' but he provides some specific characterizations of hard information: it is numerical, collected impersonally, and evaluated in the same way by different people. The examples he uses are revenue figures (hard information) versus honesty (soft information). Petersen correctly notes that a scale can be found for soft information, but that does not make it hard. His survey also discusses empirical work, suggesting that small banks may be more comfortable with soft information, whereas larger banks process hard information better. Firms that are less informationally transparent have a lower probability of loan approval, but such firms tend to do better with smaller banks that can better evaluate soft information. Petersen and Rajan (2002) focus on the distance between lenders and borrowers. Their finding which is most relevant to the current discussion is that more informationally opaque firms tend to borrow from nearby lenders. Informational transparency is measured by the availability of a firm credit card, the availability of tax and income records, and being a franchise.1 The idea of distance as an indirect measure of social relations is also developed in Uzzi (1999) and Uzzi and Gillespie (2002).

1 However, a time variable turns out to be important and the authors note that ''The relation between predicted distance and credit availability is weakening over time'' (p. 2566), possibly indicating technological progress.


These papers, which approach the problem from a management-sociological viewpoint, study the impact of ''embeddedness'' (closeness) of banking relationships on the cost of capital and on trade credit relationships, respectively. The main proxies used for ''embeddedness in a social network'' are the duration of the relationship and ''multiplexity'', i.e. the number of business and personal services used by the borrower. Uzzi (1999) finds that a closer relationship significantly lowers the cost of capital, and Uzzi and Gillespie (2002) find that it affects trade credit relationships.

Berger et al. (2005) use a matched sample of banks and firms. They find that large banks lend in a more ''impersonal'' way and are less willing to deal with difficult, implicitly high soft information cases.2 Liberti and Mian (2009) consider a natural experiment following a change of the hierarchical structure at an Argentinean bank. They find that soft information variables (namely, risk assessment measures of management, measures of the competitive environment, and access to capital markets and other bank relationships) are significant in determining the pricing of working capital loans after the change. However, Liberti and Mian (2009) also state: ''A strict definition of soft information makes it impossible to be codified and hence (by assumption) soft information cannot be observed by an econometrician''. In a recent paper, Gill and Sgroi (2012) detail the conditions under which a monopolist will choose ''tough'' versus ''soft'' product tests when launching a new product. Passing a ''soft'' test is easier, but less informative than passing a ''tough'' test. In a way this idea may be similar to the choice of a ''hard'' or a ''softer'' pitch for a movie.

In summary, there is still no generally accepted theory of ''soft information''. However, there are several models and much testing of the impact of ''soft information'' on various aspects of decision-making, most often in the financial services industry. The proxies are generally indirect, as discussed earlier. We will proceed with the definition of soft information as information that is very costly to communicate, and we will use our own proxy, which is relevant to our data.

Another related strand of literature is empirical studies of contract design. These studies use observed contract features to test ideas in the vast theoretical contract design literature. Much of the work was done in the venture capital sector, where contracts are available for inspection. Gompers and Lerner (1996) consider the use of covenants in venture capital contracts. They suggest that covenants may be used instead of adjusting pricing or specifying an 80/20 contract split to reflect agency problems and supply and demand conditions in the market for venture capital. Kaplan and Strömberg (2003) provide a detailed analysis of the features and dynamics of venture capital contracts. They suggest that different features may conform to different theories of contract design. In the biotechnology industry, the focus is, again, on the distribution of various rights in the contracts (see, for example, Lerner and Merges 1998). However, these papers discuss control and effort issues in contracts relating to projects that are in progress. We consider the sale of a completed product.

Perhaps closer to our work are Banerjee and Duflo (2000) and Chisholm (1997). Banerjee and Duflo (2000) show that better reputation (in Indian software companies) leads to a lower prevalence of fixed payment contracts, which provide more incentives to the firms than the ''contingent'' (time and materials) contract. Chisholm (1997) analyzes several dozen actor contracts and considers who is more likely to receive a share contract.

2 See Petersen and Rajan (1994) and Petersen and Rajan (2002), in addition to Berger et al. (2005).


Finally, Harris et al. (2012) provide a theory of intellectual property pricing, illustrated with data which overlaps the data in this paper. The model considers two agents, a buyer (say a studio in our case) who maximizes profits from each purchase and a seller (a screenwriter in our case) who maximizes lifetime income. The seller and the buyer may disagree on the competency of the seller (writer). Every purchase provides information to the market and builds the reputation of the seller. As reputation increases, the seller can command a higher price in subsequent periods. Because of the difference in opinions, a contingent contract, such as we see in our data, may arise even if the seller is risk averse and the buyer is risk neutral. In this context, soft information is essentially a barrier, or a cost, to more efficient transactions (this is very similar to the ideas in the banking literature). If it were clear in advance which screenplays will be produced and, once produced, how they will fare, contracting would be easier. The purpose of our current paper is essentially to show how this informational cost affects pricing.

We should finally mention the only two studies (to our knowledge) which directly address screenplay pricing. Eliashberg et al. (2007) use ''semantics'' (numerical measures of the text) and the ''bag-of-words'' methodology for content analysis of the ''spoiler'', a detailed movie description. These data are used to predict the return on investment for the resulting movie (US box office/budget). We use measures similar to semantics to analyze the pitch that led to the sale. We have much shorter texts (our average pitch is 26 words whereas the average ''spoiler'' is 1,642 words),3 but we have more screenplays. Also, our focus is on the actual sale and not the return on subsequent investment, although we do discuss the movie projects as well.4 In a recent working paper, Luo (2011) provides a theoretical model and uses a database similar to ours to predict whether a writer should sell a pitch or a completed screenplay. Some of her results support the view here and in Harris et al. (2012) that the experience of the screenwriter is a most important ''hard information'' element. In Luo (2011), experience helps in predicting the decision when to sell and the price of a screenplay.

1.2 Testing

1.2.1 How we measure soft and hard information

The literature surveyed in the previous section does not provide a single definition of soft information.

3 See Table 2 of Eliashberg et al. (2007), and our data description.

4 Chevalier and Mayzlin (2006) consider the number of characters in reviews posted for books sold on Amazon.com and BN.com as a value measure. Their findings suggest that longer reviews are required to support a ''mixed'' review, i.e. 1 star (worst) and 5 star (best) reviews are associated with shorter reviews. This is consistent with our view of length as a measure of complexity and nuance. Godes and Mayzlin (2004) find that more complicated measures are very noisy. As discussed earlier, Eliashberg et al. (2007) use the ''bag-of-words'' methodology as part of their assessment of the text of ''spoilers''. For a discussion and implementation of some of the automated methodologies, see, for example, Tetlock (2007).


If we put the empirical proxies in the context of the theoretical literature, we may identify the main characteristic of soft information as a high cost of transmission, which may lead to different interpretations by different people; the latter aspect is related but harder to operationalize. Petersen (2004, p. 5) also suggests that soft information ''is often communicated in text.'' As discussed earlier, we consider ''softer'' information an additional cost in the transaction. In other words, if the information we can provide about the screenplay is ''softer'', it will be costlier to transmit. In our context, we suggest that more complex project descriptions proxy for ''softer'' information. In order to understand a complex summary (see discussion below), a costly discussion and detailed reading of the script may be required, and even then it will be difficult to visualize the resulting movie.

We proxy for soft information using measures of the descriptive complexity of the sales pitch, in particular the number of words in the ''logline'' (abstract, pitch). This is similar to the semantic measures used in Eliashberg et al. (2007), which turn out to be the ''more relevant'' measures in their analysis (see their Figure 5). Thus, our proxies do not ''measure'' soft information, but indicate that it is present. In order to suggest why short loglines may be easier to transmit, here are two examples from our sample:

Greatest Escapes: ''Several 12 year old kids escape from a camp from hell.''

On any given Saturday Remembering the Titans Gives me the Varsity Blues: ''Spoof of football movies.'' [Note that the title is longer than the logline.]

A complex description means that one probably has to read the screenplay in detail before a purchase decision can be taken. Furthermore, the resulting movie may be harder to visualize and may be interpreted differently by different people even after they have read the screenplay. This is equivalent to a high cost of transmission. Here are two examples of longer, more complex loglines from our sample:

Joe Somebody: ''Corporate guy who is divorced and at the end of his rope is beaten up and humiliated by a co-worker over a parking space. He confronts his fears and in the process comes to terms with what he wants out of life and ultimately falls in love again''.

Tick Tock: ''Amnesiac wakes up to find that he is in FBI custody, as the prime suspect in a series of LA bombings. Without knowing whether he is really the bomber or just someone set up to look like he is, he must lead a young, female FBI agent on a desperate search through Los Angeles for the remaining explosives, before they detonate.''

We also use two other measures: the number of other movies mentioned in the logline and the number of genres assigned to the screenplay. If a description specifies that the story in question is ''very similar to The Godfather'', people will have a clearer visualization of the movie that can be made from the screenplay, so that one can say the information is ''harder''. Here is an example from our sample:

Act of treason: '' 'In the line of Fire' meets the 'Bodyguard'.''

Multiple genres indicate a ''fuzzier'' screenplay (e.g. a genre designation of ''action/adventure/comedy'' for the screenplay of ''Spoils of War'' as opposed to a genre designation of ''comedy'' for the screenplay ''Special'').


A fuzzier classification may require industry participants to pay more attention to the details of the story, which makes transmission costly, and is also conducive to very different assessments of the resulting movie by different people. Consider again ''Spoils of War'':

Spoils of war [genre: action adventure comedy]: ''A newly found treasure map leads three soldiers to look for rewards just days before the Kuwait desert storm invasion.''

The fuzzy specification of genres makes the screenplay harder to visualize. (Why is this a comedy?)5 Our proxies, which are indicative of ''softer'' screenplays, are also similar in a sense to the ''embeddedness'' measures that indicate a ''softer'' banking relationship.6

The ''hard'' element in selling the screenplay is the reputation of the writer. We measure the screenwriter's experience by the number of films he has sold and the number and quality of awards he has received. These measures indicate the potential value of the film the screenplay may lead to. Also, as noted, this is what we expect from a model such as Harris et al. (2012), which predicts that writers will be paid more as their reputation increases. These measurable characteristics are similar to hard information variables such as the credit rating or global relationship banking used in Liberti and Mian (2009) and the borrower's credit payment record used in Petersen and Rajan (2002). The venture capital literature uses similar experience and reputation measures (see the repeat entrepreneur variable in Kaplan and Strömberg 2003). Banerjee and Duflo (2000) also use reputation measures as independent variables in their contract design regressions.

We also consider the characteristics of the studios that buy the screenplays. Stein (2002), Petersen and Rajan (2002), Berger et al. (2005), and others suggest that more hierarchical firms are less able to evaluate soft information and more likely to depend on hard information, because multi-layered firms will have more difficulty transmitting soft information through the hierarchy. As in the banking literature, we use company size as a proxy for hierarchy, and thus expect larger studios to pay relatively more for ''harder'' screenplays, all else equal. In order to better handle ''softer'' screenplays, large studios may offer a contingent contract. Therefore we will test the impact of studio size on screenplay sales.

1.2.2 Forward looking prices

Since screenplays are ''interim products'' that generally lead to a final product, after we test the impact of the pitch and the screenwriter information on sale prices, we will look at the valuation of the finished product, in this case measured by revenues and rates of return of film projects. The question is whether the prices are forward looking, that is to say, whether the studio will pay more for screenplays that eventually lead to more successful movies.

5 Note that about two-thirds of the screenplays are given only one genre, which supports the validity of this measure.

6 One may conjecture that, as in the banking papers, if one has a closer relationship, soft information will be less of an issue. The industry is relatively small, but we also try to control for this issue by using a variable for a screenwriter having a manager.


This part of the paper is similar to Eliashberg et al. (2007) in the sense that we are predicting the success of a movie based on screenplay characteristics. In our paper, however, in addition to control variables regarding the movie itself, we have a market-based measure of the quality of the screenplay, namely, its purchase price. While efficient markets die-hards would probably expect prices to predict the success of the project, this is not the prevailing industry belief. The very successful screenwriter William Goldman famously summarized these beliefs in 1983 with the phrase: ''Nobody knows anything''.7

1.3 Background, data, and variables

The process of turning an idea into a completed movie is complex and long, and even selling a screenplay is a difficult task. One can register a screenplay with the Writers Guild of America (WGA); however, a writer will need an agent in order to submit a screenplay to a studio or production company. Getting an agent may not be trivial: quite a few agencies do not accept unsolicited manuscripts,8 and represent only people who are referred by people they know. The agent may submit a screenplay to be evaluated by a production company. Most major studios have several layers of screening before a script ends up in the hands of someone who can make a purchase decision. The WGA sets minimum prices for screenplays, which in early 2004 (somewhat later than the last sale in our dataset) were around $50,000 for a low budget movie and up to $90,000 for a high budget film. However, a purchase (which is when the screenplay appears in our data), even at a very high price, is no guarantee of production. It may still take a while for anything to happen. First, screenplays are ''developed,'' that is, changed, re-written and adapted to both the creative and pragmatic (budget) requirements of the purchasing entity.9 Then, even if everybody is happy with the final write-up, there may not be a studio that is willing to finance and distribute the film.10

Fundamental to the sales and marketing process for a screenplay is the ''pitch,'' the basic concept of the screenplay boiled down to a paragraph or two that can be delivered in writing or verbally by a writer or agent. The pitch must explain the potential appeal of the story, without the details of the actual script. The common belief in Hollywood is that a ''high concept'' script, one with a simple pitch, is more valuable and easier to sell to readers and producers.11 This presents a challenge to our analysis.

7 Clearly, even if we find that prices predict the success of the project (as we do), there are many other elements that define the success of a movie, and thus it is difficult to show that ''nobody knows anything'' is in fact incorrect; but the correlation we find is compatible with rational, forward looking pricing.

8 See WGA.org.

9 A playwright contractually controls a play written for the theater. No one is allowed to change her lines. In the movie business this is very different. Don Jacoby, who received 1.5 million dollars for his script, told Variety in November 1998, ''Not eight words from the original script were in the movie''.

10 The film industry boasts a large number of people who make a very nice living writing screenplays, but have rarely if ever had anything they wrote actually produced.

11 Cf. Orr, Bonnie, ''High Concept,'' Screentalk.biz, http://www.screentalk.biz/art043.htm. See also Lerch (1999), Downs and Russin (2003), and ''Marketing Your Screenplay'', Jerrol LeBaron, Movie Maker #54, Vol. 11, 2004, p. 68.


Absent such common knowledge, the description length would be driven entirely by the need to best sell the script concept. Since many people believe in the ''high concept'' idea, we would expect writers to cut their descriptions short to increase the probability of sale. The descriptions we have are all relatively short, but variation in complexity does exist: the loglines in the entire dataset range from 2 to 96 words, with an average of 25. Clearly, brevity is not the only criterion that writers use in developing their pitches. We suggest that these variations may be the result of a process that moves toward a ''separating equilibrium'' in screenplay sales. Only ''harder-information'' scripts can be reduced to a few words without losing the ability to communicate the plot. For more complex concepts, a brief description will lose so much information as to render the pitch worthless. Thus, writers and agents will incur the ''cost'' of a longer pitch in order to signal the presence of a ''softer'', more complex plot line.12

We gather data on the screenplay ''pitch'' or ''logline'' (the description used to sell the script) as well as screenwriter compensation and experience, script complexity, and movie financials and characteristics. Our main source of information is the 2003 Spec Screenplay Sales Directory, compiled by Hollywoodlitsales.com. It contains 1,269 screenplays sold over about 6 years. The information provided on each sale usually includes: title, pitch (presumably, as provided by the agents of the buyer or seller), genre, agent, producer, date of sale, purchase price, and buyer. Sometimes additional information is provided. This additional information (definite or tentative) may identify parties who are interested in the project.13

We have a purchase price for 778 scripts (61.31 % of the total sample). The price may be an exact number (which we have for 224 scripts, 28.79 % of scripts with available price, 17.65 % of the sample). In other cases, the Spec Screenplay Sales Directory may record an approximate price (554 scripts). This is generally recorded as, for example, mid-600's or low 400's. In the latter case, we transform the price range into an estimate (for instance, low five figures is transformed into $25,000; high six figures is transformed into $750,000).14 Using these numbers and transformations, we analyze the data further.15

12 This type of choice may be similar in nature to the choice between ''tough'' and ''soft'' testing for newly launched products in Gill and Sgroi (2012).

13 Here are some examples of the additional information provided. The following comment was added to the description of the screenplay entitled ''Kungfu Theater'': ''DreamWorks purchased project from Mandalay which bought it in September 2000 for six figures.'' An example of information about the screenwriter's path to developing the screenplay is found in the comments on ''Lightning'' by Marc Platt: ''The writer based screenplay on 1997 novel, 'A Gracious Plenty' which he optioned out of his own pocket. Writer is also a producer''. The information may be tentative, e.g. regarding the script ''Last Ride'', it was noted that ''Ron Howard might direct.'' In other cases, the information is more definite, e.g. in the notes for the screenplay entitled ''Mickey'' we find that ''Harry Connick Jr. is in talks to star; Hugh Wilson will direct''.

14 There is one exception to this rule. Two movies have (non-contingent) prices listed as ''eight figures''. Since the highest exact price that we have is $11 million, and we have seen references to record script prices for various studios as being at most in the low seven figures, we have estimated these two prices to be $10 million. Other than these three, the next highest prices in the database are $5 million or ''mid-seven figures'', of which there are 15.

15 Before doing the analyses, all prices are adjusted to 2003 dollars using the annual average Consumer Price Index factors from the Bureau of Labor Statistics, available at http://www.bls.gov.
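To make the transformation concrete, here is a minimal sketch of how reported price descriptors could be mapped to dollar estimates and restated in 2003 dollars. The two mapped descriptors simply reproduce the examples given above; the remaining mapping entries, the CPI deflators, and the function name are placeholders of our own for illustration, not the authors' code or the actual BLS factors.

# Illustrative sketch only: convert reported prices to 2003-dollar estimates.
# Only the two descriptors quoted in the text are reproduced; the CPI factors
# below are placeholders, not the values used in the paper.

RANGE_ESTIMATES = {
    "low five figures": 25_000,
    "high six figures": 750_000,
    # ... other descriptors would be added analogously
}

# Placeholder deflators: ratio of the 2003 CPI to the sale-year CPI.
CPI_TO_2003 = {1998: 1.12, 1999: 1.10, 2000: 1.06, 2001: 1.03, 2002: 1.02, 2003: 1.00}

def price_in_2003_dollars(reported, sale_year):
    """Map an exact price or a price descriptor to an estimate in 2003 dollars."""
    nominal = reported if isinstance(reported, (int, float)) else RANGE_ESTIMATES[reported]
    return nominal * CPI_TO_2003[sale_year]

print(price_in_2003_dollars("low five figures", 2000))  # 26500.0
print(price_in_2003_dollars(1_000_000, 2002))           # 1020000.0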


As discussed earlier, screenwriters may be offered two types of contracts. The first is a fixed payment, non-contingent contract. There are 299 such screenplays in our sample (38 %). Alternatively, the screenwriter may be offered a contingent contract; 489 of the scripts in our sample fit this description. (Note that there are 10 scripts for which we know the type of contract but not the price.) In a contingent contract the screenwriter receives an initial payment upon contract signing and an additional amount if the script is produced. Average compensation in non-contingent contracts is (in thousands) $1,241.28 (standard deviation, 4,440.23). In contingent contracts, the average initial payment is much lower, $455.76 (standard deviation, 374.81); total compensation if the script is finally produced is $987.99 (standard deviation, 1,005.38).

A screenplay, as we explained earlier, needs to pass several layers of approval. The logline is the first step in that process, and it is widely regarded as a vital part of getting the project accepted by an agent and then a studio.16 In order to assess the descriptive complexity, which, as we have suggested, indicates the soft information content of the screenplay, we start with a simple measure, namely, the number of words in the logline (LogWords). Out of 1,269 scripts, the Directory lists the logline for 1,218 scripts (95.98 %). The average logline description contains 25.92 words (standard deviation, 13.65). Since the number of words is a rough approximation, and different types of descriptions require more or fewer words for the same level of complexity, we also created a coarse division to approximate the fundamental differences in complexity. SoftWords is an index variable, which equals 0 if the logline contains up to 20 words; 1 if it contains between 21 and 30 words; 2 if it contains between 31 and 40 words; and 3 if it contains more than 40 words.17 When we want to emphasize the soft aspect of a description, we use the variable HighWords, a dummy that equals 1 if the logline contains more than 40 words (SoftWords = 3) and 0 otherwise.

The logline may be just descriptive or may contain references to existing movies. Eighty-five scripts (6.98 % of the scripts for which we have the storyline) mention at least one movie in the story line (29 mention two movies). SoftLogMovies equals 1 if the logline refers to any other movie and zero otherwise. We assume that an analogy or reference to other movies makes the logline more transparent. Additional information is provided for 573 scripts (45.15 % of the sample). As discussed earlier, this information may make the script easier to interpret. We create a dummy variable for the availability of additional information (InfoDummy), which is equal to 1 if additional information is provided.

The discussion of soft information in the previous section should make it clear that soft information measures are bound to be noisy. Thus, even if we have the correct characterization, most of the action should probably be in the extreme cases.

16 For example, see advice on crafting the logline at http://www.inktip.com/tips-loglines.php.

17 Instead of using a single categorical variable, SoftWords, for logline length, we tried running a dummy variable for each range. As expected, results were similar. For purposes of sensitivity analysis, we also tried different ranges, where SoftWords received the value of zero for loglines under 15 words, and under 25 words respectively. There was very little difference in the empirical results, including runs with the changed TransparentScript variable, and thus we did not include these specifications in our tables.
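As a purely illustrative sketch, the logline-based measures defined above could be constructed as follows. The pandas implementation, the field names, and the two sample rows (taken from the loglines quoted earlier) are our assumptions, not the authors' code.

import pandas as pd

# Two loglines quoted earlier in the text serve as sample rows (field names are assumed).
scripts = pd.DataFrame({
    "title": ["Greatest Escapes", "Tick Tock"],
    "logline": [
        "Several 12 year old kids escape from a camp from hell.",
        "Amnesiac wakes up to find that he is in FBI custody, as the prime suspect "
        "in a series of LA bombings. Without knowing whether he is really the bomber "
        "or just someone set up to look like he is, he must lead a young, female FBI "
        "agent on a desperate search through Los Angeles for the remaining explosives, "
        "before they detonate.",
    ],
})

# LogWords: the number of words in the logline.
scripts["LogWords"] = scripts["logline"].str.split().str.len()

# SoftWords: 0 for up to 20 words, 1 for 21-30, 2 for 31-40, 3 for more than 40.
scripts["SoftWords"] = pd.cut(
    scripts["LogWords"], bins=[0, 20, 30, 40, float("inf")], labels=[0, 1, 2, 3]
).astype(int)

# HighWords: dummy for the softest loglines (more than 40 words).
scripts["HighWords"] = (scripts["SoftWords"] == 3).astype(int)

print(scripts[["title", "LogWords", "SoftWords", "HighWords"]])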


We create a very simple script complexity index, TransparentScript, that equals 1 when the logline contains up to 20 words (i.e. SoftWords equals 0) and additional information about the script is available (i.e. InfoDummy equals 1). TransparentScript is equal to 1 for 227 scripts (17.9 % of the sample). Genres are commonly considered to be important variables in studies of films (for a recent example, see De Vany 2004). We use the variable SoftGenres, which equals one if the number of genres assigned to the script is at least 2 and zero otherwise.18

The next set of variables describes our ''hard information'', namely, the screenwriter's experience and past success.19 The average number of previously produced scripts is 2.0236 per screenwriter. The writers of 730 scripts have not sold any previous work. ReputationMovies takes the value 0 if the screenwriter has never had any screenplay produced or sold; 1 if the screenwriter has had between 1 and 3 scripts produced; 2 if between 4 and 10 scripts have been produced; and 3 if the screenwriter has previously had more than 10 scripts produced. The FirstMovie variable is 1 if no previous screenplay has been sold and 0 otherwise. We use additional ''hard achievement'' variables measuring the numbers of Oscars and other awards which the writer had won or been nominated for.20

18 Four hundred and sixty-five scripts (36.64 % of the sample) are assigned more than one genre (453 are assigned two genres, 12 three genres). We group the different genres reported by Spec Screenplay Sales Directory into six broad categories: action (189 scripts), comedy (571 scripts), drama (257 scripts), romance (257 scripts), thriller (224 scripts), and other (123 scripts). Genres can be control variables (i.e. compensation may be higher for certain genres than for others), but can also serve as a measure of complexity, namely, if more than one genre is assigned to a screenplay, that may indicate more complexity and a higher component of soft information.

19 Clearly one can argue with definitions of hard and soft information. However, to our mind, external validation variables are at least as ''hard'' as past payment history in credit applications, which is a behavioral characteristic changeable at the applicant's will at any time. In any case, those variables are ''harder'' and more likely to be viewed the same way by different people [see Stein (2002) and Petersen (2004)] than a description of a plot line.

20 To measure screenwriter experience we search the Internet Movie Database (IMDb) for the number of scripts previously sold by the screenwriter and produced. If we find no entries, we also search our own database to see if this writer had previously sold any screenplay. The average number of previously produced scripts is 2.0236 per screenwriter (standard deviation, 5.5593). The writers of 730 scripts (57.52 % of the sample) have not sold any previous work. ReputationMovies takes the value 0 if the screenwriter has never had any screenplay produced (as per IMDb) or sold (in our database); 1 if the screenwriter has had between 1 and 3 scripts produced (which is the case for 348 scripts, 27.42 % of the sample); 2 if between 4 and 10 scripts have been produced (142 scripts, 11.18 % of the sample); and 3 if the screenwriter has previously had more than 10 scripts produced (49 scripts, 3.86 % of the sample). If we cannot find any produced screenplay in IMDb and no previous sale in our database, then our FirstMovie variable receives a value of one. For those who have had a screenplay produced, FirstMovie receives a value of zero. Details of the ''hard achievement'' variables are given in ''Appendix A''. NomOscar (AwardOscar) takes the value 1 if the screenwriter had been nominated for (had won) an Oscar prior to the current sale. AnyNom (AnyAward) takes the value 1 if the screenwriter had been nominated for (had won) an award in any of the major festivals tracked by imdb.com: Oscars, Golden Globes, British Academy Awards, Emmy Awards, European Film Awards, and awards from the festivals of Cannes, Sundance, Toronto and Berlin. For 71 scripts, the screenwriter had been nominated in a major festival; in 32 cases, the screenwriter had previously won an award in a major festival; in 27 cases, s/he had been nominated for an Oscar; and for 10 scripts, the screenwriter had previously won an Oscar.
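The experience variables defined above lend themselves to a similarly simple coding. The sketch below follows the thresholds stated in the text; the function names and the input convention (a single count of previously produced or sold scripts) are our own illustrative assumptions, not the authors' implementation.

def reputation_movies(prior_scripts: int) -> int:
    """ReputationMovies: 0 if none, 1 for 1-3, 2 for 4-10, 3 for more than 10 prior scripts."""
    if prior_scripts == 0:
        return 0
    if prior_scripts <= 3:
        return 1
    if prior_scripts <= 10:
        return 2
    return 3

def first_movie(prior_scripts: int) -> int:
    """FirstMovie: 1 if the writer has no previously produced or sold screenplay, 0 otherwise."""
    return 1 if prior_scripts == 0 else 0

# A writer with five previously produced scripts:
print(reputation_movies(5), first_movie(5))  # prints: 2 0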


Finally, an unknown screenwriter may use a manager to compensate for his lack of experience. Spec Screenplay Sales Directory reports that the screenwriters who wrote 172 of the scripts sold (13.55 % of the total sample) employ a manager.

The Internet Movie Database (IMDb) reports all films produced or in production. 311 scripts (24.51 % of the total sample) had been produced or were in production as of early 2004. The idea here is to consider a set of films that were produced in short order, in order to see whether executives understand the true potential of the property purchased. Clearly, the longer the time period from the original sale until the film is produced, the more likely it is that the screenplay will be modified and re-written, and the less likely it is that the original buyers had a precise idea of the production in mind. In other words, we are not trying to test predictive powers, but business acumen. We should note, however, that a check as of 2010 found only 39 additional films in production, so our data set includes most of the films that have ever been produced from the screenplays in our data set.

For each movie produced, we obtain financial performance figures from Baseline Studio Systems (blssi.com). Specifically, we have the budget (''negative costs'') and the domestic, international, video, and DVD revenues.21 We use two measures of return: total revenues over budget (''negative costs''), and total revenues over budget plus advertising and promotion expenditures. In spite of industry wisdom, promotional expenditures are highly correlated with the budget (see Ravid and Basuroy 2004). Therefore the two indices are highly correlated. However, we did use both to be consistent with the previous marketing literature discussed earlier.

For each film we obtain several additional control variables. MPAA ratings (in particular, family friendly ratings) have been shown to be a most important determinant of revenues and returns in a number of previous papers.22 We obtain ratings for all films released. We use several additional control variables representing star power and critical opinion. Star power can, in principle, affect box office revenues.23 To assess star qualities, we use IMDb, which provides a list of the director and up to 8 main cast members. We then classify each cast member following a procedure similar to the one used to measure screenwriter experience and past success. The variable Cast Nominated Oscar counts the total number of Oscar nominations for the film's 8 main cast members, prior to the film's production date. Cast Awarded Oscar, Cast Any Nomination and Cast Any Award have a similar interpretation.

Alternatively, we use the IMDb Starmeter to classify an actor as a star. Starmeter uses proprietary algorithms that take into account several measures of popularity for people and titles. The primary measure captures who or what is being viewed on the public imdb.com website. Other factors include box office receipts and user quality votes on a scale of 1–10. The rankings are updated on a weekly basis.

21 Before we do the analyses, we adjust all financial data from the release date to 2003 dollars using the annual average Consumer Price Index factors from the Bureau of Labor Statistics, available at http://www.bls.gov.

22 See, for example, Ravid (1999), Ravid and Basuroy (2004), Fee (2002), De Vany and Walls (2002), or Simonoff and Sparrow (2000).

23 The analysis of stardom goes back to the theory of Rosen (1981) and includes empirical work by Hamlen (1991) and Chung and Cox (1994). Ravid (1999), however, who considers specifically the movie industry, finds that star power is not a significant determinant of either revenues or return on investment.
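Written out in our own notation (the decomposition of total revenues follows the revenue components listed in the data description above), the two return indices are:

\[
\text{Return}_1 = \frac{\text{Domestic} + \text{International} + \text{Video} + \text{DVD revenues}}{\text{Negative cost (budget)}},
\qquad
\text{Return}_2 = \frac{\text{Total revenues}}{\text{Negative cost} + \text{Advertising and promotion}}
\]

As the text notes, because advertising spending is highly correlated with the budget, the two indices move closely together.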


We classify an actor as a star if he or she has a Starmeter ranking better than 150 in the first entry in January of the year the movie is released. For example, Edward Norton was the lead in the film The 25th Hour, released on December 19, 2002. Norton's ranking on January 6, 2002 was 99, so that according to the Starmeter classification he would be classified as a star.24 Our Starmeter variable counts for each film (similar to other cast reputation variables) the total number of cast members who were classified as stars in January of the year the movie was released. Using the different reputation variables, we create dummies as alternative measures of cast stardom. Thus, Cast Dummy Awarded Oscar, for instance, takes the value one if any cast member has been previously awarded an Oscar.25

We measure critical opinion using the Crix Picks column in the publication Variety, which lists reviews from various media outlets (including major papers and broadcast outlets) for the first weekend in which a film opens in New York, Los Angeles, and Chicago.26 The total number of reviews, Total Reviews, proxies for the attention the movie receives.27 In its Crix Picks column, Variety classifies reviews (based on critics' own assessments) as ''pro'', ''con'', or ''mixed.'' We use these classifications to create summary measures of critical opinion: Positive Reviews is the number of ''pro'' reviews divided by the total number of reviews. NonNegative Reviews is the number of non-negative reviews (i.e. pro plus mixed) divided by the total number of reviews.28

2 Results

2.1 Hard information, soft information, and screenplay sales

Table 1 provides data description and means comparisons. Each panel lays out the variables of interest for the entire sample, the sub-sample with contingent contracts, and the sub-sample of screenplays that were later produced. The t test for equality of means and the Kruskal–Wallis (K–W) test for equality of medians show that script prices are significantly different depending on the values of the information variables.

24 We experimented with Starmeter rankings of the highest 50 or highest 100, but that did not change the qualitative results. None of these variables was significant.

25 Continuing with our example, Cast Dummy Awarded Oscar takes the value one for the film The 25th Hour, since one of the film's cast members, Anna Paquin, received an Academy Award in 1994 for her role in the film The Piano.

26 In the earlier years of our sample, reviews from Washington, D.C. were also included and we include these as well when they are present. Their numbers are generally small. We do not include the reviews from London, where movies do not generally open at the same time as in American cities.

27 Ravid (1999) found that the total reviews variable significantly affected movie performance in his sample. In the marketing literature there is an active debate regarding the role of critics (see, for example, Eliashberg and Shugan 1997 and Basuroy et al. 2003), but there is no disagreement about the idea that reviews have an impact on the success of movies.

28 In Ravid (1999) only the total number of reviews mattered. However, Eliashberg and Shugan (1997) as well as Basuroy et al. (2003) found that reviews significantly affect weekly revenues.


The table suggests that two elements are salient in successfully selling screenplays: screenwriter experience and past success, and soft information or complexity. The number of movies previously credited to the screenwriter dramatically increases her compensation (Table 1, panel B). For example, the median writer who sells his screenplay can expect $303,000, whereas the compensation grows to over half a million ($550,000) for the median experienced writer. Similarly, writers who have written more successful screenplays (ReputationMovies = 3) are much less likely to receive a contingent contract (see Harris et al. 2012). Nominations of any kind, as well as Oscar wins, increase the writer's compensation significantly. Thus, panel B of Table 1 suggests that the ''hard information'' component is important for sale prices.

The next panel (panel C) includes ''soft information'' variables. The results suggest that shorter (''high concept'') loglines (SoftWords = 0) are associated with higher prices and a lower probability of a contingent contract. (Here the separation is between 0, 1, and 2 versus 3.) Similarly, screenplays that provide additional information are rewarded for it, and a ''transparent script'', which is a composite of the two measures, is worth more than a ''non-transparent'' one. An average ''transparent script'' sells for $836,000, whereas the average ''non-transparent script'' fetches only $622,000. The role of genres is described in panel D, but we cannot draw very clear conclusions from the numbers in this panel. In panel E we consider the variability of prices of screenplays with more or less soft information. The distribution of prices of ''softer'' screenplays is more variable, supporting the characterization of ''softness'' as a risk factor.29

In summary, the first five panels of Table 1 suggest that soft information in the pitch and hard information variables are priced and are important. Hard (experience) information increases prices. However, pitches that contain ''softer'' information lead to significantly lower prices.

The last three panels of Table 1 test the organizational structure issue, i.e. the willingness to pay for soft and hard information by larger, more hierarchical organizations versus smaller companies, which is discussed by much of the theoretical work on ''soft information''. We divide the set of screenplays into two groups. The first (Large) contains those scripts bought by the six large studios whose Chairmen and Presidents constitute the board of the Motion Picture Association of America, and their three large subsidiaries. The second group (Small) contains those screenplays bought by any other company among the more than 500 buyers in our data set.30 Panel F shows the mean and median calculations from panels A and B for each group separately and the differences of means and medians. Panel G shows only the differences. We performed t tests for the differences in these panels.

29 We thank Mitch Petersen for suggesting this test.

30 These six studios are listed on the home page of the Association at www.mpaa.org. We did the same tests using an expanded set of 17 studios for the Large set, taken from the data used in Einav (2007). The results were similar. We further divided the Large group into those screenplays bought by the large studio alone, and those bought by the large studio in partnership with a smaller studio. Comparing the same means for these two subgroups gives similar results. The partnerships with smaller studios are analogous to a large bank having smaller branches or subsidiaries closer to the customers, or having a decentralized organization, as described by Stein (2002). The large buyers alone use more hard information, while in partnerships with small studios, they may be able to make better use of the soft information.
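For readers who want to see the mechanics of the comparisons reported in Table 1, here is a minimal sketch of the two tests on simulated data. The group definitions and the numbers are invented for illustration, and scipy is our choice of tool; this is not the authors' code or data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated prices (in thousands) for two groups of scripts, e.g. TransparentScript = 1 vs 0.
prices_group1 = rng.lognormal(mean=6.5, sigma=1.0, size=200)
prices_group0 = rng.lognormal(mean=6.2, sigma=1.2, size=600)

# t test for equality of means (used for two-valued classifying variables).
t_stat, t_p = stats.ttest_ind(prices_group1, prices_group0, equal_var=False)

# Kruskal-Wallis test for equality of medians across groups.
kw_stat, kw_p = stats.kruskal(prices_group1, prices_group0)

print(f"t test: statistic = {t_stat:.2f}, p value = {t_p:.4f}")
print(f"Kruskal-Wallis: statistic = {kw_stat:.2f}, p value = {kw_p:.4f}")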


Table 1 Summary statistics for screenplay prices by hard and soft information variables

Panel A: Contract Summary Statistics
Panel B: Screenwriter Reputation (Hard Information)
Panel C: Soft Information
Panel D: Genres
Panel E: Variability of Price for Two Levels of Soft Information
Panel F: Contract Summary Statistics for Large and Small Buyers
Panel G: Hard Information: Large Buyer Value - Small Buyer Value
Panel H: Hard and Soft Information: Large and Small Buyer Values

[Panel entries (means, medians, and test statistics by variable) omitted; see the table notes below for the variable definitions and tests.]
This table summarizes the relationship between screenwriter compensation, screenwriter reputation and script complexity. The Price, Cont, and Produced columns give the means or medians of those variables as classified by the variables and values in the first two columns. The p values are for tests of equality across values of the classifying variables. For variables with two values (0, 1), the t test is used for equality of means. For variables with more than two values and for all medians, the Kruskal–Wallis (K–W) test is used for the equality of two or more medians. The last two columns analyze how screenwriter reputation and script complexity influence the type of contract offered to the screenwriter, as well as the probability that the script is ultimately produced.

Compensation variables include the price (in thousands of 2003 dollars) paid to the screenwriter (Price), which is either the price paid in non-contingent contracts or the initial price paid in contingent contracts. Cont is a dummy variable that takes the value 1 when the screenwriter is offered a contingent contract (i.e. a contract in which compensation depends on whether the movie is ultimately produced or not). Produced is a dummy variable that takes the value 1 if the movie is produced and 0 otherwise. We include several screenwriter reputation variables. ReputationMovies takes the value 0 if the screenwriter has not previously sold any script; 1 if the screenwriter has previously sold between 1 and 3 scripts; 2 if the screenwriter has previously sold between 4 and 10 scripts; and 3 if the screenwriter has previously sold more than 10 scripts. FirstMovie takes the value one if the screenwriter has not previously sold any script, and zero otherwise. NomOscar (AwardOscar) takes the value 1 if the screenwriter has previously been nominated for (won) an Oscar. AnyNom (AnyAward) takes the value 1 if the screenwriter has previously been nominated for (won) an award in the following festivals: Oscars, Golden Globes, British Academy Awards, Emmy Awards, European Film Awards, Cannes, Sundance, Toronto, Berlin. We also include several variables that try to capture soft information or script complexity. SoftWords equals 0 if the script logline contains up to 20 words; 1 if it contains between 21 and 30 words; 2 if it contains between 31 and 40 words; and 3 if it contains more than 40 words. InfoDummy equals 1 if additional information about the script is available. We create a script complexity index, TransparentScript, that equals 1 when the logline contains up to 20 words (i.e. SoftWords equals 0) and additional information about the script is available (i.e. InfoDummy equals 1). The genre variables are dummy variables: Action (Comedy, Drama, Romance, Thriller) takes the value 1 if the script is classified in the ''Action'' (Comedy, Drama, Romance, Thriller) category by the Spec Screenplay Sales Directory, and 0 otherwise. Other is 1 if the other five genre variables are all 0, and 0 otherwise. Compensation, soft information and type of contract data are from the Spec Screenplay Sales Directory. Reputation variables and information regarding whether the movies have been produced are from IMDb.

Panel E shows that the set of movies with high soft information has a much higher variability of prices. Panels F and G give the same statistics as Panels A and B respectively, but with the movies divided into two groups: those bought by 6 of the largest studios and those bought by other studios. Panel F shows each group separately and the differences of the means and ratios of mean prices in each category. Panel G shows only the differences of the means and ratios of mean prices. Panel H shows mean prices for large and small studios in groups that have the most hard information and least soft, or the most soft information and the least hard, together with the large-to-small ratios. The definition of high and low hard (soft) information is based on an index formed by adding several of the hard (soft) information variables. Details of the index definitions are given in the text. The asterisks in the difference panels indicate the significance of t tests for the means of the two groups being different. *, **, *** indicate significance at the 10, 5 and 1 percent levels for the t and Kruskal–Wallis tests.


perhaps surprisingly, offer more contingent contracts.31 In many cases, these differences are statistically significant (Panel F). If the hierarchical structure of a large company leads to difficulties in making judgments based on soft information, then contingent contracts may be beneficial. This finding is consistent with the banking literature. We test this idea further by looking at the relative prices paid for "harder" screenplays. Specifically, in Panel G we compare the ratios between prices paid by large studios and small studios as the availability of hard information increases. We find that large studios pay relatively more for hard information. In other words, the large studios are not only paying more for everything, but are willing to pay a premium for hard information. This further confirms the analogy to the banking papers.32 It also suggests that it matters to whom you pitch your movie, and that different types of companies have different tolerance levels for "soft" pitches.

To see the combined effect of studio size and information on prices, we use overall indices of hard and soft information.33 Panel H compares the "hardest" screenplays (high hard and low soft information) to the "softest" (low hard and high soft information) for "Large" and "Small" studios. For the "hardest" screenplays, the large-small difference of average prices is significant, and the large/small ratio is 1.57. For the "softest" group, the difference is insignificant and the ratio is 1.15. "Large" studios may pay more in both cases, but they pay relatively more for harder screenplays and less for softer ones. This also confirms the predictions of models such as Stein (2002).

The remaining tables present regressions testing the relationship between soft information, compensation and deal structure.34 In Table 2, the dependent variable is the price paid in either contingent contracts (the payment when the movie is not made) or non-contingent contracts. The results seem to confirm the findings in the means tests. In all regression specifications, hard information (experience) variables, such as the number of films the screenwriter had written or nominations for major awards, are positive and significant. Soft information variables, such as SoftWords and LogWords (which counts the number of words in the logline), have the expected negative sign but are not significant. If we compare the "softest" screenplays (first-time writers with longer loglines) to others, the interaction variable LogWords × FirstMovie is significant at 5 % or better, supporting the idea that an inexperienced writer who provides a "soft" screenplay is penalized.35

31 This effect is also seen in Tables 2 and 3: the LargeStudio variable is always positive and usually significant.
32 The one exception is for RepnMovies = 3, where the small studios pay more than the large studios. This anomaly results from a category with only seven data points, two of which are outliers.
33 The hard information index is HardIndex = RepnMovies + AnyNom, with a value between 0 and 4. Low hard information is defined as HardIndex = 0 and high hard information as HardIndex > 0. The soft information index is SoftIndex = SoftWords + SoftGenres + (1 - InfoDummy) + (1 - SoftLogMovies), with a value between 0 and 6. Low soft information is defined as SoftIndex < 3 and high soft information as SoftIndex > 2. For the 777 screenplays for which we have prices, 448 have low hard information, 329 have high hard information, 303 have low soft information, and 474 have high soft information.
34 All regressions are tested for multicollinearity and the standard errors are White heteroskedasticity adjusted.
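As a minimal illustrative sketch (not part of the original analysis), the two indices in footnote 33 and the high/low splits used in Panel H can be coded as follows; the pandas column and function names are hypothetical, assuming a DataFrame that already contains the component variables as defined above (ReputationMovies stands in for RepnMovies):

import pandas as pd

def add_information_indices(df):
    """Construct the hard/soft information indices of footnote 33 and the high/low splits."""
    out = df.copy()
    # Hard information: prior-sales category (0-3) plus any major nomination (0/1), so 0-4.
    out["HardIndex"] = out["ReputationMovies"] + out["AnyNom"]
    # Soft information: logline-length bucket (0-3) + multiple genres (0/1)
    # + no additional information (0/1) + no reference to another movie (0/1), so 0-6.
    out["SoftIndex"] = (out["SoftWords"] + out["SoftGenres"]
                        + (1 - out["InfoDummy"]) + (1 - out["SoftLogMovies"]))
    out["HighHard"] = (out["HardIndex"] > 0).astype(int)  # low hard information: HardIndex = 0
    out["HighSoft"] = (out["SoftIndex"] > 2).astype(int)  # low soft information: SoftIndex < 3
    return out

example = pd.DataFrame({"ReputationMovies": [0, 3], "AnyNom": [0, 1],
                        "SoftWords": [3, 0], "SoftGenres": [1, 0],
                        "InfoDummy": [0, 1], "SoftLogMovies": [0, 1]})
print(add_information_indices(example)[["HardIndex", "SoftIndex", "HighHard", "HighSoft"]])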


Table 2 OLS regressions for screenplay prices

[Coefficient estimates are not reproduced here. Regressors across the reported specifications include NumberMovies, ReputationMovies, NomOscar, AnyNom, LogWords, FirstMovie * LogWords, SoftWords, InfoDummy, SoftLogMovies, LargeStudio, the genre dummies (Action, Comedy, Drama, Romance, Thriller), Manager and a constant; each reported specification uses 751 observations, with t statistics in parentheses and the adjusted R-squared reported for each column.]

General compensation. This table reports OLS estimates of general compensation regressions on a set of variables that measure screenwriter reputation, script complexity, movie genre and agency relationships. The dependent variable, Price (measured in thousands of 2003 dollars), reflects the payment made to the screenwriter when he sells the script. In non-contingent contracts, the screenwriter compensation is fixed (i.e. it does not depend on whether the movie is produced or not). In contingent contracts, Price reflects the screenwriter compensation when the movie is not made. We include several screenwriter reputation variables. NumberMovies measures the number of scripts previously sold by the script's screenwriter. ReputationMovies takes the value 0 if the screenwriter has not previously sold any script; 1 if the screenwriter has previously sold between 1 and 3 scripts; 2 if the screenwriter has previously sold between 4 and 10 scripts; and 3 if the screenwriter has previously sold more than 10 scripts. FirstMovie takes the value 1 if the screenwriter has not previously sold any script, and 0 otherwise. NomOscar takes the value 1 if the screenwriter has previously been nominated for an Oscar. AnyNom takes the value 1 if the screenwriter has previously been nominated for an award in the following festivals: Oscars, Golden Globes, British Academy Awards, Emmy Award, European Film Award, Cannes, Sundance, Toronto, Berlin. We also include several variables that try to capture soft information or script complexity. LogWords counts the number of words in the script logline. SoftWords equals 0 if the script logline contains up to 20 words; 1 if it contains between 21 and 30 words; 2 if it contains between 31 and 40 words; and 3 if it contains more than 40 words. SoftLogMovies is 1 if the logline mentions at least one other movie and 0 otherwise. InfoDummy equals 1 if additional information about the script is available. We create a script complexity index, TransparentScript, that equals 1 when the logline contains up to 20 words (i.e. SoftWords equals 0) and additional information about the script is available (i.e. InfoDummy equals 1). LargeStudio is a dummy variable for the buyer of the screenplay being one of the six largest studios. The genre and agency variables are dummy variables. Action (Comedy, Drama, Romance, Thriller) takes the value 1 if the script is classified in the "Action" (Comedy, Drama, Romance, Thriller) category by the Spec Screenplay Directory, and 0 otherwise. Manager takes the value of 1 if the screenwriter has a manager, and 0 otherwise. We create interaction variables between low reputation and soft information. These variables, identified by FirstMovie * variable, take the value of the relevant soft information variable if the screenwriter has not previously sold any script, and 0 otherwise. Compensation, soft information and type of contract data are from the Spec Screenplay Sales Directory. Reputation variables and information regarding whether the movies have been produced are from IMDb. t statistics are in parentheses. *, **, *** indicate significance at the 10, 5 and 1 % levels


TransparentScript, which describes screenplays for which the logline contains up to 20 words and additional information is available, is the most significant of these variables. The lower the "soft information" content, as measured by TransparentScript, the higher the price. As a rough estimate, each previously sold script (NumberMovies) increases the price received by about 50,000 dollars, whereas a transparent screenplay is worth about $200,000 more. The seller cannot control the former element (experience), but she can and should control the latter (the pitch and the information provided). SoftLogMovies is insignificant; it is also insignificant in other specifications, perhaps because of the small number of observations. In this table and in others, having a manager seems to decrease the price received. We are not sure why, but perhaps more established writers do not need a manager. As in the means comparisons, large studios pay more here as well. We also ran regressions for the non-contingent contracts only, with similar results (tables are available from the authors).

Table 3 presents regressions in which the dependent variable is the initial compensation in contingent contracts. Harris et al. (2012) suggest that less experienced writers should receive contingent contracts, and this is also what we see in Table 1. Thus, we expect soft information to matter more for the pricing of this subset of contracts, as it indeed does. HighWords (defined as 1 if LogWords > 40 and 0 otherwise) is negative and significant in most regressions. The interaction variables generally have the right sign and are often significant, including those with SoftGenres (equal to 1 when more than one genre is assigned to the screenplay). This again supports the idea that soft information is viewed as a risk factor that lowers prices.

In summary, the results so far suggest that screenplay prices and contract design depend heavily on both the identity of the writer and the soft information contained in the description of the project at the time of sale. We turn next to the role of soft information in the success or failure of the movies that are actually produced, and consider whether the first-stage (screenplay) pricing makes economic sense.

2.2 Screenplay prices and the success of films

The distribution of movie releases from our sample is somewhat skewed compared to a random sample: there are no G-rated movies, and there are fewer PG-rated movies, fewer R-rated movies and more PG-13-rated movies than in a random sample (see Ravid 1999; De Vany and Walls 2002; and MPAA.org). Some recent work seems to show that the most profitable family movies tend to be developed in house, rather than purchased from outside screenwriters (see Palia et al. 2008). Also, films based on scripts by first-time screenwriters have lower budgets, as expected.36

35 This result is conceptually similar to the finding in the banking literature that small, opaque firms have more difficulties in obtaining credit, and prefer working with smaller banks, which can better handle soft information [see Petersen (2004) and Berger et al. (2005)].
36 Detailed tables are available from the authors.


Table 3 OLS regressions for initial compensation in contingent contracts

[Coefficient estimates are not reproduced here. Regressors across the reported specifications include NumberMovies, ReputationMovies, FirstMovie * LogWords, SoftWords, InfoDummy, FirstMovie * SoftGenres, SoftLogMovies, HighWords, FirstMovie * HighWords, LargeStudio, the genre dummies (Action, Comedy, Drama, Romance, Thriller), Manager and a constant; each reported specification uses 467 observations, with t statistics in parentheses and the adjusted R-squared reported for each column.]

Initial compensation in contingent contracts. This table reports OLS estimates of regressions of the initial compensation in contingent contracts (i.e. contracts in which the screenwriter compensation depends on whether the movie is produced or not) on a set of variables that measure screenwriter reputation, script complexity, movie genre and agency relationships. The dependent variable, Price (in thousands of 2003 dollars), measures the initial payment that the screenwriter receives in a contingent contract. If the movie is not produced, the screenwriter does not receive any additional compensation. When the movie is produced, the screenwriter is paid an additional fee. We include several screenwriter reputation variables. NumberMovies measures the number of scripts previously sold by the script's screenwriter. ReputationMovies takes the value 0 if the screenwriter has not previously sold any script; 1 if the screenwriter has previously sold between 1 and 3 scripts; 2 if the screenwriter has previously sold between 4 and 10 scripts; and 3 if the screenwriter has previously sold more than 10 scripts. FirstMovie takes the value 1 if the screenwriter has not previously sold any script, and 0 otherwise. We also include several variables that try to capture soft information or script complexity. LogWords is the number of words in the script's logline (brief description). SoftWords equals 0 if the script logline contains up to 20 words; 1 if it contains between 21 and 30 words; 2 if it contains between 31 and 40 words; and 3 if it contains more than 40 words. LogMovies is the number of other movies mentioned in the script's logline. SoftLogMovies equals 1 if the script's logline refers to any other movie, and 0 otherwise. InfoDummy equals 1 if additional information about the script is available. SoftGenres equals 1 if the qualified number of genres is greater than 1, and 0 otherwise. HighWords is 1 if the script's logline has more than 40 words (SoftWords = 3), and 0 otherwise. LargeStudio is a dummy variable for the buyer of the screenplay being one of the six largest studios. The genre and agency variables are dummy variables. Action (Comedy, Drama, Romance, Thriller) takes the value 1 if the script is classified in the "Action" (Comedy, Drama, Romance, Thriller) category by the Spec Screenplay Directory, and 0 otherwise. Manager takes the value of 1 if the screenwriter has a manager, and 0 otherwise. We create interaction variables between low reputation and soft information. These variables, identified by FirstMovie * variable, take the value of the relevant soft information variable if the screenwriter has not previously sold any script, and 0 otherwise. Compensation, soft information and type of contract data are from the Spec Screenplay Sales Directory. Reputation variables and information regarding whether the movies have been produced are from IMDb. t statistics are in parentheses. *, **, *** indicate significance at the 10, 5 and 1 % levels


Table 4 OLS regressions for the total revenues of films produced

[Coefficient estimates are not reproduced here. Columns are reported for the entire sample, for non-contingent contracts and for contingent contracts. Regressors include Price, Ln budget 1, Ln budget 2, PG, PG-13, Positive review fraction, Non-negative review fraction, Total reviews, Before 2000 × total reviews, the cast reputation measures (Cast Nominated Oscar, Cast Awarded Oscar, Cast Any Nomination, Cast Any Award, Cast Dummy Starmeter) and a constant, with t statistics in parentheses.]

Total revenues: films produced. This table reports OLS estimates of the regression of total revenues on screenwriter compensation and a set of control variables that includes movie reviews and cast reputation. All financial data are CPI-adjusted from the year of movie release to thousands of 2003 dollars, except for the script price, which is adjusted from the year of the script sale. Total Revenues equals the sum of Domestic Gross, Foreign Gross, Domestic Video Gross and Domestic DVD Gross. The first two columns report estimates for our sample of produced scripts for which Baseline FT gathers financial data. Columns three and four are restricted to scripts in which a non-contingent contract is offered to the screenwriter. Columns five and six are restricted to scripts in which a contingent contract is offered to the screenwriter; the compensation measure used is the initial payment made to the screenwriter. For each film we gather Variety reviews. Each Variety reviewer grades the movie as positive, negative, or mixed. Positive review fraction is the fraction of all reviews that are positive. Non-negative review fraction is the fraction of all reviews that are positive or mixed. Total reviews is the total number of reviews. Before 2000 is a dummy variable that is 1 for years before 2000 and 0 otherwise; it is used because there was a significant drop in the total number of reviews for all movies around that time. For each movie, we gather several measures of cast reputation: the total number of Oscar and major festival nominations and awards for the entire cast. We then create a set of dummy variables that equal one if any cast member is defined as a star for each star definition. Starmeter measures cast reputation following the opinion of IMDb readers. We classify as a star any actor/actress who in the January prior to the film's release has a Starmeter rating below 150. t statistics are in parentheses. *, **, *** indicate significance at the 10, 5 and 1 % levels


In the regressions presented in Table 4, we regress the revenues of the movies produced from screenplays in our sample (domestic, international, video and DVD, as well as total revenues and the rate of return) on control variables and the price paid for the screenplay.37 In Table 4 we report only the regressions in which the dependent variable is total revenues. The control variables that are significant are similar to those that mattered in other work, namely budget and reviews.38 The star status of the cast does not make a difference (see Ravid 1999 for similar results on a different sample, as well as De Vany 2004; Fee 2002; Elberse 2007). PG-13 films perform better than R-rated films (our default category). However, the revenues of PG-rated films are not significantly different from those of R-rated films. It may be that the small number of PG-rated films contributes to this result; in the means comparisons, PG-rated films were better performers, consistent with most other studies of the film industry (see Ravid 1999; De Vany and Walls 2002; Palia et al. 2008).

The most interesting finding for this study, which supports the thrust of the argument in Eliashberg et al. (2007), is the role of the price paid for screenplays. The price variable is positive and significant in regressions for the whole sample and for the subset with non-contingent contracts. In other words, the sales pitch matters and can affect not only the price of the screenplay but also the success of the completed product. This also suggests that screenplay buyers make rational economic decisions.39 In a way, prices paid serve as a signal for the perceived quality of the subsequent project. The fact that prices are not correlated with revenues for contingent contracts is consistent with the idea that these are the "softer" screenplays, and hence harder to judge. Results are similar for other revenue components (not reported).

The rate of return regressions are presented in Table 5. The dependent variable is the total revenues for each film divided by total costs (production costs plus promotion and advertising).40 The control variables that matter vary (see Ravid 1999). However, the rate of return increases significantly with the price paid, both for the entire sample and for non-contingent contracts.

37 See Ravid (1999) and Ravid and Basuroy (2004) for a discussion of the methodology. De Vany and Walls (1999) argue that because the movie revenue function has fat tails and can be approximated by a Levy stable distribution, which may have an infinite variance, inferences may be difficult. We should note, however, that total revenues, which we use here, tend to be less skewed than the US theatrical revenues which De Vany and Walls use, and the variance of total revenues is lower (see Ravid 1999). This latter point has an industry counterpart as well: executives tend to believe that if one source of revenues fails them, others may fill the gap. For example, the Titanic re-release in 3D in 2012 made about 58 million dollars during the first 6 weeks of its release in the US, but its international box office was 283 million. On the other hand, 21 Jump Street made 135 million dollars in the US in 9 weeks and only 49 million dollars overseas (Variety, May 21-27, 2012). Also, even in a De Vany and Walls (1999) world some inferences are possible. Finally, most recent leading papers in the area, such as Elberse and Eliashberg (2003), Einav (2007) and Chintagunta et al. (2010), use regressions in the analysis.
38 We use a dummy variable for years before 2000 because there was a significant drop in the total number of reviews for all movies around that time.
39 There is, of course, a selection bias in the set of films produced: they may be the "better" screenplays. However, our results suggest that within this group, higher prices for the screenplay are correlated with a higher rate of return on the film.
40 The results and the measures are somewhat different from Eliashberg et al. (2007); in that paper revenues include only US theatrical revenues, whereas we include revenues from all sources.
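As a minimal illustrative sketch (not the authors' code) of the kind of revenue regression reported in Table 4, the following uses synthetic stand-in data and a subset of the controls named in the table notes; the variable names are hypothetical, and the White-adjusted standard errors follow the treatment the authors describe for the earlier price regressions:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; in the paper the financial data come from Baseline,
# the reviews from Variety's "Crix Picks" column, and cast reputation from IMDb.
rng = np.random.default_rng(0)
n = 100
films = pd.DataFrame({
    "TotalRevenues": rng.lognormal(18, 1, n),   # 2003 dollars
    "Price": rng.lognormal(6, 0.5, n),          # script price, thousands of 2003 dollars
    "LnBudget": rng.normal(17, 1, n),
    "PG": rng.integers(0, 2, n),
    "PG13": rng.integers(0, 2, n),
    "PositiveReviewFraction": rng.uniform(0, 1, n),
    "TotalReviews": rng.integers(1, 10, n),
    "CastAnyNomination": rng.integers(0, 5, n),
})

model = smf.ols(
    "TotalRevenues ~ Price + LnBudget + PG + PG13 "
    "+ PositiveReviewFraction + TotalReviews + CastAnyNomination",
    data=films,
).fit(cov_type="HC0")  # White heteroskedasticity-adjusted standard errors
print(model.params)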

Table 5 OLS regression for the rate of return of films produced

[Coefficient estimates are not reproduced here. Columns are reported for the entire sample, for non-contingent contracts and for contingent contracts. Regressors include Price, Ln budget 1, Ln budget 2, PG, PG-13, Positive review fraction, Non-negative review fraction, Total reviews, Before 2000 × total reviews, the cast reputation measures (Cast Nominated Oscar, Cast Dummy Awarded Oscar, Cast Any Award, Cast Dummy Any Nomination) and a constant, with t statistics in parentheses.]

This table reports OLS estimates of the regression of the films' rate of return on screenwriter compensation and a set of control variables that includes movie reviews and cast reputation. All financial data are CPI-adjusted from the year of movie release to thousands of 2003 dollars, except for the script price, which is adjusted from the year of the script sale. The rate of return is defined as the ratio between total revenues and negative costs plus print and advertising costs. The first two columns report estimates for our sample of produced scripts for which Baseline FT gathers financial data. Columns three and four are restricted to scripts in which a non-contingent contract is offered to the screenwriter. Columns five and six are restricted to scripts in which a contingent contract is offered to the screenwriter; the compensation measure used is the initial payment made to the screenwriter. For each film we gather Variety reviews. Each Variety reviewer grades the movie as positive, negative, or mixed. Positive review fraction is the fraction of all reviews that are positive. Non-negative review fraction is the fraction of all reviews that are positive or mixed. Total reviews is the total number of reviews. Before 2000 is a dummy variable that is 1 for years before 2000 and 0 otherwise; it is used because there was a significant drop in the total number of reviews for all movies around that time. For each movie, we gather several measures of cast reputation: the total number of Oscar and major festival nominations and awards for the entire cast. We then create a set of dummy variables that equal one if any cast member is defined as a star for each star definition. Starmeter measures cast reputation following the opinion of IMDb readers. We classify as a star any actor/actress who in the January prior to the film's release has a Starmeter rating below 150. t statistics are in parentheses


This means that more expensive screenplays not only increase revenues, but actually increase profitability. A large production budget, on the other hand, seems to have a significant positive effect on revenues, while it has a negative but insignificant effect on profitability in most specifications.41 This negative effect appears in other studies as well and is often significant (see, for example, Ravid 1999). Again, for contingent contracts the price is insignificant but good reviews matter, supporting the idea that these are indeed the "softer" screenplays, which can be properly evaluated only after production is completed. Perhaps those are the screenplays that can benefit most from extensive textual analysis, as suggested in Eliashberg et al. (2007).

The findings of this section seem to show forward-looking pricing of screenplays and to cast doubt on the famous statement by screenwriter William Goldman regarding the movie industry: "Nobody knows anything." It seems that purchase prices predict the success of subsequent movies. However, this does not prove that the "best" screenplays indeed lead to the best movies, or that bad screenplays cannot succeed. Since there are no "screenplay reviews", we could not directly consider this idea using our entire sample. However, we consider the issue in two different ways.

First, we performed a rather extensive press search on this issue using different key words. The general belief in Hollywood, as quoted in many sources we found, is that a good screenplay does not necessarily result in a good film, but a "bad screenplay often dooms a project" (Jeffrey Katzenberg, quoted in the NY Times, 7/14/1983). Academy Award-winning screenwriter Pamela Wallace is quoted in The Australian, a major outlet in Australia, as saying: "you can't make a good movie out of a bad script" (The Australian, 5/19/2004). There are many similar quotes. Finally, we can mention a Variety article that asked writers for their favorite screenplays (Daily Variety, 1/4/2012). These may be considered "well reviewed" screenplays and, interestingly, they corresponded rather closely to the list of the February 2012 Academy Award nominees for best picture (6 out of 8 nominated movies are on the list). In summary, it seems that professionals in Hollywood believe that good screenplays are a necessary (but not a sufficient) condition for good movies.

We also tried to pursue a somewhat more scientific approach, i.e. to consider more precisely the correlation between screenplay quality and movie quality through the perspective of the Academy Awards and the "Raspberry Awards" ("awards" for the worst films, worst screenplays and worst actors). The table below shows, for different categories, the correlation between nominations and awards in that category and the best picture award; a similar tabulation is presented for the Razzies, the worst of the lot. For example, the top left cell shows that in 98 % of the cases the writer of the best picture was also nominated for best writing. The next cell to the right shows that in 75 % of the cases the writer of the best picture also won an Oscar for writing. Similarly, all writers of the worst film were nominated for the worst screenplay, and 82 % "won".

41 The budget does have a slightly significant and positive effect on the rate of return for movies whose scripts were purchased with contingent contracts. Perhaps these movies, often from less experienced screenwriters, need more production expenditure and promotion to be successful. The negative sign is consistent with findings by Ravid (1999), which suggest that low-budget films are more profitable.


Category            Oscar nominated (best)   Oscar won (best)   Razzies nominated (worst)   Razzies won (worst)
Writing             98                       75                 100                         82
Directing           98                       83                 97                          67
Editing             93                       50
Sound (combined)    68                       43
Best actor          63                       33                 67                          33
Best actress        38                       20                 55                          39

Percentage of nominations and awards in different categories for the Oscar-winning film over 40 years (1971-2010) and for the Razzies from their inception (1981-2010)
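As an illustration only, each cell of this tabulation is a simple share computed over the winning films of each year; the data below are synthetic placeholders, not the actual 1971-2010 awards records:

import pandas as pd

# One row per year; flags indicate whether the best-picture winner was also
# nominated for, or won, the writing award that year (synthetic example values).
winners = pd.DataFrame({
    "year": [2008, 2009, 2010],
    "writing_nominated": [1, 1, 1],
    "writing_won": [1, 0, 1],
})

# Each table cell is the percentage of winning films with the corresponding flag set.
print(round(100 * winners["writing_nominated"].mean()))  # 100 (% nominated)
print(round(100 * winners["writing_won"].mean()))        # 67 (% won)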

The conclusion from 40 years of Oscars and the entire history of the Razzies presented in this table is very clear. Best pictures are highly correlated with best writing, and worst pictures are highly correlated with bad writing. In only one case (Titanic, 1997) did the Academy not nominate the writer of the best picture, and the writer of the "worst picture" has been nominated for a Razzie every year since the inception of the awards. We should note that the correlation between acting awards and the best picture Oscar is much lower; at the low end of the spectrum, in only 20 % of the cases did the best actress act in the best picture. This illustration seems to support the view that it is the screenplay (and the director42) that drives the success of the movie.

3 Conclusions

Aesthetic evaluation is central to the film industry. However, despite the message of the annual Academy Awards ceremony, the industry does not make art for art's sake: it processes complex inputs from many different fields of art with the ultimate goal of making a profit. Our major findings highlight the dual role of soft and hard information in the successful sale of intellectual property. The screenwriter's experience and past success, which can easily be expressed in measurable terms, are important and increase screenplay prices. However, the presence of soft information in the sales pitch, as indicated by our proxies, depresses prices. That is, screenplays characterized in "softer" terms, particularly if they are written by lesser-known writers, command substantially lower prices. We view this "soft information" as a cost element or a contractual barrier, as does the banking literature. We are thus able to justify the movie industry's emphasis on short pitches as an effective selling tool. Our analysis suggests that when one pitches a product, a short, concise and simple description increases the sales price, in particular for intangible products. This may be applicable to selling other products as well, from a new drug to a new book or even a new public policy.

42 See John et al. (2012).


We also find that it matters to whom you sell. We affirm the theoretical predictions and the evidence from the banking sector that small organizations can handle "softer" pitches better, perhaps because hierarchy is much less of an issue and sellers may interact directly with the people who make the final decision. In our final empirical tables we show that this high pricing for short pitches may be justified: the screenplays that cost more were the ones that culminated in more successful movies. In other words, it seems that audiences, too, prefer simple "high concept" stories. A corollary of the previous conclusion is the tangible difficulty of marketing a complex product. If you cannot simplify and highlight the concept, not only will the sale be less likely, but there will be a measurable impact on the price received. These findings also seem to suggest that pricing is efficient, even in an industry with a complex production function that relies fundamentally on soft information. In the final analysis, "somebody knows something" (paraphrasing William Goldman's famous characterization of the movie industry: nobody knows anything). Higher priced screenplays lead to more successful movies, suggesting that executives are able to identify the more promising properties upfront.

Acknowledgments We thank Judy Chevalier and Kose John for helpful conversations on this topic. We thank seminar participants at Rutgers University, especially Darius Palia, SMU, Emory, especially George Bentson, SIFR Stockholm, especially Per Stromberg, and Cornell, for many comments, and Jose Liberti for many detailed insights. We thank Mitchell Petersen for a very insightful discussion at the Econometric Society meetings. All errors are the responsibility of the authors. We thank the International Center for Finance at the Yale School of Management and the Whitcomb Center at the Rutgers Business School for financial support.

Appendix A: variable definitions

Soft information: script complexity variables

• LogWords counts the number of words in the script logline.
• SoftWords equals 0 if the logline contains up to 20 words; 1 if it contains between 21 and 30 words; 2 if it contains between 31 and 40 words; and 3 if it contains more than 40 words.
• HighWords is 1 if the logline contains more than 40 words (SoftWords = 3) and 0 otherwise.
• InfoDummy equals 1 if additional information about the script is available.
• TransparentScript is a script complexity index that equals 1 when the logline contains up to 20 words (i.e. SoftWords equals 0) and additional information about the script is available (i.e. InfoDummy equals 1).
• SoftGenres equals 1 if the qualified number of genres is greater than 1, and 0 otherwise.
• SoftLogMovies equals 1 if the script's logline refers to any other movie, and 0 otherwise.
• SoftIndex = SoftWords + SoftGenres + (1 - InfoDummy) + (1 - SoftLogMovies), with a value between 0 and 6.

Soft information data are from the Spec Screenplay Sales Directory.
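To make the construction of the logline-based proxies concrete, here is a minimal illustrative sketch (not the authors' code); the function and argument names are hypothetical and the raw inputs (logline text, number of listed genres, additional-information flag, reference to another movie) are assumed to be available:

def soft_information_from_logline(logline, n_genres, has_extra_info, mentions_other_movie):
    """Code the script-complexity proxies defined above for a single script."""
    n_words = len(logline.split())          # LogWords
    if n_words <= 20:
        soft_words = 0
    elif n_words <= 30:
        soft_words = 1
    elif n_words <= 40:
        soft_words = 2
    else:
        soft_words = 3                      # SoftWords buckets
    return {
        "LogWords": n_words,
        "SoftWords": soft_words,
        "HighWords": int(soft_words == 3),
        "InfoDummy": int(has_extra_info),
        "TransparentScript": int(soft_words == 0 and has_extra_info),
        "SoftGenres": int(n_genres > 1),
        "SoftLogMovies": int(mentions_other_movie),
        "SoftIndex": soft_words + int(n_genres > 1)
                     + (1 - int(has_extra_info)) + (1 - int(mentions_other_movie)),
    }

# A short, single-genre logline with additional information attached is
# "transparent" under these definitions (TransparentScript = 1).
example = soft_information_from_logline(
    "A retired cop teams up with his estranged daughter to solve one last case",
    n_genres=1, has_extra_info=True, mentions_other_movie=False)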


Hard information variables

• NumberMovies is the number of scripts previously sold by the script's screenwriter.
• ReputationMovies is 0 if the screenwriter has not previously sold any script; 1 if the screenwriter has previously sold between 1 and 3 scripts; 2 if the screenwriter has previously sold between 4 and 10 scripts; and 3 if the screenwriter has previously sold more than 10 scripts.
• FirstMovie is 1 if the screenwriter has not previously sold any script, and 0 otherwise.
• NomOscar (AwardOscar) is 1 if the screenwriter has been nominated for (won) an Oscar.
• AnyNom (AnyAward) is 1 if the screenwriter has been previously nominated for an award in one of the following festivals and competitions: Oscars, Golden Globes, British Academy Awards, Emmy Award, European Film Award, Cannes, Sundance, Toronto, Berlin.
• Cast Nominated Oscar (Awarded Oscar, Any Nomination, Any Award) is the total number of Oscar and major festival nominations and awards for the entire cast.
• Cast Dummy Nominated Oscar (Awarded Oscar, Any Nomination, Any Award) is 1 if any cast member is defined as a star for each star definition, and 0 otherwise.
• Starmeter: We use the IMDb Starmeter to classify an actor as a star. Starmeter uses proprietary algorithms that take into account several measures of popularity for people and titles. The primary measure captures who or what is being viewed on the public imdb.com website. Other factors include box office receipts and user quality votes on a scale of 1-10. The rankings are updated on a weekly basis. We classify an actor as a star if he or she has a Starmeter ranking higher than 150 in the first entry in January of the year the movie is released. Our Starmeter variable counts, for each film (similar to the other cast reputation variables), the total number of cast members who were classified as stars in January of the year the movie was released.
• HardIndex = RepnMovies + AnyNom, with a value between 0 and 4.

Reputation variables data are from IMDb.

Compensation: contractual variables

• Price reflects the payment made to the screenwriter when he sells the script. In non-contingent contracts, the screenwriter compensation is fixed (i.e. it does not depend on whether the movie is produced or not). In contingent contracts, Price reflects the screenwriter compensation when the movie is not produced. All prices are adjusted from the purchase date to 2003 dollars using the Consumer Price Index.
• Cont is a dummy variable that equals 0 if the screenwriter's compensation is fixed; that is, the screenwriter receives a certain salary regardless of whether the movie is produced or not. The variable equals 1 when the contract is contingent and compensation is structured in two steps: the screenwriter receives a certain amount for selling the script, and an additional payment if the movie is actually made.
• Produced is a dummy variable that takes the value 1 if the script has been produced or is in production, and 0 otherwise.

Movie financial variables

• All financial data (revenues and costs) are adjusted from the release date to 2003 dollars using the Consumer Price Index.
• Total Revenues is the sum of Domestic Gross, Foreign Gross, Domestic Video Gross and Domestic DVD Gross.
• Rate1 equals Total Revenues divided by Negative Costs (budget).
• Rate2 equals Total Revenues divided by Negative Costs plus Domestic Print and Advertising Costs.
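A minimal illustration of the deflation and the two return ratios (the paper does not specify a particular CPI series; the function names and example numbers below are hypothetical):

def to_2003_dollars(amount, cpi_release_year, cpi_2003):
    """Deflate or inflate a nominal amount to 2003 dollars using the CPI."""
    return amount * cpi_2003 / cpi_release_year

def rates_of_return(total_revenues, negative_cost, print_and_ads):
    """Rate1 and Rate2 as defined above (all inputs already in 2003 dollars)."""
    rate1 = total_revenues / negative_cost
    rate2 = total_revenues / (negative_cost + print_and_ads)
    return rate1, rate2

# Made-up example: a film grossing 120 (million, 2003 dollars) on a budget of 40
# with 20 of print and advertising has Rate1 = 3.0 and Rate2 = 2.0.
print(rates_of_return(120.0, 40.0, 20.0))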

Additional control variables

• GenreDummies: Action (Comedy, Drama, Romance, Thriller) takes the value 1 if the script is classified in the "Action" (Comedy, Drama, Romance, Thriller) category by the Spec Screenplay Directory, and 0 otherwise. Other is 1 when all the others are 0.
• Manager takes the value of 1 if the screenwriter has a manager, and 0 otherwise.
• MPAA ratings: We obtain ratings for all films released. Interestingly, our sample of films produced tends to be somewhat skewed: there are no G-rated films, and more PG-13 films than expected.
• Variety Reviews: Each reviewer included in Variety's "Crix Picks" column grades the movie as positive, negative, or mixed. Positive Reviews equals the ratio between positive reviews and total reviews. Non-negative Reviews equals the ratio between positive plus mixed reviews and total reviews. Total Reviews equals the total number of reviews.

Appendix B: databases and variables

SPEC Screenplay Sales Directory, 2003 edition

Database description: Compiled by Hollywoodsales.com, the Spec Screenplay Sales Directory contains approximately 6 years of screenplay sales, covering 1,269 scripts. The information provided on each sale includes: title, pitch (presumably as provided by the agents of the buyer or seller), genre, agent, producer, date of sale, purchase price, and buyer. Sometimes the directory provides additional information regarding the particular screenplay. This additional information may include parties who are interested in the project, information about the screenwriter, etc. The information may be tentative (e.g. a possible director for the script) or more definite (e.g. a star actor or director who has already confirmed his participation).


Variables included in the study: Soft information (script complexity) variables: LogWords, SoftWords, HighWords, InfoDummy, TransparentScript, SoftGenres, SoftLogMovies. Compensation (contractual) variables: Price, Cont. Additional control variables: Genre Dummies, Manager.

IMDb (Internet Movie Database)

Database description: IMDb includes comprehensive information about many movies. We gather data from IMDb and IMDb-PRO on our "hard information" variables, including the screenwriter's experience and past success. To measure the screenwriter's experience, we search for the number of scripts previously sold by the screenwriter and produced. (If we find no entries in IMDb, we also search our own database to see if this writer had previously sold any screenplay.) We also use IMDb to check whether the screenwriter had been previously nominated for (had won) an award in any of the major festivals tracked by IMDb: Oscars, Golden Globes, British Academy Awards, Emmy Awards, European Film Awards, Cannes, Sundance, Toronto and Berlin. Alternatively, we use the IMDb Starmeter measure to classify an actor as a star. Starmeter uses proprietary algorithms that take into account several measures of popularity for people and titles. The primary measure captures who or what is being viewed on the public imdb.com website. Other factors include box office receipts and user quality votes on a scale of 1-10. The rankings are updated on a weekly basis. We classify an actor as a star if he or she has a Starmeter ranking higher than 150 in the first entry in January of the year the movie is released.

Variables included in the study: Hard information variables: NumberMovies, ReputationMovies, FirstMovie, NomOscar (AwardOscar), AnyNom (AnyAward), Cast Nominated Oscar (Awarded Oscar, Any Nomination, Any Award), Cast Dummy Nominated Oscar (Awarded Oscar, Any Nomination, Any Award), Starmeter, Dummy Starmeter, Produced.

MPAA.ORG

Database description: MPAA is the professional association of movie producers. Its web site includes macro movie information and ratings of movies, which we use. It also includes the six studios that make up its board. We use these for our group of large studios.

Variables included in the study: Control variables: Ratings (G, PG, PG-13, R), LargeStudio.

Baseline services

Database description: Baseline collects financial information about movies and productions. Specifically, we have the budget of each film, domestic revenues, international revenues, as well as video and DVD revenues. We use two measures of return. One is total revenues over budget; the other is total revenues over budget plus advertising and promotion expenditures.


Variables included in the study: Movie financial variables: Total Revenues, Domestic Gross, Foreign Gross, Domestic Video Gross, Domestic DVD Gross, Rate of return.

VARIETY CRIX PICKS

Database description: The trade publication Variety lists reviews for the first weekend in which a film opens in New York, Los Angeles, and Chicago (and Washington, D.C. in earlier years of the sample). In its "Crix Picks" column, Variety classifies (based on critics' own assessments) reviews as "pro", "con", or "mixed". We use these classifications to come up with consensus measures of critical opinion: Positive Reviews is the ratio of the number of "pro" reviews to the total number of reviews. Non-Negative Reviews is the ratio of non-negative reviews (i.e. pro plus mixed) to the total number of reviews.

Variables included in the study: Movie control variables: Positive Reviews, Non-negative Reviews, Total Reviews.
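An illustrative computation of these consensus measures, assuming each film's "Crix Picks" classifications are available as a list of strings; the function name is hypothetical:

def review_measures(reviews):
    """Consensus critical-opinion measures from a film's Variety review classifications."""
    total = len(reviews)
    positive = sum(r == "pro" for r in reviews)
    mixed = sum(r == "mixed" for r in reviews)
    return {
        "PositiveReviews": positive / total,               # pro / all reviews
        "NonNegativeReviews": (positive + mixed) / total,  # (pro + mixed) / all reviews
        "TotalReviews": total,
    }

print(review_measures(["pro", "mixed", "con", "pro"]))
# {'PositiveReviews': 0.5, 'NonNegativeReviews': 0.75, 'TotalReviews': 4}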

References

Aghion, P., & Tirole, J. (1997). Formal and real authority in organizations. Journal of Political Economy, 105(1), 1-29.
Banerjee, A. V., & Duflo, E. (2000). Reputation effects and the limits of contracting: A study of the Indian software industry. The Quarterly Journal of Economics, 115(3), 989-1017.
Basuroy, S., Chatterjee, S., & Ravid, S. A. (2003). How critical are critical reviews? The box office effects of film critics, star power, and budgets. Journal of Marketing, 67, 103-117.
Berger, A., Miller, N. M., Petersen, M. A., Rajan, R., & Stein, J. (2005). Does function follow organizational form? Evidence from the lending practices of large and small banks. Journal of Financial Economics, 76, 237-269.
Chevalier, J. A., & Mayzlin, D. (2006). The effect of word of mouth on sales: Online book reviews. Journal of Marketing Research, 43, 345-354.
Chintagunta, P. K., Gopinath, S., & Venkataraman, S. (2010). Online word-of-mouth effects on the offline sales of sequentially released new products: An application to the movie market. Marketing Science, 29(5), 944-957.
Chisholm, D. C. (1997). Profit sharing vs. fixed payment contracts: Evidence from the motion pictures industry. Journal of Law, Economics, & Organization, 13(1), 169-201.
Chung, K. H., & Cox, R. A. K. (1994). A stochastic model of superstardom: An application of the Yule distribution. The Review of Economics and Statistics, 76, 771-775.
De Vany, A. (2004). Hollywood economics. New York: Routledge.
De Vany, A., & Walls, W. D. (1999). Uncertainty in the movies: Can star power reduce the terror of the box office? Journal of Cultural Economics, 23(November), 285-318.
De Vany, A., & Walls, W. D. (2002). Does Hollywood make too many R-rated movies? Risk, stochastic dominance, and the illusion of expectation. Journal of Business, 75(July), 425-451.
Downs, W. M., & Russin, R. U. (2003). Screenplay: Writing the play. Los Angeles: Silman-James Press.
Einav, L. (2007). Seasonality in the motion picture industry. Rand Journal of Economics, 38(1), 127-145.
Elberse, A. (2007). The power of stars: Do star actors drive the success of movies? Journal of Marketing, 71(4), 1547-7185.
Elberse, A., & Eliashberg, J. (2003). Demand and supply dynamics for sequentially released products in international markets: The case of motion pictures. Marketing Science, 22(3), 329-354.
Eliashberg, J., Hui, S. K., & Zhang, Z. J. (2007). From story line to box office: A new approach to green-lighting movie scripts. Management Science, 53(6), 881-893.
Eliashberg, J., & Shugan, S. M. (1997). Film critics: Influencers or predictors? Journal of Marketing, 61(2), 68-78.
Fee, C. E. (2002). The costs of outside equity control: Evidence from motion picture financing decisions. Journal of Business, 75(October), 681-711.
Gill, D., & Sgroi, D. (2012). The optimal choice of pre-launch reviewer. Journal of Economic Theory, 147, 1247-1260.
Godes, D., & Mayzlin, D. (2004). Using online conversations to study word-of-mouth communication. Marketing Science, 23(4), 545-560.
Gompers, P., & Lerner, J. (1996). The use of covenants: An empirical analysis of venture partnership agreements. Journal of Law and Economics, 39(2), 463-498.
Hamlen, W. A., Jr. (1991). Superstardom in popular music: Empirical evidence. The Review of Economics and Statistics, 74, 729-733.
Harris, M., Ravid, S. A., & Basuroy, S. (2012). Contingency without moral hazard: A theory of intellectual property contracts and evidence from screenplay sales. Working paper, University of Chicago, Booth School of Business.
John, K., Ravid, S. A., & Sunder, J. (2012). Managerial ability, job matching and success: Evidence from the career path of film directors. Working paper, NYU.
Kaplan, S. N., & Strömberg, P. (2003). Financial contracting theory meets the real world: An empirical analysis of venture capital contracts. The Review of Economic Studies, 70, 281-315.
Lerch, J. (1999). 500 ways to beat the Hollywood script reader: Writing the screenplay the reader will recommend. Los Angeles: Fireside Press.
Lerner, J., & Merges, R. (1998). The control of technology alliances: An empirical analysis of the biotechnology industry. Journal of Industrial Economics, 46(2), 125-156.
Liberti, J. M., & Mian, A. (2009). Estimating the effect of hierarchies on information use. Review of Financial Studies, 22, 4057-4090.
Luo, H. (2011). When to sell your idea: Theory and evidence from the movie industry. Harvard Business School Strategy Unit Working Paper No. 12-039.
Palia, D., Ravid, S. A., & Reisel, N. (2008). Choosing to co-finance: Analysis of project-specific alliances in the movie industry. Review of Financial Studies, 21(2), 483-511.
Petersen, M. A. (2004). Information: Soft and hard. Working paper, Northwestern University.
Petersen, M. A., & Rajan, R. (1994). The benefits of firm-creditor relationships: Evidence from small business data. Journal of Finance, 49, 3-37.
Petersen, M. A., & Rajan, R. (2002). Does distance still matter? The information revolution in small business lending. Journal of Finance, 57, 2533-2570.
Rajan, U., Seru, A., & Vig, V. (2010). The failure of models that predict failure: Distance, incentives and defaults. Working paper, University of Michigan, SSRN.
Ravid, S. A. (1999). Information, blockbusters and stars. Journal of Business, 72, 463-492.
Ravid, S. A., & Basuroy, S. (2004). Executive objective function, the R-rating puzzle and the production of violent movies. Journal of Business, 77(2), 155-192.
Rosen, S. (1981). The economics of superstars. American Economic Review, 71, 845-858.
Simonoff, J., & Sparrow, I. R. (2000). Predicting movie grosses: Winners and losers, blockbusters and sleepers. Chance, 13(Summer), 15-24.
Stein, J. (2002). Information production and capital allocation: Decentralized vs. hierarchical firms. Journal of Finance, 57, 1891-1921.
Tetlock, P. C. (2007). Giving content to investor sentiment: The role of media in the stock market. Journal of Finance, 62(3), 1139-1168.
Uzzi, B. (1999). Embeddedness in the making of financial capital: How social relations and networks benefit firms seeking finance. American Sociological Review, 64, 481-505.
Uzzi, B., & Gillespie, J. (2002). Knowledge spillover in corporate financing networks: Embeddedness, network transitivity and trade credit performance. Strategic Management Journal, 23, 595-618.
