RESEARCH
Cutting Edge 1997
Technical analysis versus market efficiency: a genetic programming approach
Heather Tarbert, Heriot-Watt University; Colin Fyfe and John Marney, University of Paisley
ISBN 0-85406-864-3
TECHNICAL ANALYSIS VERSUS MARKET EFFICIENCY - A GENETIC PROGRAMMING APPROACH

Colin Fyfe, University of Paisley
John Paul Marney, University of Paisley
Heather F.E. Tarbert, Heriot-Watt University

Correspondence to: Dr. J.P. Marney, Dept. of Marketing and Management, The University of Paisley, High St., Paisley PA1 2BE
Abstract

In this paper, the authors maintain that the prevalence of technical analysis in professional investment argues that such techniques should perhaps be taken more seriously by academics. They use the new technique of genetic programming to investigate a long time series of price data for a quoted property investment company, to discern whether there are any patterns in the data which could be used for technical trading purposes. A successful buy rule is found which generates returns in excess of what would be expected from the best-fitting null time-series model. Nevertheless, this turns out to be a more sophisticated variant of the buy and hold rule, which the authors term timing-specific buy and hold. Although the rule does outperform simple buy and hold, it does not provide sufficient grounds for the rejection of the efficient market hypothesis, though it does suggest that further investigation of the specific conditions of applicability of the EMH may be appropriate.
1. Introduction: Technical Trading and the Efficient Markets Hypothesis
Economists have traditionally been highly sceptical of technical analysis and trading rules. It is normally argued that any discernible pattern in time-series price data would be eradicated almost instantaneously by the actions of rational investors who would take advantage of the profit opportunities presented by any kind of pattern. Nevertheless, technical analysis remains an important 'tool of the trade' for professional investors. For example, in a survey of dealers in the foreign exchange markets, Taylor and Allen (1992) found that 90% of respondents reported the use of some technical analysis, with 60% stating that they regarded such information as at least as important as economic fundamentals. It is also common for large investment firms to employ technical analysts alongside their fundamental counterparts (Reilly, 1994). There are, therefore, grounds for belief in a considerable and apparently irreconcilable difference between the views of investment professionals, who emphasise the importance of technical analysis for gauging market sentiment and market psychology, and the views of academics doing research in economics and finance, who emphasise the rationality of investors and the efficiency of markets. However, evidence of a willingness on the part of academics to take technical analysis more seriously is to be found in a number of recent papers in which it is argued that there may be grounds for reconsidering strict adherence to the proposition that the efficient markets hypothesis holds in all financial asset markets at all times. Arthur et al. (1996, pp. 1-2) cite various studies which show that, contrary to the efficient market hypothesis, trading volume and price volatility are large, and that both show significant autocorrelation. Other anomalous results found by various researchers include the fact that stock returns show small but significant serial correlations, and that when investors apply full rationality to the market, they lack incentives to trade and gather information.
Brock et al. (1992), in an extensive survey of the literature, report studies which show other phenomena inconsistent with the efficient markets hypothesis (hereafter EMH), including: i) negative serial correlation for individual stocks and portfolios over a three to ten year period; ii) predictable return reversals on a monthly basis for individual securities; iii) negative serial correlation for lags up to two months and positive serial correlation for longer lags for individual securities; iv) positive serial correlation for weekly returns on portfolios and indices, and negative serial correlation for individual stocks; v) negative serial correlations for individual securities' weekly and daily returns; vi) for many different asset markets, monthly returns that were positively correlated, while those over a three to five year period were negatively correlated.

The EMH has also been attacked by indisputably successful traders such as George Soros, who stated that "...this [efficient markets theory] interpretation of the way financial markets operate is severely distorted... It may seem strange that a patently false theory should gain such widespread acceptance." (1994, quoted in Arthur et al. (1996), p. 2).

Finally, and most importantly, a number of studies suggest that it is possible to make excess profits from technical trading. Brock et al. (1992) investigate stock index trading using two test trading strategies, moving average and trading range break. They find that these generate significant returns which cannot be explained by any standard bootstrapping models. However, in common with most tests of trading rules, the rules are implemented on an ex post basis, with the concomitant risk of data mining: the 'best' or most common rules are imposed on the data and appear to demonstrate excess returns, yet the possibility of bias in the choice of rule remains. The preferred strategy for testing technical trading rules is to formulate the rules ex ante, thus eliminating potential bias. Neely et al. (1996) use ex ante rules generated by genetic programming and find strong evidence of economically significant out-of-sample excess returns to the rules for each of six exchange rates. By excess, they mean that the returns are greater than would be expected for bearing systematic risk. They also report various studies which suggest significant profits from trading rules in both spot and futures foreign exchange markets using a variety of rules, including filter rules, head and shoulders rules and moving average rules. In some, though not all, of these studies, bootstrapping is again used to establish that the returns are greater than would be expected from a number of null models.

Notwithstanding the foregoing discussion, the authors would wish to emphasise that they are not undertaking the current exercise with a view to refutation or thoroughgoing critique of the efficient markets hypothesis. The EMH is a useful hypothesis which few could deny tends to hold on the average and in general. Nevertheless, the authors would argue that sufficient evidence has now built up to justify investigating the possibility that the EMH may not hold as a universal phenomenon. Arthur et al. (1996) provide a rationale for arguing for the possible existence of successful trading rules, while not necessarily rejecting the potential validity of the EMH.
They point out that in models of share price formation, once one drops the assumption of homogeneous investors, investors no longer have a common objective model of expectations formation and no way of anticipating other agents' expectations of dividends, and intelligent agents cannot form expectations in a determinate deductive way. In their own words,

...Instead, traders continually hypothesize - continually explore - expectational models, buy or sell on the basis of those that perform the best, and confirm or discard these according to their performance... Within a regime where investors explore alternative expectational models at a low rate, the market settles into the rational-expectations equilibrium of the efficient market literature. Within a regime where the rate of exploration of alternative expectations is higher, the market self-organizes into a complex pattern. It acquires a rich psychology, technical trading emerges,
temporary bubbles and crashes occur, and asset prices and trading volume show statistical features - in particular, GARCH behavior - characteristic of actual market data. (1996, p.2)

In other words, either an EMH regime or a trading-rules regime can hold, depending on the amount of guessing that investors have to do concerning the expectations of their fellow investors. If the trading rules are picking up patterns which are not detected by these models, then, firstly, this suggests that there is some justification for technical trading; secondly, economists may have to rethink their notions of market efficiency. Therefore, in view of the fact that genetic programming techniques have discovered technical rules which do appear to generate excess returns in the stock and foreign exchange markets, it was decided to apply a similar methodology to a property share price series, to test whether this type of stock can be characterised either as an efficient market or, alternatively, as 'regime two' of the two alternatives presented by Arthur et al., in which the market self-organises into a complex pattern with market psychology, bubbles and crashes, etc.
2. Methodology
The approach taken in this paper, following Neely et al., is to use genetic programming techniques to identify optimal trading rules ex ante for a number of property companies, and then to examine the performance of the pre-identified trading rules out of sample. There are several advantages to using the genetic programming technique. First, the solutions are not constrained to any pre-identified 'successful' strategy. Second, we do not need to rely on any statistical distributions, constraints or probabilistic results. Third, because the programme searches across the whole solution set, the number and complexity of technical rules generated and tested is far greater than in previous work in this area, which is generally limited to testing a small number of imposed rules. This advantage is important because a standard criticism of the rejection of the usefulness of technical analysis is that the (academic) researcher has failed to test the 'correct' technical rule.

In order to confirm that the trading rules uncovered by genetic programming techniques are not simply exploiting known properties of the data, the trading rules are compared with data generated by means of bootstrapping using three well-known plausible statistical models: the random walk, the AR(1) and a variant of ARCH. The bootstrap technique allows artificial Land Securities returns series to be generated under these null models and comparisons to be made relative to the actual series. Thus, the returns generated from the Land Securities series, conditioned on the fittest genetic rules, can be compared to the returns produced from applying the same rule to the simulated series.

2.1 Genetic Programming

Genetic programming is a new inductive technique which relies on the massive information processing capabilities of modern computers. Broadly speaking, in genetic programming, the computer generates hundreds, thousands or even millions of potential solutions to a particular problem. It then isolates the 'best' solution, or a group of best solutions, through a process which is analogous to biological evolution and natural selection. The distinctive feature of the genetic programming approach is that there is a minimum of explicit pre-programming and pre-structuring of how the computer solves the problem. The computer is given a fitness criterion and searches for programmes which are highly fit, in the sense of providing a close match to the fitness criterion. More specifically, the computer searches for those sequences of operations, amongst the many which are tested, which provide solutions most closely matching the basic fitness criterion. Once the set of valid operators has been defined, the genetic programme is initiated with the random generation of hundreds or thousands of trial solutions of various shapes and sizes. In the words of the leading authority in this area, Koza (1992),

We then genetically breed the population of computer programs using the Darwinian principle of survival and reproduction of the fittest and the genetic operation of recombination (crossover). Both reproduction and recombination are applied to computer programs selected from the population in proportion to their observed fitness in solving
the given problem. Over a period of many generations, we breed populations of computer programs that are ever more fit in solving the problem at hand. (Koza, 1992, p.4)

The reader should bear in mind that the technique is much more inductive and heuristic than standard optimisation techniques. The idea is not to search for some global optimum which can in principle be found in some well defined mathematical domain, and then to treat the problem set as if it were a member of that domain space. Rather, the basic approach is to provide a domain-independent solution which is fit for the task. This is why the phrase 'fit solution' has been used above, rather than 'best solution'. In other words, if successful, the genetic program will have provided a program which fits the criterion which has been provided. Although it is the best, or one of the best, of the many solutions which have been examined, it cannot in many cases be proved to be the global optimum. Indeed, to require global optimisation of the genetic program would be to miss the point somewhat. The approach is inspired by biological evolution, in which the adaptations of animals and plants are not necessarily absolutely optimal, but are 'fit', in the sense that the species survives; what is more, the solution is eminently practical for the purpose in hand.

In the context of our own particular study, the specified fitness criterion is the profit from buying and selling shares in Land Securities, a property investment company. The computer will examine a multitude of ways in which identifiable patterns in the data can be linked to the maximisation of return on buying and selling this stock.

The basic unit of analysis in the genetic programming approach is the S-expression or tree. The following illustration shows such a tree.
Figure 1: Tree representation of the expression X*(Y+Z)
The S-expression or tree illustrated above represents the expression X*(Y+Z). The reader should note the following points about the tree.

Firstly, the circles and ellipses, or nodes, each represent either a function or a terminal. A function is a member of the set of legitimate operators, which is defined at the preliminary stage of the genetic programming exercise. The set of functions which will be used in the present exercise includes the following: a) arithmetic operations - plus, minus, times, quotient, absolute, average, max, min, lag; b) Boolean operations - and, or, not, greater than, less than; c) conditional operators - if-then, if-then-else; d) numerical constants; e) Boolean constants such as True and False.

Secondly, the terminals represent the numerical or symbolic inputs which are to be transformed using the function set in order to provide the desired output. These nodes are called terminals, as they constitute the end of a particular branch. In our example, the terminals are X, Y and Z.

Thirdly, the branches from any particular node represent the arguments to that particular function. Thus, in our example, the arguments to the 'plus' function are Y and Z. The arguments to the 'times' function are, firstly, the value associated with X from one of the two branches associated with this node and, secondly, the result of the evaluation of 'plus' (including its associated arguments) from the other branch associated with this node.

Fourthly, the order of priority of evaluation is from the bottom of the tree up. Hence, Y and Z are evaluated first, and the values of these arguments are passed to 'plus'. This is followed by the evaluation of X and 'plus' (i.e. (Y+Z)), as they are at the same hierarchical level in the tree. The values of these arguments are then passed to 'times'. Finally, 'times' is evaluated (i.e. X*(Y+Z)).
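To make this bottom-up evaluation concrete, the following sketch shows one way such a tree might be represented and evaluated in code. It is purely illustrative: the nested-tuple representation, the operator set and all names are our assumptions, not anything specified in the paper.

```python
# A minimal sketch of bottom-up S-expression evaluation, assuming a
# nested-tuple representation: (function_name, arg1, arg2). Terminals are
# strings looked up in an environment; constants are plain numbers.

OPERATORS = {
    "plus": lambda a, b: a + b,
    "minus": lambda a, b: a - b,
    "times": lambda a, b: a * b,
}

def evaluate(tree, env):
    """Evaluate a tree from the bottom up: the branches (arguments) are
    evaluated first and their values passed to the function at the node."""
    if isinstance(tree, (int, float)):       # numerical constant
        return tree
    if isinstance(tree, str):                # terminal, e.g. "X"
        return env[tree]
    op, *args = tree                         # function node with branches
    return OPERATORS[op](*(evaluate(a, env) for a in args))

# The tree of figure 1, representing X*(Y+Z):
tree = ("times", "X", ("plus", "Y", "Z"))
print(evaluate(tree, {"X": 2, "Y": 3, "Z": 4}))  # prints 14, i.e. 2*(3+4)
```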
The computer generates a multitude of such trees, each representing a potential solution to the problem. Solutions which prove to have better than average fitness will tend increasingly to dominate the set of solutions, while those with less than average fitness will tend to disappear. This takes place in two ways: firstly, through reproduction of extant trees in proportion to fitness; secondly, through experimentation with new trees through sexual recombination, whereby extant trees are split and recombined in various ways. The resulting 'children' will then prosper or fail depending on their fitness. (An illustrative sketch of these two operations is given at the end of this section.)

It should be noted that genetic programming is not only a new technique but represents a considerable departure from many of the conventional norms of scientific methodology. In this respect, Koza (1992) makes the following points. i) GP will not necessarily produce precise analytic solutions; it is designed to produce working solutions which may involve a degree of approximation. ii) Concomitantly, GP solutions cannot normally be justified on the basis of pure analytical logic; that is, it is not necessarily the case that a logically determinate path can be traced from the original problem input to the solution to the problem. iii) Path dependency as opposed to central tendency: the outcome of an evolutionary process depends on the initial starting conditions and may have no well defined terminal point. iv) Parsimony: GP solutions are not parsimonious unless parsimony is built into the fitness criterion. Again, the analogy can be drawn with natural evolution: there are numerous apparently redundant features in biology (for example, the human appendix) which either had a use in the evolutionary past, have some function which is as yet unsuspected, or allow a sufficient degree of flexibility to cope with a certain amount of environmental change.

In addition, it should also be noted that there is a minimum of pre-structuring of the data. The only real structure is the fitness criterion and the constraints imposed by the definition of the set of legitimate operators.
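The sketch promised above illustrates fitness-proportionate reproduction and subtree crossover on the nested-tuple trees of the previous sketch. Again, every name here is an assumption for illustration only.

```python
import random

# Illustrative sketches of the two operations described above: reproduction
# in proportion to observed fitness, and recombination (crossover), in which
# two parent trees are split at random nodes and recombined into a child.

def reproduce(population, fitnesses):
    """Select a tree for reproduction with probability proportional to fitness."""
    return random.choices(population, weights=fitnesses, k=1)[0]

def all_paths(tree, path=()):
    """Index paths of every node in a nested-tuple tree."""
    paths = [path]
    if isinstance(tree, tuple):
        for i, arg in enumerate(tree[1:], start=1):
            paths.extend(all_paths(arg, path + (i,)))
    return paths

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def set_subtree(tree, path, new):
    if not path:
        return new
    i = path[0]
    return tree[:i] + (set_subtree(tree[i], path[1:], new),) + tree[i + 1:]

def crossover(parent_a, parent_b):
    """A randomly chosen subtree of parent_b replaces a randomly chosen
    subtree of parent_a, yielding one 'child' tree."""
    cut_a = random.choice(all_paths(parent_a))
    cut_b = random.choice(all_paths(parent_b))
    return set_subtree(parent_a, cut_a, get_subtree(parent_b, cut_b))

a = ("times", "X", ("plus", "Y", "Z"))
b = ("plus", ("minus", "X", "Y"), "Z")
print(crossover(a, b))  # e.g. ('times', 'X', ('minus', 'X', 'Y'))
```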
3. Data
The raw data were obtained from the FT information service and comprise nominal daily prices in pounds sterling (adjusted for capital changes) for Land Securities Plc for the period January 2nd 1980 to 14th July 1997. This gives a total of 4514 available data points, allowing for weekends and 67 points which were unavailable. Table 3.1 sets out the summary statistics for the price and returns series. Returns are calculated as the log differences of the daily price series. Returns do not appear to be skewed, although they are leptokurtic. Prices are both slightly skewed and mildly leptokurtic. Autocorrelations are large and highly significant for the price series, but insignificant for the returns.
Table 3.1: Summary statistics for Price and Returns

Series      Price         Returns
N           4514          4513
Mean        4.426**       0.000424*
Std Error   0.02667       0.000186
Minimum     1.362         -0.104302
Maximum     9.540         0.083979
Skewness    0.15347**     0.00475
Kurtosis    -1.0045**     3.70798**
r(1)        0.9379573**   -0.2384726
r(2)        0.8747364**   0.0579812
r(3)        0.8164943**   -0.0722510
r(4)        0.7574571**   0.0040658
r(5)        0.6968934**   0.0490903
Q-stat(5)   35.6076**     1.5388

Returns are measured as the log difference of the daily price. ** indicates significance at the 1% level; * indicates significance at the 5% level. r(i) is the estimated ith autocorrelation. Q-stat(5) is the Ljung-Box Q-statistic for the first five lags.
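As an illustration of the computations behind Table 3.1, the sketch below derives log returns, the estimated autocorrelations r(i) and the Ljung-Box Q-statistic. The price series here is a synthetic stand-in, not the Land Securities data, and the code is ours rather than the authors'.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0.0004, 0.012, 4514)))  # stand-in series

returns = np.diff(np.log(prices))  # returns as log differences of daily prices

def autocorr(x, lag):
    """Estimated lag-th autocorrelation."""
    d = x - x.mean()
    return np.sum(d[lag:] * d[:-lag]) / np.sum(d * d)

n = len(returns)
r = [autocorr(returns, k) for k in range(1, 6)]
# Ljung-Box Q-statistic for the first five lags
q5 = n * (n + 2) * sum(rk ** 2 / (n - k) for k, rk in enumerate(r, start=1))

print(f"mean = {returns.mean():.6f}, r(1) = {r[0]:.4f}, Q(5) = {q5:.2f}")
```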
We then implemented some of the standard tests for unit roots. The presence of a unit root in prices suggests that the hypothesis of weak form efficiency cannot be rejected, because a unit root implies that shocks are permanent and consequently unpredictable in the long run. Given the relatively low power of such tests, a range of unit root tests was applied, including the Phillips-Perron (1988) procedure, the Sims (1988) Bayesian odds ratio test, the Perron (1989) test for a unit root in a series with a structural break, the Cochrane (1988) variance ratio test, and the Campbell-Mankiw (1987) decomposition test. The results, tabulated in table 3.2, demonstrate that the null of a unit root cannot be rejected.
Table 3.2: Results of Unit Root Tests

Procedure          Statistic, lag = k
Phillips-Perron    -0.0151 (4.59), k = 48
Perron             -1.764 (-3.72), k = 51

Lag       Cochrane (Vk)   Campbell-Mankiw (A(1))
k = 1     1.05730         1.02993
k = 2     1.06753         1.00390
k = 3     1.08417         1.04294
k = 4     1.10325         1.05207
k = 5     1.10751         1.05410
k = 10    1.11470         1.05752
k = 25    1.10628         1.05351
k = 100   1.07840         1.04016
k = 500   1.26693         1.12881
The Sims test generated a t² statistic of 0.010 against the Schwarz (asymptotic) limit critical value of 15.354, with a marginal alpha of 0.9977, indicating that the evidence is strongly supportive of the unit root hypothesis. All of the unit root tests appear to support the hypothesis that a unit root is present in the price series, and we should therefore be able to conclude that this market is weak form price efficient.
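The paper applies the Phillips-Perron, Sims, Perron, Cochrane and Campbell-Mankiw procedures. As a simple illustration of the same idea, the sketch below runs an augmented Dickey-Fuller test (not one of the paper's tests) on a stand-in series, using the statsmodels library.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
log_prices = np.cumsum(rng.normal(0.0004, 0.012, 4514))  # simulated random walk

stat, pvalue, *_ = adfuller(log_prices)
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
# A large p-value means the null of a unit root cannot be rejected,
# consistent with weak-form price efficiency.
```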
4. Results from genetic programming
In this case, the genetic rules were generated and evolved over the second 500 data points (roughly equivalent to 1982-1984), because it is necessary to have the potential for long moving averages or data lags. Two hundred rules are initially generated by the programme and these rules are then run through the data, with the weakest rules dying out and the fitter rules reproducing or re-combining as described above. This step is one trial. At the end of each trial, the fittest rule is examined. If the return generated by this rule is negative, then it is discarded; otherwise the rule is saved for further comparison against rules generated by further trials. At the end of the process, the fittest rules are ranked according to the specified fitness criterion - maximise returns subject to a 1% transaction cost per buy or sell decision.¹ The two fittest rules are represented diagrammatically in appendix 2, and the remainder of this paper concentrates on the performance of the fittest rule. The fittest rule, which emerged in several trials, has the following representation:

Buy = if abs(price20 - 0.182134) > price
Sell = if abs(price - price3) > 1 and price = price16 - abs(price - (average15 + price13))

The tree diagrams for these expressions are given in appendix 2. The interpretation of the performance of these rules is rather problematic, because over the 16 year validation period these signals generate 760 buy decisions but the sell signal is never triggered (although it comes close at the 1987 crash). If we define the naive buy and hold strategy as buying at the price of the opening of trade (data500) and selling at the end of our period (data4514), then the total return on the original share is 335.5385%. However, the above rule does not trigger a buy decision until data727 which, if the share is held for the same period, generates a total appreciation of 407.8378% (not including the extra interest which could have been earned while waiting for the buy signal). We term this type of buy and hold rule timing specific. If the data are (arbitrarily) split into periods of 500 points (approximately two year periods), then the rule produces the following outcomes (buying at the first buy signal):
Table 4.1: Returns for two-year periods

Period        Buy and hold   Buy at first signal   Excess return
500 - 1000    20.9958%       41.0811%              20.0853%
1000 - 1500   12.0301%       12.8788%              0.8487%
1500 - 2000   50.5017%       50.0%                 -0.5017%
2000 - 2500   12.4444%       22.5182%              10.0737%
2500 - 3000   7.3123%        7.3123%               0.0%
3000 - 3500   22.2836%       30.9665%              8.6829%
3500 - 4000   -5.1205%       -12.0012%             -6.8907%
4000 - 4500   46.74%         51.3093%              4.5633%
Mean          20.8985%       25.5081%              4.6077%
The rule performs reasonably well over the entire sample with the exception of one period. Therefore, at first sight, a rule does appear to exist which will generally allow profits to be made above the naive buy and hold return, thus providing evidence against the EMH. Brown (1991) notes that "a random walk requires.... the return parameters to be the same (over time) with or without the information subset". However, since the rule is basically saying "buy at a specific point in time and then hold", it could be argued that the genetic programme has actually found that the market cannot be beaten, and that the market is therefore efficient.

¹ The initial trials were run without a transaction cost and resulted in huge numbers of buy/sell decisions - the fittest rule generated excess profits of 40% but needed 240 trades to achieve this outcome.
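Before turning to the bootstrap comparisons, the sketch below illustrates how the fittest buy rule might be applied to a price series. It assumes price20 denotes the price lagged 20 observations (one plausible reading; the paper does not spell out its lag notation), and the data are a synthetic stand-in.

```python
import numpy as np

# Illustrative application of the fittest buy rule:
#     buy if abs(price20 - 0.182134) > price
# where price20 is assumed here to be the price lagged 20 observations.

def buy_signals(prices, lag=20, constant=0.182134):
    prices = np.asarray(prices)
    signals = np.zeros(len(prices), dtype=bool)
    signals[lag:] = np.abs(prices[:-lag] - constant) > prices[lag:]
    return signals

rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0.0004, 0.012, 1000)))  # stand-in series

sig = buy_signals(prices)
if sig.any():
    first = int(np.argmax(sig))  # index of the first buy signal
    # timing-specific buy and hold: buy at the first signal, hold to the end
    print(f"first buy at t = {first}, "
          f"return = {prices[-1] / prices[first] - 1:.2%}")
```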
5. Comparison of genetic programming rules against specific null statistical models
The performance of the trading rules generated by the genetic algorithms is perhaps significant for two reasons. First, there appears to be some evidence that the market for this particular share is inefficient. Although the share is not directly comparable with the direct property market (but see appendix 3), it can nevertheless be tentatively concluded that if the property share market is inefficient, then the direct property market would be even more likely to be inefficient at pricing assets. Second, given that there appear to be opportunities to earn excess profits, it is important to check that the genetically bred trading rules are not merely exploiting the known statistical properties of the underlying data. This can be achieved via the use of the 'bootstrapping' methodology.

Bootstrapping is a technique whereby computer simulations are generated from a range of suitable null models for the stock price. Each of the null models is fitted to the original series by OLS or maximum likelihood methods (as appropriate), and this enables extraction of the fitted parameter estimates and the residual series. These fitted parameters are then used to produce new data sets by re-sampling with replacement from the residual distribution. The simulations are run 500 times² to provide a reasonable approximation of the conditional return distribution under each of the null models.

² Brock et al. (1992) demonstrate that there is no significant increase in the reliability of estimated p-values for numbers of simulations greater than 500.

The chosen models are representative of a variety of popular models of stock prices, but are limited to the random walk with drift, the AR(1) model, and the AR(1) combined with autoregressive conditional heteroscedasticity (ARCH) model. The ARCH model, as formulated by Engle (1982), is used to capture excess volatility of the second moment. It was decided to implement both the random walk and the AR(1) because of the well documented difficulties in distinguishing between data containing a unit root and data with a root close to one. We could not reject the hypothesis of ARCH effects in the price and returns series, and thus decided to simulate the series using a variant of this data generating process. A simple AR(1), ARCH(3) model appeared to maximise the log likelihood function (as measured against alternative higher order ARCH and GARCH processes), and thus this DGP was the preferred null model.

The Random Walk model

This model can be represented by

Rt = µ + εt,   εt iid(0, σ²)

where Rt = log(Pt) - log(Pt-1). This model is simulated by random sampling with replacement from the original returns series; these samples are then transformed into a new price series. The new series have the same expected drift, variance and unconditional distribution as the original.
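A minimal sketch of this random walk bootstrap follows: resample the observed returns with replacement and rebuild an artificial price path. The input series is a synthetic stand-in for the Land Securities prices, and the code is illustrative rather than the authors' implementation.

```python
import numpy as np

def bootstrap_random_walk(prices, rng):
    returns = np.diff(np.log(prices))                # Rt = log(Pt) - log(Pt-1)
    resampled = rng.choice(returns, size=len(returns), replace=True)
    return prices[0] * np.exp(np.cumsum(resampled))  # back to a price series

rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0.0004, 0.012, 4514)))  # stand-in series

# 500 artificial series under the random walk null, against which the
# returns of the genetic rule can be compared
simulations = [bootstrap_random_walk(prices, rng) for _ in range(500)]
print(len(simulations), simulations[0][:3])
```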
The AR(1) model

This model can be written as
Pt = α + ρPt-1 + εt,   |ρ| < 1
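A corresponding illustrative sketch for the AR(1) null, under the same bootstrap logic described above (fit by OLS, then re-sample residuals with replacement); the data and all names are stand-ins.

```python
import numpy as np

# Fit Pt = alpha + rho*Pt-1 + et by OLS, then rebuild artificial series by
# resampling the fitted residuals with replacement.

def fit_ar1(p):
    X = np.column_stack([np.ones(len(p) - 1), p[:-1]])
    coeffs, *_ = np.linalg.lstsq(X, p[1:], rcond=None)
    alpha, rho = coeffs
    resid = p[1:] - X @ coeffs
    return alpha, rho, resid

def simulate_ar1(alpha, rho, resid, p0, n, rng):
    shocks = rng.choice(resid, size=n, replace=True)
    path = np.empty(n)
    prev = p0
    for t in range(n):
        prev = alpha + rho * prev + shocks[t]
        path[t] = prev
    return path

rng = np.random.default_rng(0)
p = 4.4 + np.cumsum(rng.normal(0, 0.05, 4514))  # stand-in price series

alpha, rho, resid = fit_ar1(p)
artificial = simulate_ar1(alpha, rho, resid, p[0], len(p), rng)
print(f"alpha = {alpha:.5f}, rho = {rho:.5f}")
```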
Appendix 2: Tree diagrams of the fittest rules

BUY RULE (tree): abs(price20 - 0.182134) > price

SELL RULE (tree): [abs(price - price3) > 1] AND [price = price16 - abs(price - (average15 + price13))]
Appendix 3

There is no time series transaction based indicator for the property market. However, since Land Securities' real asset base comprises only UK commercial property, the share price of this company may serve as a suitable proxy for current underlying property market conditions, since a change in property market fundamentals, and thus potential earnings, should lead to a change in share price.

Comparison of asset base of Land Securities and IPD (1996)

The table below illustrates the relative mix of property types.

                        IPD       Land Securities Ltd
Value of Property (m)   52,651    5,760
% Retail                45.6      52.7
% Office                37.4      38.8
% Industrial            14.0      5.7
% Other                 3.0       2.8

Comparison of Summary Statistics (based on annual data)

Statistic   L.S.   IPD-Tot   IPD-Cap   FT-100
Mean        12.9   10.0      3.2       18.5
St. Err     6.3    2.5       2.5       3.2
Median      3.9    9.1       2.2       20.4
St. Dev     26.2   10.0      10.1      12.8
Kurtosis    1.2    0.1       0.0       0.6
Skewness    1.1    0.1       0.2       -1.0

Correlations (annual)

Although correlations are an imperfect guide to relative co-movements between series, they nevertheless provide a rough guide as to how the series in question are related.

                  Land Securities   IPD-Capital   IPD-Total   FT-100
Land Securities   1.0               0.43          0.49        0.37
IPD-Capital       0.43              1.0           0.99        0.09
IPD-Total         0.49              0.99          1.0         0.12
FT-100            0.37              0.09          0.12        1.0
References

Arthur, W.B., Holland, J., LeBaron, B., Palmer, R. and Tayler, P. (1996), 'An artificial stock market', mimeo, Santa Fe Institute, Santa Fe, California.
Brock, W., Lakonishok, J. and LeBaron, B. (1992), 'Simple technical trading rules and the stochastic properties of stock returns', The Journal of Finance, Vol. 47, No. 5.
Brown, G.R. (1991), Property Investment and the Capital Markets, E&FN Spon.
Campbell, J. and Mankiw, G. (1987), 'Are output fluctuations transitory?', Quarterly Journal of Economics, pp. 319-33.
Cochrane, J. (1988), 'How big is the random walk in GNP?', Journal of Political Economy, Vol. 96, pp. 893-920.
Engle, R.F. (1982), 'Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation', Econometrica, Vol. 50, pp. 987-1007.
Koza, J. (1992), Genetic Programming, MIT Press.
Neely, C., Weller, P. and Dittmar, R. (1996), 'Is technical analysis in the foreign exchange market profitable? A genetic programming approach', CEPR Discussion Paper No. 1480, September 1996.
Perron, P. (1989), 'The Great Crash, the oil price shock, and the unit root hypothesis', Econometrica, Vol. 57, No. 6 (November 1989), pp. 1361-1401.
Phillips, P.C.B. and Perron, P. (1988), 'Testing for a unit root in time series regressions', Biometrika, Vol. 75, pp. 335-346.
Reilly, F.K. (1994), Investment Analysis and Portfolio Management, 4th ed., Dryden Press.
Sims, C.A. (1988), 'Bayesian skepticism on unit root econometrics', Journal of Economic Dynamics and Control, Vol. 12, pp. 463-474.
Taylor, M. and Allen, H. (1992), 'The use of technical analysis in the foreign exchange market', Journal of International Money and Finance, Vol. 11, pp. 304-14.