Principal Components: Mathematics, Example, Interpretation
36-350: Data Mining
18 September 2009
Reading: Section 3.6 in the textbook.
Contents

1 Mathematics of Principal Components
  1.1 Minimizing Projection Residuals
  1.2 Maximizing Variance
  1.3 More Geometry; Back to the Residuals

2 Example: Cars
  2.1 A Recipe

3 PCA Cautions

At the end of the last lecture, I set as our goal to find ways of reducing the dimensionality of our data by means of linear projections, and of choosing projections which in some sense respect the structure of the data. I further asserted that there was one way of doing this which was far more useful and important than others, called principal components analysis, where "respecting structure" means "preserving variance". This lecture will explain that, explain how to do PCA, show an example, and describe some of the issues that come up in interpreting the results.

PCA has been rediscovered many times in many fields, so it is also known as the Karhunen-Loève transformation, the Hotelling transformation, the method of empirical orthogonal functions, and singular value decomposition. (Strictly speaking, singular value decomposition is a matrix-algebra trick which is used in the most common algorithm for PCA.) We will call it PCA.

1 Mathematics of Principal Components
We start with p-dimensional feature vectors, and want to summarize them by projecting down into a q-dimensional subspace. Our summary will be the projection of the original vectors on to q directions, the principal components, which span the sub-space.

There are several equivalent ways of deriving the principal components mathematically. The simplest one is by finding the projections which maximize the variance. The first principal component is the direction in feature space along which projections have the largest variance. The second principal component is the direction which maximizes variance among all directions orthogonal to the first. The kth component is the variance-maximizing direction orthogonal to the previous k − 1 components. There are p principal components in all.

Rather than maximizing variance, it might sound more plausible to look for the projection with the smallest average (mean-squared) distance between the original vectors and their projections on to the principal components; this turns out to be equivalent to maximizing the variance.

Throughout, assume that the data have been "centered", so that every feature has mean 0. If we write the centered data in a matrix X, where rows are objects and columns are features, then X^T X = nV, where V is the covariance matrix of the data. (You should check that last statement!)
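To see concretely what that check involves, here is a small R sketch, using simulated data that are not part of the notes. One detail to watch: R's cov() divides by n − 1 rather than n, so we rescale to match the 1/n convention used here.

# Numerical check that t(X) %*% X = n * V for centered data.
set.seed(1)                                   # simulated data, purely for illustration
X <- matrix(rnorm(200 * 3), nrow = 200)       # 200 objects, 3 features
X <- scale(X, center = TRUE, scale = FALSE)   # center each feature at mean 0
n <- nrow(X)
V <- cov(X) * (n - 1) / n                     # covariance matrix with the 1/n convention
max(abs(t(X) %*% X - n * V))                  # should be zero, up to round-off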
1.1 Minimizing Projection Residuals
We'll start by looking for a one-dimensional projection. That is, we have p-dimensional feature vectors, and we want to project them on to a line through the origin. We can specify the line by a unit vector along it, w, and then the projection of a data vector x_i on to the line is x_i · w, which is a scalar. (Sanity check: this gives us the right answer when we project on to one of the coordinate axes.) This is the distance of the projection from the origin; the actual coordinate in p-dimensional space is (x_i · w)w. The mean of the projections will be zero, because the mean of the vectors x_i is zero:
\[
\frac{1}{n}\sum_{i=1}^{n} (x_i \cdot w)\, w = \left(\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) \cdot w\right) w \qquad (1)
\]
If we try to use our projected or image vectors instead of our original vectors, there will be some error, because (in general) the images do not coincide with the original vectors. (When do they coincide?) The difference is the error or residual of the projection. How big is it? For any one vector, say x_i, the squared length of the residual is
\[
\begin{aligned}
\|x_i - (w \cdot x_i) w\|^2 &= \|x_i\|^2 - 2(w \cdot x_i)^2 + (w \cdot x_i)^2 \|w\|^2 && (2)\\
&= \|x_i\|^2 - (w \cdot x_i)^2 && (3)
\end{aligned}
\]
since w is a unit vector.
(This is the same trick used to compute distance matrices in the solution to the first homework; it's really just the Pythagorean theorem.) Add those residuals up across all the vectors:
\[
\begin{aligned}
RSS(w) &= \sum_{i=1}^{n} \left( \|x_i\|^2 - (w \cdot x_i)^2 \right) && (4)\\
&= \left( \sum_{i=1}^{n} \|x_i\|^2 \right) - \sum_{i=1}^{n} (w \cdot x_i)^2 && (5)
\end{aligned}
\]
The term in the big parenthesis doesn't depend on w, so it doesn't matter for trying to minimize the residual sum-of-squares. To make RSS small, what we must do is make the second sum big, i.e., we want to maximize
\[
\sum_{i=1}^{n} (w \cdot x_i)^2 \qquad (6)
\]
Equivalently, since n doesn't depend on w, we want to maximize
\[
\frac{1}{n}\sum_{i=1}^{n} (w \cdot x_i)^2 \qquad (7)
\]
which we can see is the sample mean of (w · x_i)^2. The mean of a square is always equal to the square of the mean plus the variance:
\[
\frac{1}{n}\sum_{i=1}^{n} (w \cdot x_i)^2 = \left( \frac{1}{n}\sum_{i=1}^{n} x_i \cdot w \right)^2 + \mathrm{Var}\left[ w \cdot x_i \right] \qquad (8)
\]
Since we've just seen that the mean of the projections is zero, minimizing the residual sum of squares turns out to be equivalent to maximizing the variance of the projections.

(Of course in general we don't want to project on to just one vector, but on to multiple principal components. If those components are orthogonal and have the unit vectors w_1, w_2, ..., w_k, then the image of x_i is its projection into the space spanned by these vectors,
\[
\sum_{j=1}^{k} (x_i \cdot w_j)\, w_j \qquad (9)
\]
The mean of the projection on to each component is still zero. If we go through the same algebra for the residual sum of squares, it turns out that the cross-terms between different components all cancel out, and we are left with trying to maximize the sum of the variances of the projections on to the components. Exercise: Do this algebra.)
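The equivalence is easy to see numerically. Here is a rough R sketch, on simulated data that are not from the notes: for any unit vector w, the residual sum of squares per vector plus the variance of the projections adds up to the same constant, so making one small makes the other large.

# For any unit vector w, RSS(w)/n + Var[w . x_i] is the same constant,
# so minimizing the residuals is the same as maximizing the variance.
set.seed(2)
X <- scale(matrix(rnorm(500 * 4), ncol = 4), center = TRUE, scale = FALSE)
n <- nrow(X)
check <- function(w) {
  w <- w / sqrt(sum(w^2))              # normalize to a unit vector
  proj <- X %*% w                      # projections x_i . w
  rss <- sum((X - proj %*% t(w))^2)    # residual sum of squares
  varn <- mean(proj^2) - mean(proj)^2  # variance of the projections (1/n convention)
  c(RSS.over.n = rss / n, variance = varn, total = rss / n + varn)
}
check(c(1, 0, 0, 0))   # project on the first coordinate axis
check(rnorm(4))        # a random direction: "total" comes out the same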
1.2 Maximizing Variance
Accordingly, let's maximize the variance! Writing out all the summations grows tedious, so let's do our algebra in matrix form. If we stack our n data vectors into an n × p matrix, X, then the projections are given by Xw, which is an n × 1 matrix. The variance is
\[
\begin{aligned}
\sigma^2_{w} &= \frac{1}{n}\sum_i (x_i \cdot w)^2 && (10)\\
&= \frac{1}{n}(Xw)^T (Xw) && (11)\\
&= \frac{1}{n} w^T X^T X w && (12)\\
&= w^T \frac{X^T X}{n} w && (13)\\
&= w^T V w && (14)
\end{aligned}
\]
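As a sanity check on equation (14), here is a short verification in R, again on simulated data rather than anything from the notes.

# The 1/n variance of the projections Xw equals w^T V w, with V = t(X) %*% X / n.
set.seed(3)
X <- scale(matrix(rnorm(300 * 5), ncol = 5), center = TRUE, scale = FALSE)
V <- t(X) %*% X / nrow(X)
w <- rnorm(5); w <- w / sqrt(sum(w^2))   # a random unit vector
proj <- X %*% w
c(mean(proj^2), t(w) %*% V %*% w)        # the two numbers agree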
We want to choose a unit vector w so as to maximize σ²_w. To do this, we need to make sure that we only look at unit vectors; that is, we need to constrain the maximization. The constraint is that w · w = 1, or w^T w = 1. This needs a brief excursion into constrained optimization.

We start with a function f(w) that we want to maximize. (Here, that function is w^T V w.) We also have an equality constraint, g(w) = c. (Here, g(w) = w^T w and c = 1.) We re-arrange the constraint equation so its right-hand side is zero, g(w) − c = 0. We now add an extra variable to the problem, the Lagrange multiplier λ, and consider u(w, λ) = f(w) − λ(g(w) − c). This is our new objective function, so we differentiate with respect to both arguments and set the derivatives equal to zero:
\[
\begin{aligned}
\frac{\partial u}{\partial w} &= 0 = \frac{\partial f}{\partial w} - \lambda \frac{\partial g}{\partial w} && (15)\\
\frac{\partial u}{\partial \lambda} &= 0 = -(g(w) - c) && (16)
\end{aligned}
\]
That is, maximizing with respect to λ gives us back our constraint equation, g(w) = c. At the same time, when we have the constraint satisfied, our new objective function is the same as the old one. (If we had more than one constraint, we would just need more Lagrange multipliers. To learn more about Lagrange multipliers, read Boas (1983) or, more compactly, Klein (2001). Thanks to Ramana Vinjamuri for pointing out a sign error in an earlier version of this paragraph.)

For our projection problem,
\[
\begin{aligned}
u &= w^T V w - \lambda (w^T w - 1) && (17)\\
\frac{\partial u}{\partial w} &= 2 V w - 2 \lambda w = 0 && (18)\\
V w &= \lambda w && (19)
\end{aligned}
\]
Thus, the desired vector w is an eigenvector of the covariance matrix V, and the maximizing vector will be the one associated with the largest eigenvalue λ. This is good news, because finding eigenvectors is something which can be done comparatively rapidly (see Principles of Data Mining p. 81), and because eigenvectors have many nice mathematical properties, which we can use as follows.

We know that V is a p × p matrix, so it will have p different eigenvectors. (Exception: if n < p, there are only n distinct eigenvectors and eigenvalues.) We know that V is a covariance matrix, so it is symmetric, and then linear algebra tells us that the eigenvectors must be orthogonal to one another. Again because V is a covariance matrix, it is a positive matrix, in the sense that x · Vx ≥ 0 for any x. This tells us that the eigenvalues of V must all be ≥ 0.

The eigenvectors of V are the principal components of the data. We know that they are all orthogonal to each other from the previous paragraph, so together they span the whole p-dimensional feature space. The first principal component, i.e. the eigenvector which goes with the largest value of λ, is the direction along which the data have the most variance. The second principal component, i.e. the second eigenvector, is the direction orthogonal to the first component with the most variance. Because it is orthogonal to the first eigenvector, their projections will be uncorrelated. In fact, projections on to all the principal components are uncorrelated with each other. If we use q principal components, our weight matrix w will be a p × q matrix, where each column will be a different eigenvector of the covariance matrix V. The eigenvalues will give the total variance described by each component. The variance of the projections on to the first q principal components is then \(\sum_{i=1}^{q} \lambda_i\).
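To connect the algebra to computation, here is a short R sketch on simulated data (my own illustration, not code from the notes): the eigenvectors of the covariance matrix are the principal components, they agree with what prcomp returns (up to sign), and the projections on to different components are uncorrelated.

set.seed(4)
A <- matrix(c(2, 1, 0,  0, 1, 0,  0, 0, 0.5), nrow = 3)    # mixes the features together
X <- scale(matrix(rnorm(400 * 3), ncol = 3) %*% A, center = TRUE, scale = FALSE)
V <- cov(X)                        # rescaling V by a constant does not change its eigenvectors
eig <- eigen(V)
eig$vectors                        # columns are the principal components
prcomp(X)$rotation                 # same directions, possibly with flipped signs
round(cor(X %*% eig$vectors), 3)   # projections are uncorrelated: an identity matrix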
1.3 More Geometry; Back to the Residuals
Suppose that the data really are q-dimensional. Then V will have only q positive eigenvalues, and p − q zero eigenvalues. If the data fall near a q-dimensional subspace, then p − q of the eigenvalues will be nearly zero.

If we pick the top q components, we can define a projection operator P_q. The images of the data are then XP_q. The projection residuals are X − XP_q or X(I − P_q). (Notice that the residuals here are vectors, not just magnitudes.) If the data really are q-dimensional, then the residuals will be zero. If the data are approximately q-dimensional, then the residuals will be small. In any case, we can define the R^2 of the projection as the fraction of the original variance kept by the image vectors,
\[
R^2 \equiv \frac{\sum_{i=1}^{q} \lambda_i}{\sum_{j=1}^{p} \lambda_j} \qquad (20)
\]
just as the R^2 of a linear regression is the fraction of the original variance of the dependent variable retained by the fitted values.

The q = 1 case is especially instructive. We know, from the discussion of projections in the last lecture, that the residual vectors are all orthogonal to the projections. Suppose we ask for the first principal component of the residuals. This will be the direction of largest variance which is perpendicular to the first principal component. In other words, it will be the second principal component of the data. This suggests a recursive algorithm for finding all the principal components: the kth principal component is the leading component of the residuals after subtracting off the first k − 1 components. In practice, it is faster to use eigenvector-solvers to get all the components at once from V, but we will see versions of this idea later.

This is a good place to remark that if the data really fall in a q-dimensional subspace, then V will have only q positive eigenvalues, because after subtracting off those components there will be no residuals. The other p − q eigenvectors will all have eigenvalue 0. If the data cluster around a q-dimensional subspace, then p − q of the eigenvalues will be very small, though how small they need to be before we can neglect them is a tricky question. (One tricky case where this can occur is if n < p. Any two points define a line, and three points define a plane, etc., so if there are fewer data points than features, it is necessarily true that they fall on a low-dimensional subspace. If we look at the bags-of-words for the Times stories, for instance, we have p ≈ 4400 but n ≈ 102. Finding that only 102 principal components account for all the variance is not an empirical discovery but a mathematical artifact.)
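Here is a rough R sketch of these last two ideas, on simulated data rather than anything from the notes: the R^2 of equation (20) computed from the eigenvalues, the residual vectors X(I − P_q), and a check that the leading component of the residuals is the next principal component.

set.seed(5)
X <- scale(matrix(rnorm(200 * 4), ncol = 4) %*% diag(c(3, 2, 1, 0.5)),
           center = TRUE, scale = FALSE)
eig <- eigen(cov(X))
lambda <- eig$values
cumsum(lambda) / sum(lambda)            # R^2 for q = 1, 2, 3, 4 components
q <- 1
Wq <- eig$vectors[, 1:q, drop = FALSE]  # p x q weight matrix
Pq <- Wq %*% t(Wq)                      # projection operator on to the top q components
res <- X %*% (diag(4) - Pq)             # residual vectors X(I - P_q)
sum(res^2) / sum(X^2)                   # fraction of variance lost, i.e. 1 - R^2
# Leading component of the residuals = second component of the data
# (possibly with its sign flipped):
cbind(eigen(cov(res))$vectors[, 1], eig$vectors[, 2])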
2 Example: Cars
Today's dataset is 388 cars from the 2004 model year, with 18 features (from http://www.amstat.org/publications/jse/datasets/04cars.txt, with incomplete records removed). Seven features are binary indicators; the other eleven features are numerical (Table 1). Table 2 shows the first few lines from the data set. PCA only works with numerical features, so we have eleven of them to play with.

Variable     Meaning
Sports       Binary indicator for being a sports car
SUV          Indicator for sports utility vehicle
Wagon        Indicator for station wagon
Minivan      Indicator for minivan
Pickup       Indicator for pickup truck
AWD          Indicator for all-wheel drive
RWD          Indicator for rear-wheel drive
Retail       Suggested retail price (US$)
Dealer       Price to dealer (US$)
Engine       Engine size (liters)
Cylinders    Number of engine cylinders
Horsepower   Engine horsepower
CityMPG      City gas mileage
HighwayMPG   Highway gas mileage
Weight       Weight (pounds)
Wheelbase    Wheelbase (inches)
Length       Length (inches)
Width        Width (inches)

Table 1: Features for the 2004 cars data.

Sports,SUV,Wagon,Minivan,Pickup,AWD,RWD,Retail,Dealer,Engine,Cylinders,Horsepower,CityMPG,HighwayMPG,Weight,Wheelbase,Length,Width
Acura 3.5 RL,0,0,0,0,0,0,0,43755,39014,3.5,6,225,18,24,3880,115,197,72
Acura MDX,0,1,0,0,0,1,0,36945,33337,3.5,6,265,17,23,4451,106,189,77
Acura NSX S,1,0,0,0,0,0,1,89765,79978,3.2,6,290,17,24,3153,100,174,71

Table 2: The first few lines of the 2004 cars data set.

There are two R functions for doing PCA, princomp and prcomp, which differ in how they do the actual calculation. (princomp actually calculates the covariance matrix and takes its eigenvalues; prcomp uses a different technique called "singular value decomposition".) The latter is generally more robust, so we'll just use it.

cars04 = read.csv("cars-fixed04.dat")
cars04.pca = prcomp(cars04[,8:18], scale.=TRUE)

The second argument to prcomp tells it to first scale all the variables to have variance 1, i.e., to standardize them. You should experiment with what happens with this data when we don't standardize.

We can now extract the loadings or weight matrix from the cars04.pca object. For comprehensibility I'll just show the first two components.

> round(cars04.pca$rotation[,1:2],2)
              PC1   PC2
Retail      -0.26 -0.47
Dealer      -0.26 -0.47
Engine      -0.35  0.02
Cylinders   -0.33 -0.08
Horsepower  -0.32 -0.29
CityMPG      0.31  0.00
HighwayMPG   0.31  0.01
Weight      -0.34  0.17
Wheelbase   -0.27  0.42
Length      -0.26  0.41
Width       -0.30  0.31

This says that all the variables except the gas-mileages have a negative projection on to the first component. This means that there is a negative correlation between mileage and everything else. The first principal component tells us about whether we are getting a big, expensive gas-guzzling car with a powerful engine, or whether we are getting a small, cheap, fuel-efficient car with a wimpy engine.

The second component is a little more interesting. Engine size and gas mileage hardly project on to it at all. Instead we have a contrast between the physical size of the car (positive projection) and the price and horsepower (negative). Basically, this axis separates mini-vans, trucks and SUVs (big, not so expensive, not so much horse-power) from sports-cars (small, expensive, lots of horse-power).

To check this interpretation, we can use a useful tool called a biplot, which plots the data, along with the projections of the original features, on to the first two components (Figure 1). Notice that the car with the lowest value of the second component is a Porsche 911, with pick-up trucks and mini-vans at the other end of the scale. Similarly, the highest values of the first component all belong to hybrids.
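One way to check claims like these, assuming the course data file is available so that cars04 and cars04.pca exist as above, is to look directly at the projections (the "scores"). Note that the overall sign of each component is arbitrary, so "lowest" and "highest" can swap between R versions.

scores <- cars04.pca$x        # projections of each car on to the components
head(sort(scores[, 2]), 5)    # cars at one extreme of the second component
tail(sort(scores[, 1]), 5)    # cars at one extreme of the first component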
2.1 A Recipe
There is a more-or-less standard recipe for interpreting PCA plots, which goes as follows. To begin with, find the first two principal components of your data. (I say "two" only because that's what you can plot; see below.) It's generally a good idea to standardize all the features first, but not strictly necessary.

Coordinates: Using the arrows, summarize what each component means. For the cars, the first component is something like size vs. fuel economy, and the second is something like sporty vs. boxy.

Correlations: For many datasets, the arrows cluster into groups of highly correlated attributes. Describe these attributes. Also determine the overall level of correlation (given by the R^2 value). Here we get groups of arrows like the two MPGs (unsurprising), retail and dealer price (ditto) and the physical dimensions of the car (maybe a bit more interesting).

Clusters: Clusters indicate a preference for particular combinations of attribute values. Summarize each cluster by its prototypical member. For the cars data, we see a cluster of very similar values for sports-cars, for instance, slightly below the main blob of data.

Funnels: Funnels are wide at one end and narrow at the other. They happen when one dimension affects the variance of another, orthogonal dimension. Thus, even though the components are uncorrelated (because they are perpendicular) they still affect each other. (They are uncorrelated but not independent.) The cars data has a funnel, showing that small cars are similar in sportiness, while large cars are more varied.

Voids: Voids are areas inside the range of the data which are unusually unpopulated. A permutation plot is a good way to spot voids. (Randomly permute the data in each column, and see if any new areas become occupied; see the sketch below.) For the cars data, there is a void of sporty cars which are very small or very large. This suggests that such cars are undesirable or difficult to make.

Projections on to the first two or three principal components can be visualized; however they may not be enough to really give a good summary of the data. Usually, to get an R^2 of 1, you need to use all p principal components. (The exceptions are when some of your features are linear combinations of the others, so that you don't really have p different features, or when n < p.)
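Here is a rough sketch of the permutation idea in R, assuming cars04 and cars04.pca from the example above; it is one way to implement it, not the only one. Shuffling each column independently keeps the marginal distributions but destroys the correlations, so regions occupied only by the shuffled points are candidate voids.

scrambled <- apply(cars04[, 8:18], 2, sample)       # permute each feature separately
scrambled <- scale(scrambled)                       # same standardization as the PCA
shadow <- scrambled %*% cars04.pca$rotation[, 1:2]  # project on to the fitted components
plot(cars04.pca$x[, 1:2], pch = 16, cex = 0.5)
points(shadow, col = "grey", pch = 1, cex = 0.5)    # grey-only regions suggest voids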
8
10
0.1
Wheelbase Length
0.0
Cadillac XLR
10
0
Nissan 1500 LT GMCChevrolet Yukon XLSuburban 2500 SLTEnvoy Isuzu Ascender S Quest S GMC XUVTown SLE Chrysler andSE Country LX Ford Freestar Width Mercury Grand Marquis GSCE Ford Crown Victoria Nissan Quest SE Toyota Sienna Ford Expedition 4.6 XLT Ford Crown Victoria LX Nissan Pathfinder Armada SE Dodge Grand Caravan SXT Dodge Caravan SE Mercury Monterey Mercury Grand Marquis LS Premium Honda Odyssey LX Kia Odyssey Sedona LX Mercury GrandHonda Marquis LSLuxury Ultimate Toyota Sienna XLE Ford Crown Victoria Sport Toyota Sequoia SR5 EX Lincoln Town Car Ultimate LLX Chrysler Pacifica Dodge Durango SLT Chrysler Town and Country Dodge Limited Intrepid Pontiac Montana EWB SE Lincoln Town Car Signature Chrysler Concorde LX Oldsmobile Silhouette Chevrolet Astro GMC SafariIntrepid SLE ES GL Lincoln Town1500 Car Ultimate Dodge Mercury Marauder Chrysler Concorde LXi GMC Yukon SLE Chevrolet Impala Mercury Sable GS four-door Ford Taurus LX LS Pontiac Grand Prix GT1 Chevrolet Monte Carlo Buick Park Avenue Mercury Sable GS Ford Taurus SE Lincoln Navigator Luxury Buick Pontiac LeSabre Custom four-door Grand Prix GT2 Hummer H2 Chevrolet Impala LS Weight Monte Carlo SS Buick Century Custom Toyota Camry Solara Kia Optima LX SE Mercury Sable LS Premium Chevrolet Tahoe LT Chevrolet Dodge Stratus SXT Chrysler Sebring Ford Taurus SES Duratec Buick Rendezvous CX Dodge Stratus Buick LeSabre Limited Buick Regal LS Buick Park Avenue Ultra Nissan Altima Pontiac Montana Ford Explorer XLT V6 Chevrolet Impala SS Chrysler 300M Volkswagen Touareg V6 Suzuki Verona LXSSE LE Mercury Mountaineer Chevrolet TrailBlazer LT Toyota Camry Pontiac Aztekt Mitsubishi Montero Honda Pilot LX Lincoln LS V6 Luxury Chevrolet Malibu Hyundai XG350 Oldsmobile Alero GX Chevrolet Venture Saturn VUE Kia Optima LX V6 Chrysler 300M Special Edition Hyundai Sonata GLS Kia Sorento LX Chevrolet Malibu Maxx Toyota 4Runner SR5 V6 Hyundai XG350 LLS Toyota Camry Solara SE V6L300 Hyundai Sonata LX Honda Accord LX Mitsubishi Endeavor Mazda MPV ES Buick Regal GS Chrysler Sebring Touring Cadillac Escaladet Buick Rainier Saturn Nissan Murano Honda Accord Mazda6 i EXIon1 Suzuki XL-7 EX Lincoln LS V6 Premium Saturn Nissan Maxima SE Chevrolet Cavalier two-door Toyota Avalon XL Mitsubishi Galant Infiniti FX35 Jeep Liberty Sport Toyota Camry Solara SLE V6 two-door Honda Accord LX V6 Camry LE V6 Cadillac Deville Saturn Ion2 quad coupe Saturn L300-2 Chevrolet Cavalier LS Chevrolet Cavalier four-door Chevrolet Malibu LS Nissan Maxima SL Nissan Altima SE BMW 525i four-door Ford Mustang Saturn Ion3 quad coupe Pontiac Sunfire 1SA Pontiac Grand Am GT Oldsmobile Alero GLS Saturn Ion2 Lincoln Aviator Ultimate Acura 3.5 RL Toyota Camry XLE V6 Cadillac CTS VVT Subaru Legacy LSpectra Toyota Avalon XLS Acura MDX Volvo XC90 T6 Suzuki Forenza Shatch Saturn Ion3 Hyundai Santa Fe GLS Chevrolet Malibu LT Volkswagen Passat GLS four-door Subaru Outback Kia Hyundai Elantra GLS Pontiac Sunfire 1SC Toyota Prius Toyota Highlander V6 Acura 3.5 RL Navigation Mitsubishi Diamante LS Infiniti FX45 Mazda Tribute DX 2.0 Isuzu Rodeo S Volkswagen Passat GLS Kia Spectra GS Honda CR-V LX Hyundai Elantra GT Volvo XC70 Hyundai Elantra GT Dodge Neon SE Volvo S80 2.5T Kia Spectra GSX Mitsubishi Outlander Jeep Cherokee Laredo Lincoln LSGrand V8 Sport Honda Civic DX Suzuki Forenza EX Honda Civic LX Dodge Neon SXT Chrysler Sebring convertible Honda Civic Honda Civic Cadillac Deville DTS Nissan Xterra XE Toyota Corolla CE Subaru Legacy GT Jaguar S-Type 3.0 Toyota Matrix Toyota Corolla SHXHybrid HighwayMPG 
Pontiac Vibe Cadillac Seville Toyota Corolla LE Volvo S80 2.9 Nissan Sentra 1.8 Lexus ES 330Audi Nissan Pathfinder SE Subaru Outback Limited Sedan Chrysler Sebring Limited convertible Engine Ford Focus ZTW Infiniti G35 Sport Coupe Honda Element LX Honda Civic EX BMW 530i four-door Audi A6 3.0 Infiniti G35 Infiniti I35 Honda Accord EX V6 LincolnGX LS V8 Ultimate Nissan Sentra 1.8 S Ford Focus LX A4 1.8T Jaguar X-Type 2.5 Lexus 470 CityMPG Audi A6 3.0 Quattro BMW X3 3.0i Audi A6 3.0 Avant Quattro Infiniti G35 AWD Volvo S60 2.5 Ford Focus ZX3 Saab 9-5 Arc Lexus RX 330 Acura TSX Cadillac SRX V8 Volkswagen Jetta GL Jetta GLS Mercedes-Benz C230 SportZX5 Acura TLV6 Chrysler GS 300 Ford Focus Ford Focus Toyota Land Volkswagen VolkswagenLexus Passat GLX 4MOTION PTfour-door Cruiser Lexus LX Cruiser 470 Nissan Sentra SE-R Suzuki Aeno SSE Aveo Land Rover Freelander SE Ford Escape XLS Subaru Impreza 2.5 RS Subaru Outback H6 Audi A4 1.8T convertible BMW 325i Audi A6 2.7 Turbo Quattro four-door Jaguar X-Type 3.0 Hyundai Tiburon GT V6 BMW 325xi Suzuki Aerio LX Saab 9-3 Arc Sport Chevrolet Jaguar XJ8 Land Rover Discovery SE Volvo C70 LPT BMW 325Ci Subaru Outback H-6 VDC Mercedes-Benz C320 Sport two-door Mercedes-Benz E320 four-door Chrysler PT Cruiser Limited S60 T5 Subaru Forester BMW 325xi Sport Volkswagen Golf Saab 9-5 Aero four-door Hyundai Accent Mercedes-Benz C240 RWD Saab 9-5 Aero Mercedes-Benz E320 BMW Jaguar S-Type 4.2 Mercedes-Benz C240 Audi745Li A8 L four-door Quattro Ford Mustang GT Premium Kia Rio manual Volvo V40 Hyundai Accent GT GL Lexus LS 430 Saab 9-3 Aero Ford Focus SVT Volvo S80 T6 Mercedes-Benz C240 AWD Volvo S40 auto Suzuki Aerio SX Infiniti Q45 Luxury Ford Thunderbird Deluxe Audi A4 3.0 Kia Rio Cinco Mitsubishi Eclipse GTS Audi A4 3.0 Quattro manual Mercedes-Benz ML500 Toyota Celica Porsche Cayenne S Toyota RAV4 Cylinders Honda Civic Si convertible Audi A4 IS 3.0 Quattro auto Volkswagen GTI Mitsubishi Eclipse Spyder GT Infiniti M45 BMW 325Ci convertible Mercedes-Benz S430 Mitsubishi Lancer Evolution Volvo C70 HPT Lexus IS 300 manual Lexus 300 SportCross Volkswagen New Beetle GLS Acura RSX 745i four-door Lexus IS 300 auto Scion xB Mercedes-Benz C320 Sport four-door Subaru Impreza WRX Land Rover BMW Range Rover HSE Volvo S60 R Nissan 350Z BMW 330i BMW X5 4.4i Jaguar Vanden Plas Saab 9-3 Arc Volkswagen Jetta GLI Volkswagen New Beetle GLS 1.8T BMW 330xi Chevrolet Aveo LS BMW 545iA four-door AudiVolkswagen A6 4.2 Quattro BMW 330Ci Mercedes-Benz C320 Volkswagen Passat W8 4MOTION Chrysler PTVitara Cruiser GT Audi A4 3.0 convertible Saab 9-3 Aero convertible Toyota Echo two-door manual Lexus GS 430 Suzuki LX Toyota Toyota Echo four-door Audi A4 3.0 Quattro convertible Mercedes-Benz CLK320 Passat W8 Echo two-door auto Honda Insight Scion xA Chevrolet Tracker Nissan 350Z Enthusiast BMW 330Ci convertible Mercedes-Benz E500E500 four-door Mercedes-Benz Subaru Impreza WRX STi Mini Cooper Chevrolet Corvette Mercedes-Benz S500 Audi TT 1.8 JaguarXJR S-Type R Jaguar BMW Z4 convertible 2.5i two-door Toyota MR2 Spyder Mini Cooper S Chevrolet Corvette BMW convertible M3 Audi TT 1.8 Quattro Mercedes-Benz CLK500 Audi S4 Quattro Audi S4 Avant Quattro Chrysler Crossfire Mercedes-Benz G500 Honda S2000 Jeep Wrangler Mazda MX-5 Miata Audi TT 3.2Sahara BMW M3BMW convertible Z4 convertible 3.0i two-door Mazda MX-5 Miata LS Porsche Mercedes-Benz C32 AMG Boxster Lexus SC 430 Horsepower Mercedes-Benz SLK230 Jaguar XK8 coupe Mercedes-Benz Audi RS 6 CL500 Porsche Boxster S Jaguar XK8 convertible
-0.1
Jaguar XKR coupe Jaguar XKR convertible Mercedes-Benz SL500 Mercedes-Benz SLK32 AMG Porsche 911 Targa Acura Porsche 911NSX Carrera 4S Porsche 911SCarrera
Retail Dealer
-0.2
Mercedes-Benz CL600
-20
PC2
-10
0
-20
-10
-30
Mercedes-Benz SL55 AMG
-30
-0.3
Mercedes-Benz SL600
Porsche 911 GT2
-0.3
-0.2
-0.1
0.0
0.1
PC1
biplot(cars04.pca,cex=0.4) Figure 1: “Biplot” of the 2004 cars data. The horizontal axis shows projections on to the first principal component, the vertical axis the second component. Car names are written at their projections on to the components (using the coordinate scales on the top and the right). Red arrows show the projections of the original features on to the principal components (using the coordinate scales on the bottom and on the left).
9
How many principal components you should use depends on your data, and how big an R^2 you need. In some fields, you can get better than 80% of the variance described with just two or three components. A sometimes-useful device is to plot 1 − R^2 versus the number of components, and keep extending the curve until it flattens out.
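For the cars example, assuming cars04.pca from above, such a plot takes only a few lines; the component variances are the squared sdev values stored by prcomp.

lambda <- cars04.pca$sdev^2     # variances (eigenvalues) of the components
r2 <- cumsum(lambda) / sum(lambda)
plot(seq_along(r2), 1 - r2, type = "b",
     xlab = "number of components q", ylab = "1 - R^2")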
3 PCA Cautions
Trying to guess at what the components might mean is a good idea, but like many good ideas it's easy to go overboard. Specifically, once you attach an idea in your mind to a component, and especially once you attach a name to it, it's very easy to forget that those are names and ideas you made up; to reify them, as you might reify clusters. Sometimes the components actually do measure real variables, but sometimes they just reflect patterns of covariance which have many different causes. If I did a PCA of the same features but for, say, 2007 cars, I might well get a similar first component, but the second component would probably be rather different, since SUVs are now common but don't really fit along the sports car/mini-van axis.

A more important example comes from population genetics. Starting in the late 1960s, L. L. Cavalli-Sforza and collaborators began a huge project of mapping human genetic variation: determining the frequencies of different genes in different populations throughout the world. (Cavalli-Sforza et al. (1994) is the main summary; Cavalli-Sforza has also written several excellent popularizations.) For each point in space, there are a very large number of features, which are the frequencies of the various genes among the people living there. Plotted over space, this gives a map of that gene's frequency. What they noticed (unsurprisingly) is that many genes had similar, but not identical, maps. This led them to use PCA, reducing the huge number of features (genes) to a few components. Results look like Figure 2. They interpreted these components, very reasonably, as signs of large population movements. The first principal component for Europe and the Near East, for example, was supposed to show the expansion of agriculture out of the Fertile Crescent. The third, centered in the steppes just north of the Caucasus, was supposed to reflect the expansion of Indo-European speakers towards the end of the Bronze Age. Similar stories were told of other components elsewhere.

Unfortunately, as Novembre and Stephens (2008) showed, spatial patterns like this are what one should expect to get when doing PCA of any kind of spatial data with local correlations, because that essentially amounts to taking a Fourier transform, and picking out the low-frequency components. (Remember that PCA re-writes the original vectors as a weighted sum of new, orthogonal vectors, just as Fourier transforms do. When there is a lot of spatial correlation, values at nearby points are similar, so the low-frequency modes will have a lot of amplitude, i.e., carry a lot of the variance. So first principal components will tend to be similar to the low-frequency Fourier modes.) They simulated genetic diffusion processes, without any migration or population expansion, and got results that looked very like the real maps (Figure 3). This doesn't mean that the stories of the maps must be wrong, but it does undercut the principal components as evidence for those stories.
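The following R sketch is my own toy illustration of this point, not the simulation Novembre and Stephens ran: draw samples of a feature vector whose entries are smoothly correlated along a line of "locations", with no other structure, and the leading principal components come out looking like low-frequency waves.

set.seed(8)
p <- 100                                         # number of locations along the line
n <- 50                                          # number of independent replicates
d <- abs(outer(1:p, 1:p, "-"))                   # distances between locations
Sigma <- exp(-d / 10)                            # correlation decays smoothly with distance
X <- matrix(rnorm(n * p), n, p) %*% chol(Sigma)  # n draws from N(0, Sigma)
pc <- prcomp(X)
matplot(pc$rotation[, 1:3], type = "l", lty = 1,
        xlab = "location", ylab = "component weight")  # wave-like patterns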
References

Boas, Mary L. (1983). Mathematical Methods in the Physical Sciences. New York: Wiley, 2nd edn.

Cavalli-Sforza, L. L., P. Menozzi and A. Piazza (1994). The History and Geography of Human Genes. Princeton: Princeton University Press.

Klein, Dan (2001). "Lagrange Multipliers without Permanent Scarring." Online tutorial. URL http://dbpubs.stanford.edu:8091/~klein/lagrange-multipliers.pdf

Novembre, John and Matthew Stephens (2008). "Interpreting principal component analyses of spatial population genetic variation." Nature Genetics, 40: 646-649. doi:10.1038/ng.139.
Figure 2: Principal components of genetic variation in the old world, according to Cavalli-Sforza et al. (1994), as re-drawn by Novembre and Stephens (2008).
Figure 3: How the PCA patterns can arise as numerical artifacts (far left column) or through simple genetic diffusion (next column). From Novembre and Stephens (2008).