PUBLIC OPINION AND CRIMINAL JUSTICE POLICY: THEORY AND RESEARCH

Forthcoming in the Annual Review of Criminology

Justin T. Pickett*
School of Criminal Justice
University at Albany, SUNY
ABSTRACT

This article reviews evidence for the effects of public opinion on court decision-making, capital punishment policy and use, correctional expenditures, and incarceration rates. It also assesses evidence about the factors explaining changes over time in public support for punitive crime policies. Most of this evidence originates from outside of our discipline. I identify two reasons that criminologists have not made more progress toward understanding the opinion-policy relationship. One is an unfamiliarity with important theoretical and empirical developments in political science pertaining to public policy mood, parallel opinion change, majoritarian congruence, and dynamic representation. Another is our overreliance on cross-sectional studies and preoccupation with comparing support levels elicited with different questions (global versus specific) and under different conditions (uninformed versus informed). I show how the resultant findings have contributed to misunderstandings about the nature of public opinion and created a false summit in our analysis of the opinion-policy relationship.
KEYWORDS: Punitiveness, policy mood, capital punishment, sentencing, courts, incarceration
*Direct correspondence to Justin T. Pickett, School of Criminal Justice, University at Albany, SUNY, 135 Western Avenue, Albany, NY 12222; e-mail ([email protected]), phone (803-215-8954). The author thanks Frank Cullen, Sean Patrick Roche, and Andrew Thompson for helpful feedback on an earlier draft.
INTRODUCTION

Do you and I, and our neighbors and friends, collectively decide that lawbreakers should suffer more when crime increases and then get our representatives to carry out this wish? Many of us with expertise in criminal justice have doubts (Beckett & Sasson 2004; Roberts et al. 2003; Zimring & Johnson 2006), given the partial information we possess. It is hard for us to reconcile an image of the public reacting intelligibly, albeit punitively, to upsurges in offending and then leading policymakers (and enforcers) toward the desired response with the evidence we have accumulated that most people know little about crime or criminal justice (Roberts & Stalans 1997) and have “mushy” attitudes toward punitive policies (Cullen et al. 2000, p. 7). Popular punitiveness, we reason, is “therefore highly malleable” (Travis et al. 2014, p. 121), if not “fickle and vulnerable to dramatic shifts” (Drakulich & Kirk 2016, p. 173). You might sum up our position with the phrase “the myth of the punitive public” (Thielo et al. 2016, p. 143). As a consequence, answering the question accurately—the answer is “yes,” by the way—requires consulting empirical findings originating mostly from outside of our discipline (Enns 2016).

Before turning our attention to this literature, however, let us first look at why misperceptions about public opinion have become entrenched in our field and slowed our progress toward understanding the opinion-policy relationship. One reason is that we have not kept up with key theoretical and empirical developments in political science, especially those surrounding public policy mood, parallel opinion change, and the distinction between majoritarian congruence and dynamic representation. For this reason, we will spend some time exploring this literature. 
We will see that policy preferences are different from other types of attitudes in several important ways (Lax & Phillips 2012; Jennings & Wlezien 2015), and how aggregate policy preferences can respond coherently to changes in objective realities, even when
many or most people lack relevant knowledge (Page & Shapiro 1992). Not having this information, we will come to realize, has led us to dismiss trends in aggregate policy preferences, even while giving great weight to trends in other types of attitudes and cross-sectional findings. Beckett (1997, p. 80), for example, mistakenly concluded that “although support for punitive anticrime policies [like capital punishment] has grown in recent years … these shifts are fairly superficial.” We will also find out whether public officials care more about general trends in aggregate policy preferences or about whether a majority of their constituents supports a specific policy.

Another reason is that we have relied too heavily—indeed, almost exclusively—on cross-sectional studies examining individual differences in punitive attitudes. I say this even though every public opinion study I have published has been cross-sectional. To be sure, such research is important. But it also has led us astray (or, more accurately, let us wander off) in several ways. One is that we have frequently overgeneralized cross-sectional findings to draw conclusions about longitudinal relationships between macro-level phenomena (see, e.g., Roberts et al. 2003, pp. 63-64). Consider this sweeping conclusion from a recent cross-sectional study examining whether people with greater exposure to crime (e.g., live in high-crime counties, know crime victims) are more likely to support punitive policies:

Because our individual-level results indicate that punitiveness is not a response to crime, this suggests that at the societal level, the rise of punitive criminal justice policies in the 1980s and 1990s likewise was not likely to have been due to a popular response to higher or rising crime rates, increased fear of crime, or greater perceived risk of victimization, even if those factors did increase. Therefore, an alternative explanation for this development is needed (Kleck & Jackson 2017, p. 1591). 
Conveniently, our cross-sectional research tradition has also allowed us to overlook the forest (shared trends) for the trees (contemporaneous differences in levels). As Cullen et al. (2000, pp. 14-15) observed, our “underlying intellectual and ideological project” has been to delegitimize punitive attitudes, and so we have “scrutinized public opinion research in hopes of
discrediting it.” The problem is that we have become preoccupied with showing that support for punitive policies is lower when respondents are asked specific versus global questions and have more versus less information. Roberts & Hough (2005, p. 24) called this “the best documented—and most important—lesson from public opinion research.” But it also turns out to be almost entirely irrelevant to the opinion-policy relationship. Why is this? Let us see.
TREES IN THE FOREST: DISPARITIES IN LEVELS OF POLICY SUPPORT
Global Versus Specific

When asked general (or global) survey questions, most people will say they support punitive policies (Cullen et al. 2000), including habitual-offender laws, truth-in-sentencing statutes, longer sentences, “supermax” confinement, and offender registries (Craun et al. 2011; Mears et al. 2013; Ramirez 2013a). For example, since the mid-1970s, between 55% and 80% of respondents have answered affirmatively to the Gallup question: “Are you in favor of the death penalty for a person convicted of murder?” But detailed (or specific) questions that offer different punishment options and/or focus on specific offenders and provide case information yield lower support levels (Roberts & Stalans 1997; Cullen et al. 2000). In one study, 76% of respondents supported capital punishment, but only 40% preferred it when life without parole (LWOP) was an option (Sandys & McGarrell 1995). Levels of specific support for other policies like juvenile transfer also vary depending on question wording (Steinberg & Piquero 2010). Applegate et al. (1996) found high global support (88%) for a three-strikes law, but low specific support (17%) when respondents answered vignette questions describing eligible offenders. In fact, when survey questions include relevant
case information, respondents tend to prefer sentences similar to (and sometimes more lenient than) those actually given out by judges and recommended in sentencing guidelines (Miller & Applegate 2015; Roberts & Stalans 1997; Rossi & Berk 1997). Alternative question wordings produce different levels of support because they prompt respondents to think about different things when answering: if a sanction should be available for the worst offenders (violent recidivists), is it the best (or only) option, would it be fair in a given case (Cullen et al. 2000; Roberts & Stalans 1997). These findings are important. When sentences in specific cases are too lenient or harsh relative to public opinion, it can undermine the legitimacy of the law (Robinson 2013). Unfortunately, the findings also became a false summit in our analysis of the opinion-policy relationship. Question-wording effects yielded evidence consistent with our assumption that public punitiveness is a myth, supplying a conclusion aligned with our “professional or disciplinary ideology” (Cullen et al. 2000, p. 14)—namely, that “politicians’ claims that the public ‘wanted’ tougher crime policies were disingenuously based on misleading poll results” (Tonry 1995, p. 34). Finding comfort in the cross-sectional mushiness of support levels (the trees), we missed the robustness of attitudinal trends to question wording (the forest). Aggregate responses to global and specific questions exhibit parallel movements over time (Enns 2016; Ramirez 2013a, 2013b). Adding the LWOP option to death penalty questions, for example, “consistently produces lower responses … [but] the dynamics are strongly similar” (Baumgartner et al. 2008, p. 174). We will see why later. For now, let us focus on the implication: global questions are neither misleading nor “overly simplistic” (Gottschalk 2006, p. 27) when judging trends. We
have long known that “policymakers [and enforcers] are interested in trends” (Hough & Roberts 2005, p. 19), but we did not put two and two together. Instead, we dismissed the trends:

For the past 25 years, the percentage of the public supporting the execution of offenders has been quite stable: in 1976, 66% of the public expressed support for capital punishment, and when this question was posed in 2000, exactly the same percentage expressed this view, with little variation over the intervening years (Roberts et al. 2003, p. 28).

It does appear, however, that support for punitive anticrime policies has grown in recent years … It is important to note, however, that these shifts are fairly superficial. For example, support for capital punishment diminishes considerably when people are given the option of life sentences without the possibility of parole (Beckett 1997, p. 80).

Roberts et al. (2003) are wrong, unless you count a 14-percentage-point increase (66% to 80%, 1976-1994) and then decline (back to 66% in 2000) in support as “little variation” (the reference is to the global Gallup question). Beckett (1997) turns out to be both right and wrong. (The punitive shift was far from superficial.) During most of the period, the public “grew more supportive of the death penalty” and “this would be reflected in almost any question asked repeatedly in any national opinion poll” (Baumgartner et al. 2008, p. 176). As we shall see, if public officials are dynamically responsive, they would follow these types of opinion movements irrespective of the actual level of support for specific policies. The global-specific disparity would be irrelevant.
Uninformed Versus Informed

Most people know little about the extent or nature of crime, legal statutes or procedures, sentencing reforms, or the administration of punishment (Pickett et al. 2015; Roberts & Stalans 1997), which is the type of information that we possess and think should shape public views about punitive policies (Cullen et al. 2000). For example, the average American believes crime is always increasing (Ramirez 2013a), thinks over half of all offenses are violent (Chiricos et al.
2004), overestimates the risk of being victimized (sometimes by more than 1000%) (Quillian & Pager 2010), and underestimates imprisonment rates for convicted criminals (Kleck et al. 2005). In numerous studies, we have shown that providing people with accurate information about crime rates and punishment—through brief manipulations (fact sheets or videos), weekend seminars (deliberative polls), or semester-long college classes—lowers their support for punitive policies (Bohm & Vogel 2004; Hough & Park 2002; Indermaur et al. 2012; Roberts et al. 2012).1 Unsurprisingly, other experiments have shown that giving people different kinds of information—such as victim impact evidence or racial disproportionality statistics—can have the opposite effect, increasing punitiveness (Hetey & Eberhardt 2014; Kennedy-Kollar & Mandery 2010; Paternoster & Deise 2011; Peffley & Hurwitz 2007). That is beside the point. Information treatments create cross-sectional differences in punitiveness between treated and untreated respondents that disappear later (Bohm & Vogel 2004; Indermaur et al. 2012)—their effects are “a temporal glitch created by social scientists” (de Keijser 2014, p. 108). But say we found an information treatment that permanently knocked punitiveness down a notch, and we could treat everyone: how much impact would it have on policy? That depends on whether public officials focus on trends or levels of support. A new “informed” baseline level does not a trend make. And there is evidence that people with different amounts of existing knowledge respond similarly to the broader social processes that underlie trends in other types of policy preferences (Enns & Kellstedt 2008; Erikson et al. 2002a; Soroka & Wlezien 2010). If trends in support for punitive policies are also similar across information (or sophistication) strata, and public officials focus on trends, the effect of our grand information treatment on policy would be trivial. 
Our new “informed” public would exhibit the same attitudinal trends as the old
1. In these studies, as Cullen et al. (2000, p. 15) explained, we were trying to create a “self-portrait of [ourselves]: if the public were like us, they would not support executing offenders!”
uninformed one. This is the forest we missed while focusing on cross-sectional information effects. (Informed versus uninformed levels are just trees.) So, do public officials care more about trends in public opinion or levels? Are trends in punitiveness similar across groups? And how can a public composed mostly of uninformed people respond intelligibly to anything? We will have to look outside of our discipline to answer these questions. Let us start with the last one, and then work back to the first.
INDIVIDUAL AND COLLECTIVE OPINIONS: KEY THEORETICAL IDEAS
The Nature of Individual Attitude Reports

The bleak picture of public opinion held by many criminologists today mirrors one that prevailed in other fields before being discredited at the end of the last century (Page & Shapiro 1992). Similar to criminologists, political scientists were startled to find abysmally low levels of public knowledge about politics—for example, less than half of respondents could identify both senators from their state, or knew that members of the U.S. House of Representatives serve two-year terms (Althaus 2003; Glynn et al. 2004). Respondents’ answers in surveys were often inconsistent or even contradictory, and were highly sensitive to question wording, interview context, and counterarguments (Zaller & Feldman 1992). Many respondents happily answered nonsense questions about fictional issues and policies (Bishop et al. 1986). A large number of respondents, sometimes more than half, changed their answers when asked identical questions at different times (Converse 1964, 1970; Zaller & Feldman 1992). Nevertheless, evidence accumulated showing uninformed individuals were often still able to formulate policy opinions consistent with their interests by using information shortcuts (Lupia
& Matsusaka 2004). Another important finding was that despite great instability in individual attitude reports over time, most survey marginals changed very little, which motivated efforts to explain how gross change occurs with net stability (Converse 2000). Several theoretical models of survey response emerged, each with different assumptions about the prevalence of real attitudes in the population, but all positing that instability in individual attitude reports reflects probabilistic response processes and amounts to random noise. Three survey response models that have been especially influential in advancing understanding of public opinion are Page and Shapiro’s (1992) measurement error model, Converse’s (1964, 1970) “black-and-white” model, and Zaller’s (1992) “Receive-Accept-Sample” model. The first assumes that everyone has true underlying attitudes, the second that only some people do, and the third that almost none do. Page & Shapiro (1992) assume all individuals have real attitudes, defined as their true long-term opinions. Because of measurement error, however, survey respondents provide responses that fluctuate randomly around their central tendency of opinion, with the amount of error variance being an inverse function of their current information level. By contrast, Converse (1964, 1970), whose model is sometimes characterized as an “errors in respondents” model, assumes there are two distinct groups of respondents. The first group has real and stable opinions. Answers from this group constitute the signal in survey results. The noise comes from the second and larger group who lacks true opinions and when interviewed expresses “nonattitudes” or “doorstep opinions.” Respondents in this group have wide “latitudes of acceptance” covering the entire response continuum and answer questions randomly (Converse 2000). What differentiates the two groups is their existing knowledge levels. 
In both of the above models, then, the most informed respondents should provide the most reliable answers. Page and
Shapiro assume the distribution of noise across response options is approximately normal, whereas Converse assumes the distribution is uniform (Althaus 2003). In Zaller’s (1992) “Receive-Accept-Sample” model, virtually all respondents lack preformed attitudes, instead having in their heads a mix of real but often conflicting considerations, the number of which is determined by their topic-specific knowledge, interest and attentiveness. Survey respondents conduct an incomplete but probabilistic memory search when interviewed, drawing a biased but stochastic sample of considerations, which they average across (when N > 1) to form their answers (Zaller & Feldman 1992). Question wording and ordering, as well as the broader interview situation, can increase the accessibility of certain ideas, systematically biasing the sample of considerations. Conditional on the survey particulars and context, however, which narrow the sampling frame of considerations to a salient subset, the sampled considerations should be representative (a random sample) of the salient considerations in respondents’ heads (Zaller & Feldman 1992). Similar to Page & Shapiro (1992) and Converse (1964, 1970), then, Zaller (1992) and Zaller & Feldman (1992) posit that response instability will have a specific structure. Holding the survey particulars constant (e.g., global versus specific question wording), respondents’ attitude statements should fluctuate randomly around the stable long-term central tendency of relevant considerations they carry around in their heads. Because the population and sample size of considerations should both increase with topical information and interest, there should be less fluctuation over time in attitude reports for knowledgeable and engaged respondents (Zaller & Feldman 1992).
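The sampling logic these models share can be made concrete with a toy simulation. The sketch below is only an illustration of the Receive-Accept-Sample idea under stated assumptions, not a reproduction of Zaller's formal model: the consideration values, sample sizes, and number of interview waves are all hypothetical, chosen simply to make the mechanism visible.

```python
import random
import statistics

def ras_answer(considerations, n_sampled):
    """One survey answer under a simplified Receive-Accept-Sample process:
    average over a random sample of the considerations held in memory."""
    return statistics.mean(random.sample(considerations, n_sampled))

def answer_instability(considerations, n_sampled, waves=300):
    """Std. dev. of one respondent's answers across repeated interviews."""
    answers = [ras_answer(considerations, n_sampled) for _ in range(waves)]
    return statistics.stdev(answers)

random.seed(1)
# A hypothetical mix of conflicting considerations (negative = lenient,
# positive = punitive); their mean (0 here) is the stable central tendency.
mix = [-1.5, -0.5, 0.5, 1.5]
# An engaged respondent holds the same mix of considerations, just more of
# them, and samples more per interview, so answers fluctuate less.
inattentive = answer_instability(mix, n_sampled=2)
engaged = answer_instability(mix * 10, n_sampled=10)
assert engaged < inattentive
```

Both respondents' answers bounce around the same long-term central tendency; only the amount of bounce differs, which is exactly the structure of response instability the three models predict.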
The Power of Aggregation: Stability by Default According to each of the three models reviewed above, response instability occurs because some specific probabilistic response process underlies individual attitude reports, causing random reporting errors, which are largest among (or limited to) the inattentive and uninformed. This matters a great deal for understanding collective public opinion. Regardless of the source of random errors in reporting, those errors will balance out when aggregated as long as they are independent, giving aggregate survey responses emergent properties that are absent from individual attitude reports (Page & Shapiro 1992). The statistical aggregation process, by accentuating “the orderly over the disorderly, the signal over the noise” (Stimson 2004, p. 19), produces meaningful aggregate survey responses that are stable by default, although not immovable, despite low levels of attentiveness and knowledge among many or most respondents (Converse 1990; Page & Shapiro 1992). Indeed, even if there is no signal, and all respondents answer randomly (imagine the repeated outcomes of coin-flip responses to a favor/oppose question), which is unlikely excepting nonsense questions about fictional policies, aggregate responses to identical survey questions will remain stable over time on expectation (Althaus 2003; Glynn et al. 2004). Aggregate survey responses will also remain stable over time when a signal exists, large or small, but remains unchanged. In this situation, an inert signal implies stability in the mix of considerations and information environment underlying aggregate attitudes (Converse 1990, 2000). By extension, when there is a signal, and the attitude object is a dynamic phenomenon, collective judgments about the object should vary considerably over time (Althaus 2003), which is exactly the pattern observed in aggregate responses to survey questions about such issues as the nation’s “most important problem” (MIP) (Wlezien 2005). 
By contrast, the defining feature
of policy preferences is that they constitute summary judgments balancing many different considerations as well as competing values (Stimson 1999, 2004). As a result, even with a large signal, aggregate policy preferences should have a strong tendency toward inertia (Page & Shapiro 1992). This too is borne out by the data (Glynn et al. 2004). In their seminal book, The Rational Public, Page & Shapiro (1992) demonstrated that aggregate policy preferences are remarkably stable. They analyzed trends in aggregate responses to 1,128 policy questions repeated over long time intervals, and found that most failed to show any significant change, especially those pertaining to domestic affairs rather than international relations.
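A minimal simulation conveys why aggregate marginals are stable by default even when every individual answer is pure noise. The sample sizes and favor probabilities below are illustrative assumptions, not figures from any study:

```python
import random

def survey_wave(n_respondents, p_favor):
    """Aggregate marginal for one wave: percent answering 'favor'."""
    favor = sum(random.random() < p_favor for _ in range(n_respondents))
    return 100 * favor / n_respondents

random.seed(2)
# Pure noise: every respondent flips a coin, yet the aggregate is stable,
# hovering near 50% from wave to wave.
noise_waves = [survey_wave(1500, 0.5) for _ in range(20)]
assert max(noise_waves) - min(noise_waves) < 8

# Signal plus noise: an unchanged 60/40 signal produces equally stable
# marginals at a different level.
waves = [survey_wave(1500, 0.6) for _ in range(20)]
assert all(55 < w < 65 for w in waves)
```

The individual answers are maximally unstable in both runs; stability is an emergent property of summing over independent errors, which is the aggregation point Page & Shapiro and Stimson make.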
Aggregate Opinion Change: Coherent Responsiveness An important implication of the default stability in aggregate attitudes is that when public opinion does change, there is a reason (Stimson 2004). Supporting this notion, Page and Shapiro (1992) found that in the minority of instances where aggregate policy preferences did change, those changes were intelligible. In the case of policy preferences about domestic affairs, including criminal justice, most of the changes were also gradual, constituting coherent responses to slowly changing social and economic conditions in the United States. Just as important, Page & Shapiro (1992) found strong evidence of parallel opinion change. Their analysis of more than 3,000 subgroup time series—trends in policy preferences by sex, race, education, occupation, income, religion, age, partisanship, region, and community type— revealed that the trend lines for these groups were almost always similar, with significant differences emerging in only 5.7% of cases (out of 24,284 pairs of opinion shifts). Other studies have also found strong evidence of parallel opinion change across groups (Enns & Kellstedt 2008; Erikson et al. 2002a; Soroka & Wlezien 2010).
There are at least two pieces to the theoretical puzzle of how a public composed of grossly uninformed individuals can collectively respond in an orderly and sensible manner to new information and events, exhibiting what Converse (2000: 349) characterized as “coherent responsiveness.” The first is the process of information pooling, as elucidated by the Condorcet Jury Theorem (CJT) and evidenced in research on “collective wisdom” (Landemore & Elster 2012; Miller 1986; Murr 2011). Applying the law of large numbers from probability theory, the CJT shows that collective decisions will be more accurate than individual choices, as long as group members have better than a random chance of choosing correctly themselves. Extended to public opinion, information pooling should result in aggregate attitudes, both static and dynamic, that are more perceptive, thoughtful, and informed than those of individuals (Page & Shapiro 1992, but see Althaus 2004). The other puzzle piece has to do with the role (or lack thereof) of noise in public opinion change. Although noise influences static measures of aggregate attitudes, because random errors in individual attitude reports only balance rather than cancel out (Althaus 2003), it is essentially flat over time, contributing little to systematic opinion movements (Erikson et al. 2002a; Stimson 2004). Net public opinion change occurs either when the population composition changes or some members of the public, even just a tiny minority, update their attitudes simultaneously and systematically (Converse 2000). The second process (mutual updating) accounts for most opinion movements, especially shifts in attitudes about criminal justice policies (Anderson et al. 2017; Enns 2016). 
It happens when attentive individuals—regardless of their initial (or baseline) knowledge level (Enns & Kellstedt 2008)—receive the same new information or have similar experiences, personal or vicarious, and as a consequence change their attitudes in the same direction (Erikson et al. 2002a; Glynn et al. 2004).
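The information-pooling piece of the puzzle can be stated exactly with the binomial distribution. The sketch below uses an arbitrary individual accuracy of 0.55 (any value above chance would do) to show collective accuracy under majority rule climbing with group size, which is the CJT result described above:

```python
import math

def majority_correct_prob(p, n):
    """Condorcet Jury Theorem: probability that a majority of n independent
    voters, each correct with probability p, chooses the right answer
    (n is assumed odd, so ties cannot occur)."""
    k_needed = n // 2 + 1  # smallest majority
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_needed, n + 1))

# Individuals only slightly better than chance...
p = 0.55
assert abs(majority_correct_prob(p, 1) - 0.55) < 1e-12
# ...but collective accuracy climbs toward certainty as the group grows.
assert majority_correct_prob(p, 101) > 0.8
assert majority_correct_prob(p, 1001) > 0.99
```

This is the sense in which a public of modestly informed individuals can, in aggregate, behave far more "informed" than any one of its members.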
What explains parallel opinion change? The answer is that, at least historically, the mass media in the United States have helped to expose individuals from different backgrounds and with different knowledge levels to the same important social and economic events and trends, including crime trends (Enns 2016; Miller 2016), via changes in news coverage of these issues (Page & Shapiro 1992). This process results in attitude trends that are typically similar across information (or sophistication) strata, as well as demographic groups (Enns & Kellstedt 2008; Erikson et al. 2002a; Soroka & Wlezien 2010).
A Second Level of Aggregation: Public Policy Mood
The Concept of Policy Mood. As we have seen, aggregate policy preferences, as measured in surveys, vary tremendously depending on how questions are asked and their level of specificity (general versus detailed) (Cullen et al. 2000). But such question wording effects are mostly irrelevant when studying trends with identically worded questions (Page & Shapiro 1992). As a result, analyzing changes in aggregate responses to single survey questions (even global ones) asking about individual policies like capital punishment can provide useful information about movements over time in collective public opinion toward those policies. However, one limitation of analyses focusing on particularistic measures of policy preferences is that they neglect issue bundling (Stimson 1999, 2004). Aggregate policy preferences on diverse issues (and measured with differently worded questions) frequently move together in similar ways over time, suggesting that they share a common cause (Erikson et al. 2002a). Stimson (1999) theorized that the common cause is “public policy mood,” a latent, macro-level, and multi-dimensional concept comprising the public’s generalized dispositions
toward certain kinds of policies, such as those expanding the size of government or increasing the harshness of formal social controls.

Measuring Policy Mood: The Dyad Ratio Algorithm. If a general public mood exists, then aggregate responses to individual survey questions asking about particular policies are imperfect indicators of this latent attitude structure (Stimson 1999). Measuring public policy mood thus requires aggregating survey responses twice—first over respondents, and then over questions and issues (Stimson 1999, 2004). The challenge in combining longitudinal survey data for different questions is overcoming two hurdles: 1) survey marginals for different questions are not directly comparable, and 2) many marginals are missing (often more than 60%), because most questions are asked infrequently and some are asked only in certain time periods (Stimson 1999, 2017). Most commonly, researchers use Stimson’s (1999) dyad ratio algorithm (DRA) to address the above issues and measure mood, although other methods exist, such as McGann’s (2014) item response theory model. Similar to a dynamic factor analysis, the DRA extracts common dimensions in survey marginals for different questions and issues over time. First, the DRA converts survey marginals (i.e., percent favoring a policy, excluding neutral and don’t know responses) for all questions to a common metric, dyad ratios, which are within-item ratios of responses for pairs of years. Ratio transformation assumes only that survey marginals are comparable within an item series over time. Next, the DRA uses forward and backward recursion to impute missing ratios, averaging the two estimates. The estimates are then smoothed (exponentially weighted moving average) to control for differences in sampling variability across surveys, weighted by item validity (correlation with the latent mood dimension, estimated iteratively), and averaged (Stimson 2017). Multiple dimensions of mood can be extracted by
residualizing item series on the dimension(s) extracted previously and repeating the process (Stimson 1999).

Dimensions of Policy Mood: Liberalism and Punitiveness. Stimson (2004) applied his algorithm to 181 item series comprising 2,144 aggregate responses to all survey questions about domestic policies posed repeatedly since 1952 to nationally representative samples by reputable pollsters (e.g., Gallup, Harris, National Opinion Research Center). He found two underlying latent dimensions of policy mood. The first dimension, explaining 70% of the systematic variation over time in aggregate policy preferences, was public liberalism, consisting of generalized views about whether the government should do more or less on New Deal-related issues (e.g., health, education, welfare, racial equality). The second dimension, explaining the remaining 30% of systematic variation, primarily reflected views about crime and criminals. According to Stimson (2004, p. 80), this second dimension tapped global preferences for “hardline versus soft-line” approaches to controlling social deviance. Erikson and colleagues (2002a, p. 208) described the second dimension as measuring “social compassion” (or a lack thereof) mainly toward criminals.

Punitive Mood. Building on Stimson’s findings (1999, 2004), several recent studies have developed more refined measures of the public’s generalized attitudes toward criminals, or punitive mood, by focusing only on aggregate responses to survey questions asking specifically about criminal justice issues, and then applying the DRA to estimate the common over-time variance. Ramirez (2013b) used the algorithm to analyze 242 aggregate responses to 24 survey questions (global and specific) asked between 1951 and 2006 about diverse criminal justice policies (e.g., the death penalty, purpose of prisons, rehabilitation, three-strikes laws, crime spending). He showed that although the levels of aggregate support for these different policies
differed greatly, they exhibited parallelism in movement over time, all reflecting a single underlying latent construct. Enns (2014, 2016) applied the DRA to 381 aggregate responses to 33 questions about criminal justice asked between 1953 and 2012. Unlike Ramirez (2013b), Enns also included questions asking about confidence and trust in police, and again found that aggregate responses to the different questions moved together over time, all driven by the same latent attitude structure—punitive mood. Although Ramirez (2013b) and Enns (2014, 2016) both focused on the United States, Jennings et al. (2017) reported the same basic finding in Britain. They used the DRA to analyze aggregate responses from 1938 to 2013 to 2,007 survey items asking either about crime policies or confidence in legal authorities and found that most of the variance in these responses was explained by a single underlying dimension.
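Stimson's actual estimator is considerably more elaborate than can be shown here. The following is only a bare sketch of the dyad-ratio idea described above, using made-up marginals, and it omits the recursion over all dyads, the exponential smoothing, and the validity weighting; it shows only how within-item ratios let items with very different levels contribute to a common index:

```python
import math

def dyad_ratio_index(series_by_item, years):
    """Simplified dyad-ratio sketch: within each item, year-over-year ratios
    of marginals are comparable even when levels are not comparable across
    items; averaging those ratios across items yields a common index with
    the first year set to 100."""
    index = [100.0]
    for prev, curr in zip(years, years[1:]):
        # Within-item ratios for items observed in both years of the dyad
        ratios = [s[curr] / s[prev] for s in series_by_item
                  if prev in s and curr in s]
        # Geometric mean of the available ratios for this dyad
        step = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
        index.append(index[-1] * step)
    return dict(zip(years, index))

# Two hypothetical item series with very different levels (think global vs.
# specific death-penalty support) but parallel movement; one year is missing
# from the second item, as is typical of real survey marginals.
years = [1990, 1991, 1992, 1993]
item_a = {1990: 70, 1991: 77, 1992: 77, 1993: 63}
item_b = {1990: 40, 1991: 44, 1993: 36}
mood = dyad_ratio_index([item_a, item_b], years)
assert mood[1990] == 100.0
assert mood[1991] > mood[1990]  # both items moved up together
assert mood[1993] < mood[1992]  # and down together
```

Note that the two items never contribute their incomparable levels (70% versus 40%), only their shared movement, which is why the resulting index captures the parallelism Ramirez, Enns, and Jennings et al. report.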
RESEARCH ON TRENDS IN PUBLIC OPINION ABOUT CRIMINAL JUSTICE
Beckett’s (1997) Making Crime Pay In our field, Beckett’s (1997) analysis of crime salience has been by far the most influential study of changes over time in public opinion. Her findings are also the primary source of the myth that public opinion about criminal justice is volatile (e.g., Drakulich & Kirk 2016; Gottschalk 2006; Travis et al. 2014). Thus, her analysis and findings warrant special attention before proceeding to a discussion of additional evidence. Beckett analyzed shifts in the public salience of crime to test the “democracy-at-work” thesis against the social constructionist account of criminal justice policymaking. According to the democracy-at-work thesis, increases in crime rates caused the punitive turn in criminal justice by elevating public fear of crime and punitiveness. By contrast, the social constructionist account
holds that media and political elites manipulated public opinion for profit and political ends, with little regard for actual trends in crime rates or drug usage. To measure public opinion, Beckett (1997) used responses to a survey question about the nation’s MIP (most important problem). Specifically, she operationalized public concern about crime and drug usage, respectively, as the percentage of respondents in nationally representative surveys conducted by two polling firms whose answers to the MIP question pertained to crime or drug use. She focused on two time periods, analyzing public concern about crime from 1964 to 1974, and public concern about drug use from 1985 to 1992. Her models tested whether violent crime rates or drug usage rates correlated with subsequent levels of public concern about these issues. She also analyzed whether news coverage of these issues or relevant political initiative (e.g., speeches, statements) by federal officials correlated with subsequent levels of public concern. What Beckett found was startling, and strongly supportive of the social constructionist account. First, public concern about crime and drugs “fluctuated quickly and dramatically” (p. 16). These striking shifts in popular attitudes appeared to lack any relationship with actual rates of crime or drug use, which moved slowly and gradually over time. Rather, what explained shifts in public concern about crime was media coverage and political initiative in the preceding months. Political initiative on the drug issue, but not media coverage, was also strongly associated with subsequent levels of public concern about drug use. The central problem with using Beckett’s (1997) findings to draw conclusions about the nature or predictors of public opinion on criminal justice is that she used MIP responses rather than policy preferences (Enns 2014, 2016). 
Other scholars have also examined MIP responses to gauge public opinion about the importance of the crime problem (Gottschalk 2006; Miller 2016). However, MIP responses “tell us little, if anything, about the importance of issues” to the public
(Wlezien 1995, p. 575) and they “do not necessarily tell us anything, one way or the other, about the stability or instability of policy preferences” (Page & Shapiro 1992, p. 40). They tell us even less about the correlates or effects of policy preferences (Jennings & Wlezien 2015). One shortcoming of MIP responses is that they are relative. As Beckett (1997) explained in a footnote: The sharpness of the fluctuations in levels of concern about crime and drugs reflects, in part, the use of this measure: the percentage of poll respondents who identify a given problem as the nation’s most important may shift suddenly as other issues assume prominence in political debate (p. 116). Indeed, this is why scholars have long argued that the MIP question “seems to ask, in effect, ‘what have newspapers and radio and television treated as most important during the last few weeks?’” (Page & Shapiro 1992, p. 423). But it is not simply that MIP responses about crime or drugs are media-based and relative, depending on coverage and concerns about other issues. The question also conflates the importance (or policy relevance) of an issue with its problem status, which explains the dramatic shifts in MIP responses over time (Jennings & Wlezien 2015). MIP responses primarily measure the changing problem status of issues that are always important but only sometimes problematic, like national security and the economy. In Wlezien’s (2005, p. 560) words, “give us peace and we need to look elsewhere for our MIP. Add in prosperity and we have to look further still.” He showed that changes in national security and the economy could explain most (68%) of the variation in non-economic, non-foreign MIP responses, such as those about crime and drugs. Perhaps most importantly, the MIP question does not measure punitiveness, and aggregate responses to the question are only weakly related to punitive mood (Ramirez 2013b). 
Believing that crime is the MIP facing the country is not synonymous with wanting policymakers to address the crime problem with harsher policies. Furthermore, public punitiveness can (and
does) vary over time independently of MIP responses, in part because punitive attitudes do not hinge on the problem status of other issues. This is important because Beckett’s (1997) analysis of MIP responses revealed “no evidence that political elites’ initial involvement in the wars on crime and drugs was a response to popular sentiments” (p. 25). Beckett & Sasson (2004) came to a similar conclusion. Other studies have found no relationship between changes in the percentage of Americans perceiving crime/drugs as the MIP and various measures of federal criminal justice policy, such as new commitments to federal penitentiaries, seemingly providing further evidence against the democracy-at-work thesis (Nicholson-Crotty & Meier 2003; Schneider 2006). However, to understand the relationship between public opinion and criminal justice policy we have to move beyond MIP responses. Jennings & Wlezien (2015) have shown that research with MIP responses substantially underestimates the responsiveness of politicians and policy to public opinion.
Trends in Punitive Policy Preferences Three primary findings have emerged from research analyzing trends in punitive policy preferences, all consistent with the view of collective public opinion as meaningful and coherent. First, the movements observed in criminal justice opinions in the United States have been gradual, with important directional changes occurring during the 1960s and 1990s (Page & Shapiro 1992; Ramirez 2013b). For example, death penalty support, as measured by Gallup, fell over several years to a low of 42% in 1966, then gradually rose to a high of 80% in 1994, before declining slowly to 55% in 2017. The public’s generalized punitiveness has followed the same basic trend. Although they estimated punitive mood using different item series (see above), Ramirez (2013b) and Enns (2014, 2016) both found that it increased gradually from a low in the
mid-1960s to a high in the mid-1990s, and then declined steadily thereafter. The punitive mood in Britain also changed gradually, increasing from the mid-1980s to a peak in 2005, before beginning a steady decline to the present-day level (Jennings et al. 2017). Second, trends in criminal justice opinions generally have been parallel for different population groups (but see Shirley & Gelman 2015). Anderson et al. (2017) found similar trends in death penalty support across age groups and birth cohorts. Enns (2016) and Ramirez (2013b) both documented that punitive mood generally moved in tandem across demographic (e.g., blacks, whites, men, women, old, young), educational, partisan, and regional groups. Third, there has been only one consistent predictor of changes in aggregate support for punitive policies: crime rates. Most previous reviews by criminologists came to the opposite conclusion: “there is a consensus [in our field] that trends in public punitiveness simply do not correspond clearly with rises in crime rates” (Roberts et al. 2003, p. 61). Cullen et al. (2000, p. 5) made the same point. They are wrong. Early studies reported a very strong bivariate correlation (as large as .80) between trends in crime and death penalty support (Page & Shapiro 1992, p. 338). Mayer (1992), for instance, found that the lagged (by five years) homicide rate explained 78% of the variance in death penalty support between 1953 and 1988. More recently, Miller (2016, pp. 109-10) showed that from the early 1950s to 2010, there was a strong correlation between the homicide rate (one-year lag) and death penalty support (r = .60). Studies employing rigorous research designs have confirmed the robustness of the crime-death penalty support relationship. Using multivariate time series models to control for alternative explanations, Jacobs & Kent (2007) found a significant positive effect of homicide rates on death penalty support between 1951 and 1998. Baumgartner et al. 
(2008) used an error correction model to analyze quarterly changes in death penalty support, measured by applying
the DRA to aggregate responses from 292 surveys asking about capital punishment between 1976 and 2006. Like previous studies, they found a significant positive effect of the homicide rate, and concluded that “had homicides not been on the decline from 1993 onward, we would expect net support in 2006 to be nine points higher” (p. 195). Anderson et al. (2017) estimated age-period-cohort models of death penalty support between 1974 and 2014, documented small cohort effects but large period effects, and found that the only significant period-level predictor was violent crime. Similar to the evidence for death penalty support, studies analyzing changes over time in the public’s general punitiveness have found a significant positive effect of crime rates. Nicholson-Crotty et al. (2009) showed that the index crime rate Granger-caused the second dimension of public mood (punitiveness), when measured identically to Stimson (2004). Analyzing more refined measures of punitive mood, Enns (2016) found that the crime rate Granger-caused public punitiveness, and Ramirez (2013b) found that both the homicide rate and drug usage rate predicted public punitiveness. Examining public opinion in Britain, Jennings et al. (2017) documented a significant and positive effect of the crime rate on punitive mood that was mediated in part by the public’s fear of crime. The research to date has found that the crime rate’s effects on both death penalty support and punitive mood are delayed rather than immediate, taking a year or more to fully materialize (Baumgartner et al. 2008; Ramirez 2013b). Besides crime rates, studies have yielded mixed evidence about other potential predictors of changes in public punitiveness, such as racial integration, economic inequality, and exonerations (Anderson et al. 2017; Jacobs & Kent 2007; Ramirez 2013b). 
For example, Nicholson-Crotty et al. (2009) found that the number of presidential statements on crime failed to significantly predict subsequent levels of public punitiveness. Yet, when focusing
instead on the tone of presidential statements, Ramirez (2013b) found that pro-punitive statements had a significant positive effect on public punitiveness. Enns (2016) used the same measure of presidential tone as Ramirez (2013b), but found the effect was only marginally significant. Rather, he showed that public punitiveness Granger-causes future changes in the number of pro-punitive presidential statements and congressional hearings on crime, suggesting the public mostly leads politicians, instead of the other way around. His qualitative analysis of how political campaigns and politicians have used opinion polls to formulate their criminal justice positions strongly supported this interpretation. Similarly, findings about whether changes in media coverage explain movements in public punitiveness have varied depending on the measures analyzed. Ramirez (2013b) found no evidence that media coverage of black crime explained changes in punitive mood. Focusing instead on pro-death penalty media coverage, Baumgartner et al. (2008) found a significant positive effect on death penalty support, estimating that support increases by about 1.5 percentage points for every ten additional news stories. Enns (2016) found that news stories about crime Granger-caused public punitiveness. He also showed that despite persistent biases in how the media cover crime, the number of news stories devoted to crime changes with (follows) the actual crime rate. Like Enns (2016), Miller (2016, p. 112) also reported a strong correlation (r = .63) between state homicide rates and the number of newspaper stories about “crime” or “criminal justice.” We will come back to this point later. In summary, there is strong evidence that although most members of the public are uninformed about crime and criminal justice, aggregate support for punitive policies moves gradually over time in response to changes in the crime rate. The weight of the evidence also
suggests that this relationship is mediated by changes in the number of news stories devoted to crime and the public’s fear of crime.
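The Granger tests that several of these studies rely on ask whether past values of one series improve the prediction of another beyond that series’ own history. A minimal sketch (Python, simulated data; all coefficients are hypothetical) implements the test by hand as an F-test comparing restricted and unrestricted autoregressions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual series: an AR(1) "crime rate" and a "punitive mood"
# series that tracks crime with a one-year lag, mimicking the delayed
# crime -> punitiveness effect reported in the literature.
n = 60
crime = np.zeros(n)
for t in range(1, n):
    crime[t] = 0.8 * crime[t - 1] + rng.normal()
mood = np.zeros(n)
for t in range(1, n):
    mood[t] = 0.3 * mood[t - 1] + 0.5 * crime[t - 1] + rng.normal(0, 0.5)

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

# Granger test at lag 1: does adding lagged crime improve the prediction
# of mood beyond mood's own lag?
y = mood[1:]
ones = np.ones(n - 1)
X_restricted = np.column_stack([ones, mood[:-1]])
X_full = np.column_stack([ones, mood[:-1], crime[:-1]])

f_stat = (rss(y, X_restricted) - rss(y, X_full)) / (rss(y, X_full) / (len(y) - 3))
print(f_stat)  # an F(1, 56) value above ~4.0 rejects non-causality at p < .05
```

In the published studies the same comparison is run in both directions (does crime predict mood, does mood predict crime?), which is how Enns (2016) establishes that the public tends to lead politicians rather than follow them.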
DEMOCRATIC REPRESENTATION: VARIETIES AND MECHANISMS
In evaluating the political effects of public opinion, we encounter three basic questions. First, does public opinion matter to public officials? Indisputably, the answer is yes (Shapiro 2011). Public officials in the United States care immensely about collective public opinion, and did so long before its reification in scientific surveys (Geer 1996). Public opinion is “the most important factor in American politics … it is the drive wheel” (Stimson 2004, p. xvi). Second, how do public officials learn about public opinion? The answer to this question has changed over time. Blumer (1948) famously critiqued polls for measuring a type of public opinion that escaped the attention of public officials, who instead used personal information sources to understand their constituents’ concerns. Since the middle of the last century, however, when scientific surveys emerged as the most legitimate and popularly accepted source of social and political knowledge about the public, public officials have relied heavily on polls to ascertain its opinions (Geer 1996; Igo 2007). Many U.S. Supreme Court decisions, for example, have cited public opinion polls by Gallup and other firms (Baumgartner et al. 2008). Third, what type of public opinion matters most to public officials? The answer is aggregate, generalized, and dynamic policy preferences (Stimson 1999, 2004). As we saw earlier, collective policy preferences are distinct from other types of aggregate attitudes and liberal-conservative ideological self-placements (Page & Shapiro 1992; Wlezien 2017); they also are more politically influential (Jennings & Wlezien 2015; Lax & Phillips 2012). There is strong
evidence that various public officials, including the President, senators, members of the House of Representatives, and U.S. Supreme Court justices respond to public opinion primarily by attending to trends in the main dimensions of policy mood (Erikson et al. 2002a). However, they pay less attention to public preferences for specific policies, except when those policies are highly salient or are favored by a supermajority of citizens (Burstein 2006; Cohen 1997; Lax & Phillips 2012). Put another way, public officials attempt to lead the public “in particulars by following [it] in general” (Stimson 1999, p. 11). Of course, policy-specific opinions are still important—“ideas and positions that … cannot draw majority support are systematically suppressed by self-interested politicians” (Stimson 1999, p. 19)—but mood trends are the more pressing concern. Consequently, they may ignore majority or even supermajority policy preferences when they are inconsistent with trends in public mood, but attend closely to them when they align with broader mood movements. Because policy popularity is secondary to mood trends (Stimson 2004), public officials tend to implement policies in consistent sets, rather than mix and match based on the public’s policy-specific preferences (Lax & Phillips 2012). As we will see, this form of representation, responsiveness irrespective of congruence, often results in a “democratic deficit” whereby the public fails to get the specific policies it wants despite (or because of) reforms occurring primarily in the desired direction (Lax & Phillips 2012; Shapiro 2011). The criminal justice implication is that high levels of popular support for specific nonpunitive policies (e.g., rehabilitation, early intervention) may fall on deaf ears when the public’s mood is trending toward greater punitiveness, because public officials will concentrate primarily on responding to the broader trend with harsher policies and practices. 
So even if the public is pragmatic—
simultaneously supporting punitive as well as nonpunitive policies (Cullen et al. 2000, but see Pickett & Baker 2014)—policymakers may not be.
Congruence Versus Responsiveness Corresponding to the “majority rule model” (Norrander 2000, p. 774) or “majoritarian approach” (Weissberg 1976, p. 83) to democratic representation, congruence necessitates that policy is calibrated precisely to (or equals) majority opinion (Lax & Phillips 2012; Wlezien 2017). One problem for the rational public official is that excepting supermajority preferences, majority opinion “cannot be trusted to communicate underlying signals from informed opinion” (Althaus 2004, p. 54). Because noise balances rather than cancels out when aggregated, it still influences the distribution and central tendency of aggregate opinion, and if the ratio of noise to signal is large, majority opinion can fail to convey the signal accurately (Althaus 2004). Consequently, “static [majority] opinion is often toothless,” reducing the political utility of majoritarian congruence (Stimson 2004, p. xvii). It is unsurprising, then, that across all issues, policies are congruent with majority opinion only about half of the time (Lax & Phillips 2012; Monroe 1998). For criminal justice policies, Lax & Phillips (2012) found a congruence rate of 45% and pronounced punitive bias, such that a very high level of public support was required before nonpunitive policies were adopted. A second form of democratic representation is covariational or dynamic (Stimson et al. 1995; Weissberg 1976), and rests on responsiveness rather than congruence. Responsiveness requires only that aggregate policy preferences positively affect policy, irrespective of whether the two actually match (Lax & Phillips 2012; Wlezien 2017). We will consider criminal justice findings shortly. For now, it suffices to say that there is strong evidence of dynamic
representation (Brooks & Manza 2007; Page & Shapiro 1983; Soroka & Wlezien 2010), especially in response to movements in public mood (Erikson et al. 2002a). As we have seen, static majority opinion can be impotent, especially when it lacks supermajority standing. Changes in aggregate policy preferences, however, even small ones, are politically potent because they signify that something important is going on, that millions of citizens are taking new positions, which happens only when they care and are paying attention (regardless of their baseline information level) (Stimson 2004). The political potency of opinion dynamics stems from the default stability of aggregate policy preferences (Stimson 1999, 2004). Some underlying “motive force” is necessary to set aggregate policy preferences in motion, and thus opinion movement “becomes a filter through which passes only meaningful responses” (Stimson 2004, p. 166). For this reason, when change occurs, rational public officials take notice (Erikson et al. 2002a).
Mechanisms for Criminal Justice Representation
Politicians. Public opinion may shape criminal justice policy by influencing politicians’ support for different policies and practices, such as sentencing laws and budgetary appropriations (Enns 2016). Theoretically, this might occur consistent with either (or both) a retrospective model of political responsiveness, in which public opinion effects are mediated by voting-booth sanctions and electoral turnover, or a prospective model, where the mediator is rational anticipation by politicians currently in office (Manza & Cook 2002). Under the prospective model, public opinion (or mood) provides an asymmetrical and moving “zone of acquiescence,” or range of acceptable policies, and rational politicians—a “species … [whose] preference sensitivity is … finely honed”—anticipate its trajectory and stay (or lead) within it (Stimson
1999, pp. 21-23). Stimson (1999, 2004) argues that the prospective model explains most public opinion effects on policy. Legal Actors. Most states hold judicial elections for their high (or Supreme), appellate and trial courts (Huber & Gordon 2004; Brace & Boyea 2008). Chief prosecutors—variously referred to as state’s attorneys, county attorneys, district attorneys, prosecuting attorneys, and solicitors—are elected in all states, excepting Alaska, Connecticut, and New Jersey, but even in these states they are still appointed by an elected attorney general (Wright 2009). One recent analysis identified 2,437 elected state and local prosecutors in the United States (Reflective Democracy Campaign 2015). Similarly, police executives often are elected (sheriffs) or appointed (police chiefs) by elected officials, such as mayors (Falcone & Wells 1995). As with politicians, then, electoral turnover and rational anticipation may both increase responsiveness to public opinion among elected state Supreme Court justices, appellate and trial judges, prosecutors, and police executives (Aspin & Hall 1994; Canes-Wrone et al. 2014; Enns 2016). Elected legal actors are likely to be especially concerned about the potential for “fire-alarm oversight” (McCubbins & Schwartz 1984, p. 166), where an otherwise disengaged but punitive public becomes outraged at publicized instances of offenders getting off easy (Huber & Gordon 2004). When the opinion climate is punitive, for example, “the impression that a judge is soft on crime can have great electoral impact” (Baum 2003, p. 35). The situation is likely to be different, however, when punitive mood is on the decline, as it is now. “In the past few years,” Sklansky (2018, p. 459) explained, “incumbent district attorneys have been defeated in several high-visibility races around the country, often by challengers promising a rollback of ‘tough on crime’ policies, greater scrutiny of the police, or both.”
Importantly, even nonelected legal actors may be responsive to public opinion because of concerns about the maintenance of institutional legitimacy (Casillas et al. 2011). Studies analyzing the U.S. Supreme Court—where the justices have lifetime tenure—have shown that decisions in both criminal procedure and other types of cases are responsive to the first dimension of public policy mood (liberalism) (Casillas et al. 2011; Erikson et al. 2002a; McGuire & Stimson 2004). Owens & Wohlfarth (2017) have likewise found that federal circuit court judges follow circuit-level policy mood (liberalism), despite having life tenure. Additionally, by influencing expected trial outcomes, public opinion may affect how prosecutors and defense attorneys present their cases to juries, which information cues they use, and whether prosecutors choose to pursue the death penalty (Baumgartner et al. 2008). Direct Legislation. Finally, public opinion may affect criminal justice through ballot propositions (Enns 2016; Simon 2007), either in the form of initiatives (direct or indirect) or referendums (popular or submitted) (Gerber 1999). The first three-strikes law, for instance, was passed via citizen initiative in Washington in 1993 (Karch & Cravens 2014). Similarly, voters in California approved a three-strikes law in 1994, and then approved changes to the law in 2012, making it somewhat less punitive (Karch & Cravens 2014). Another example is voters in several states voting to preserve the death penalty (Simon 2007). Gerber (1999) found that ballot propositions on morality policy or corrections had especially high passage rates. In the 24 states that allow citizen initiatives, their threat may also increase the responsiveness of legislators and elected legal actors to public opinion (Gerber 1999; Lewis et al. 2014).
RESEARCH ON CRIMINAL JUSTICE RESPONSIVENESS
Many studies have examined whether aggregate MIP responses or political ideology affect criminal justice outcomes (Campbell et al. 2015; Jacobs & Carmichael 2001; Nicholson-Crotty & Meier 2003; Smith 2004; Yates & Fording 2005). For example, Spelman (2009, p. 29) analyzed the effects of citizen ideology on state prison population growth from 1977 to 2005 and found that “public opinion … had little effect on the number of prisoners.” As we have seen, however, such measures are very poor proxies for policy preferences (Jennings & Wlezien 2015; Wlezien 2017). Indeed, Lax & Phillips (2012, p. 160) found that “the average effect of policy-specific opinion [on policy] is over twice that of diffuse voter ideology.” Mood trends may have even larger effects (Erikson et al. 2002a). So, our focus here will be restricted to research examining either policy-specific preferences or punitive mood. Although time-series analyses are superior to cross-sectional studies for testing the causal effect of aggregate opinion on policy (Erikson et al. 2002a; Manza & Cook 2002), we will explore findings from both types of research.
Effects on Capital Punishment Policy and Use Studies using diverse research designs and analyzing different outcomes have provided strong evidence that capital punishment policy and use in the United States are both responsive to aggregate death penalty support.2 Gerber (1999), for example, documented that states with citizens who were more supportive of the death penalty were more likely to have the policy,
2 The one exception, as far as I am aware, is a study by Brace and colleagues (2002). They disaggregated General Social Survey data to the state level and found a positive but nonsignificant bivariate correlation between death penalty support and the number of death row inmates (per 1000). The association became significant but negative after controlling for state-level ideology.
especially when they also had an initiative process. Several other studies have likewise found that states with higher death penalty support are more likely to have the policy (Erikson 1976; Lax & Phillips 2012). Mooney & Lee’s (2000) event history analysis revealed that states with higher death penalty support (in 1936) were less likely to pass anti-death penalty reforms before Furman v. Georgia (1956-1971); for pro-death penalty reforms after the decision (1972-1982), the relationship was positive but nonsignificant. Norrander (2000) estimated a path model and found that past levels of death penalty support (in 1936) had positive indirect effects on states’ death penalty sentencing rates between 1993 and 1995 via past policy and current opinion. Jacobs & Kent’s (2007) national time-series analysis showed that death penalty support had a positive effect on annual executions between 1951 and 1998, which was delayed (ten-year lag) rather than immediate (one-year lag). Baumgartner and colleagues (2008) found a positive effect of death penalty support on changes in the annual number of U.S. death sentences from 1961 to 2005; the estimated long-term effect of a one-point increase in public support was seven additional death sentences (p. 209).
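Long-term estimates of this kind come from error correction specifications, in which short-run changes in the outcome adjust toward a long-run equilibrium set by public support. A toy version (Python, simulated data; the long-run coefficient of 7 is chosen merely to echo Baumgartner and colleagues’ estimate, not taken from their data) shows how the long-run effect is recovered from the estimated coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy error correction model (hypothetical data): "death sentences" y adjust
# toward a long-run equilibrium set by "public support" x. The long-run
# effect of a permanent one-point shift in x is -b1 / rho.
n = 200
x = np.cumsum(rng.normal(0, 0.3, n)) + 60  # slowly drifting support series
true_longrun = 7.0                         # hypothetical long-run effect
y = np.zeros(n)
for t in range(1, n):
    # y closes 25% of the gap to its equilibrium (7 * x) each period
    y[t] = y[t - 1] + 0.25 * (true_longrun * x[t - 1] - y[t - 1]) + rng.normal()

# Estimate: dy_t = a + b0*dx_t + rho*y_{t-1} + b1*x_{t-1}
dy, dx = np.diff(y), np.diff(x)
X = np.column_stack([np.ones(n - 1), dx, y[:-1], x[:-1]])
a, b0, rho, b1 = np.linalg.lstsq(X, dy, rcond=None)[0]

longrun_effect = -b1 / rho
print(round(longrun_effect, 1))
```

A negative estimated rho indicates error correction (the series returns to equilibrium), and the ratio -b1/rho recovers the long-run multiplier that statements like “one point of support yields seven death sentences” summarize.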
Effects on Court Decision-Making Extant evidence indicates that public punitiveness affects the decisions of many different court actors in criminal cases. Brace & Boyea (2008) showed that in states where Supreme Court justices are elected, higher state-level support for the death penalty reduces justices’ willingness to reverse death sentences, both directly (controlling for case characteristics and justices’ ideology) and indirectly (through court ideological composition). That is, justices are responsive to public opinion because of rational anticipation as well as electoral turnover. Public opinion, the authors concluded, “could literally mean the difference between life and death” (p. 370).
Canes-Wrone and colleagues (2014) analyzed 12,777 individual votes by state Supreme Court justices on 2,078 capital cases between 1980 and 2006. They found that although the decisions of justices with nonpartisan elections were most congruent with majority death penalty support, there was dynamic responsiveness among justices facing either partisan elections or reappointment by elected politicians. These justices were highly responsive to increases in state-level support for the death penalty, becoming substantially more likely to uphold death sentences. Baumer & Martin (2013) examined murder cases adjudicated in 27 large urban counties. They found that controlling for case, victim, and defendant characteristics, higher local support for the death penalty increased the likelihood of prosecution as well as jury conviction, although it did not influence sentence length. Aggregate policy preferences affect court processing and outcomes in non-capital cases as well. Cook (1977), for example, found that between 1967 and 1975, both prosecutorial decisions (case dismissals) and judicial policy (sentence severity) for draft offenders were responsive to growing public opposition to the Vietnam War. Kuklinski & Stanga (1976) showed that sentences for marijuana possession in California superior courts were responsive to public preferences about marijuana penalties, as measured with ballot votes on a drug policy initiative. Using similar measures of policy preferences, more recent studies have documented that prosecutors and trial judges in Colorado both changed how they handled drug cases (dismissals and sentences) in response to public opinion (Boyd & Nelson 2017; Nelson 2014). Fernandez & Bowman (2004) examined the effects of aggregate policy preferences, as measured with votes on crime-related ballot initiatives, on court sentences in Ohio and Washington. 
Although the results were mixed, the weight of the evidence indicated that higher public punitiveness reduced courts’ use of drug sentencing alternatives.
Effects on Imprisonment and Criminal Justice Expenditures Aggregate support for specific criminal justice policies appears to have little effect on incarceration rates or crime spending (Schneider 2006; Soroka & Wlezien 2010; Wlezien 2004; but see Rhodes 1990). However, generalized punitiveness drives both policy outcomes, mainly through rational anticipation by public officials rather than electoral turnover, and mediates the effects of crime rates. This was first documented by Nicholson-Crotty et al. (2009). Analyzing changes in federal criminal justice policy (litigative budget, charges filed, and prison commitments) from 1951 to 2004, they found that movements in the second dimension of public mood (punitiveness) had a sizable positive effect, independent of the first dimension (liberalism), the party in office, as well as other relevant factors, and mediated the effect of crime rates. Enns (2014, 2016) also found a strong relationship (r = .82) between his more refined measure of punitive mood and changes in the overall incarceration rate (1953-2010), which remained sizable after controlling for the party in office and other relevant factors, and mediated the effect of crime rates. According to his estimates, had punitive mood merely leveled off, much less waned, two decades before it actually started declining (in the 1970s instead of 1990s), the United States would have had about “20 percent fewer incarcerations … [or] 185,000 fewer individuals behind bars each year” (p. 16, emphasis in original). To my knowledge, there is one international analysis of the effects of punitive mood. Jennings et al.’s (2017) time-series analysis showed that the British public’s punitive mood had a substantial positive effect on changes in the incarceration rate in Britain between 1980 and 2013. “As crime rose steadily through the 1980s and much of the 1990s in Britain,” Jennings and colleagues (2017: 478) concluded, “public anxiety and demand for penal policies that were tough
on crime grew,” and this “rising tide of punitive opinion was an important factor in the increased use of incarceration as a policy response.”

There is also one relevant subnational analysis. In addition to his national-level time series (discussed above), Enns (2016) developed a dynamic state-level measure of punitive mood using multilevel regression and poststratification (Park et al. 2004) along with the DRA. He used it to predict changes in state incarceration rates (1953-2010) and correctional expenditures (1952-2012). In contrast to Spelman’s (2009) results for citizen ideology, he found that state-level punitive mood had substantial positive effects on both policy outcomes, independent of the party in power, state GDP, and other relevant factors. Enns (2016, p. 149) concluded that state-level punitive mood and crime rates together “can account for most of the variation in state incarceration rates and the proportion of state budgets devoted to corrections.”
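For readers unfamiliar with multilevel regression and poststratification, the poststratification step can be sketched as follows (the notation here is illustrative, not Enns's own): a multilevel model first yields an estimated probability of punitive support for each demographic-by-state cell, and the state-level estimate is the census-weighted average of those cell estimates,

```latex
\hat{\theta}_{s} \;=\; \frac{\sum_{c \in s} N_{c}\,\hat{\pi}_{c}}{\sum_{c \in s} N_{c}},
```

where $\hat{\pi}_{c}$ is the model-based estimate of punitive support in cell $c$, $N_{c}$ is that cell's census population count, and the sums run over all cells within state $s$.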
CONCLUSION
Now that we have seen that public opinion moves in response to crime rates, and that criminal justice policy and practice both respond to opinion movements, we arrive at several new questions. First, what are the implications for criminological research on public policy? Although the body of studies in our field examining issues like sentencing and incarceration rates is large and rich (Baumer 2013; Campbell 2018; Travis et al. 2014), the number including any measure of aggregate support for punitive policies can be counted on one hand. Twenty years ago, Burstein (1998) appealed to “sociologists to bring the public back in to their studies of public policy” (p. 51), warning “there is reason to believe that adding public opinion to [their] empirical analyses of policy change would undermine some of their conclusions about the influence of
other factors” (p. 27; see also Burstein & Linton 2002; Manza & Brooks 2012). Given the evidence we have just reviewed, his appeal and warning apply to our field too.

Second, has the responsiveness of criminal justice policy and practice to public opinion changed over time? Will it change in the future? There is some evidence in other fields suggesting that public officials’ responsiveness to public opinion has varied across time periods and may have declined in recent decades (Ansolabehere et al. 2001; Jacobs & Shapiro 2000). In the current context of increasing political polarization (Iyengar & Westwood 2015), changes in responsiveness to public mood seem especially likely. As Erikson et al. (2002b, p. 81) explained, “where policy is polarized, there is little nuance, and the signal of a bundled public opinion, such as mood, should be all the stronger.” Therefore, one priority area for future work should be to test whether the effects of public opinion on policy-relevant outcomes like execution rates, court decision-making, and prison admissions vary over time.

Another important question is whether crime rates will continue to have a positive effect on public punitiveness in today’s changing media environment. Enns (2016) and Miller (2016) both showed that despite persistent biases in how the traditional media cover crime—focusing disproportionately on violent crimes and offenses committed by blacks, and ignoring context (Gilliam & Iyengar 2000)—the amount of coverage closely follows the crime rate. As a consequence, people receive a greater number of punitiveness-inducing media frames as crime increases. This, Enns (2016) argued, explains why higher crime increases punitive attitudes (rather than increasing support for early intervention or rehabilitation). He noted, however, that this relationship may change now that “large segments of the public … get their news from comedy shows, podcasts, Twitter, and Facebook” (p. 164).
Indeed, cross-sectional studies (here we go again) suggest that whereas traditional news exposure (e.g., local TV news) is associated
with greater punitiveness, Internet news consumption is either unrelated or negatively related to punitive attitudes (Roche et al. 2016). Another priority area for future research, then, is to examine whether the relationships among crime rates, media coverage, and aggregate punitiveness change as more people get their news from new media sources.

Finally, does the public recognize when criminal justice policy or practice changes in response to its opinion movements? Wlezien’s (1995) thermostatic model of public policy preferences, which is strongly supported in other policy domains (Erikson et al. 2002a; Stimson 2004), suggests that public opinion should be responsive to policy outputs (spending, incarceration), and not just policy outcomes (crime rates). That is to say, there should be “negative feedback of policy on preferences” (Soroka & Wlezien 2010, p. 25). The evidence to date is mixed, however, as to whether this occurs in the case of criminal justice. Jacobs and Kent (2007) found a negative effect of past executions on death penalty support, but Norrander (2000) found that execution rates had no effect. Soroka and Wlezien (2010) showed that government spending on crime reduced aggregate support for additional crime spending. Other studies, however, have found null effects of past policy levels and incarceration rates on punitive mood (Enns 2014; Nicholson-Crotty et al. 2009; Ramirez 2013b). Noting the public’s apparent lack of responsiveness to mass incarceration, Sharp (1999, p. 62) questioned whether punitive mood is a “broken thermostat” or just has “a very long lag cycle.” We know now that public punitiveness started falling after the beginning of the 1990s crime drop. However, it remains unclear whether punitiveness also responds thermostatically to policy outputs. And if it does not, why? Does the media provide less useful information about policy reforms in criminal justice than in other areas (Beckett & Sasson 2004; Pickett et al. 2015)?
Alternatively, does it take longer for members of the public to accumulate policy-relevant information from personal and vicarious experiences with the criminal justice system (see, e.g., Davila et al. 2011; Lageson et al. 2018)? Future theoretical and empirical work is needed that considers these and other possibilities.
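The thermostatic logic invoked above can be summarized in a stylized pair of equations (the notation is illustrative, not the estimating equations of the studies reviewed):

```latex
R_{t} = P_{t}^{*} - P_{t}, \qquad \Delta P_{t+1} = \alpha + \gamma R_{t} + \varepsilon_{t}, \quad \gamma > 0,
```

where $R_{t}$ is the public's relative preference for “more” policy at time $t$, $P_{t}^{*}$ is its preferred policy level, and $P_{t}$ is actual policy. Negative feedback follows directly: holding $P_{t}^{*}$ constant, an increase in policy output $P_{t}$ lowers $R_{t}$, which is precisely the thermostatic response that studies of punitive mood and incarceration have had difficulty detecting.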
REFERENCES

Althaus SL. 2003. Collective Preferences in Democratic Politics: Opinion Surveys and the Will of the People. New York: Camb. Univ. Press

Anderson AL, Lytle R, Schwadel P. 2017. Age, period, and cohort effects on death penalty attitudes in the United States, 1974-2014. Criminology 55:833-68

Ansolabehere SD, Snyder JM Jr., Stewart C. 2001. Candidate positioning in U.S. House elections. Am. J. Political Sci. 45:136-59

Applegate BK, Cullen FT, Turner MG, Sundt JL. 1996. Assessing public support for three-strikes-and-you’re-out laws: global versus specific attitudes. Crime Delinq. 42:517-34

Aspin LT, Hall WK. 1994. Retention elections and judicial behavior. Judicature 77:306-15

Baum L. 2003. Judicial elections and judicial independence: the voter’s perspective. Ohio State Law J. 64:13-41

Baumer EP. 2013. Reassessing and redirecting research on race and sentencing. Justice Q. 30:231-61

Baumer EP, Martin KH. 2013. Social organization, collective sentiment, and legal sanctions in murder cases. Am. J. Sociol. 119:131-82

Baumgartner FR, De Boef SL, Boydstun AE. 2008. The Decline of the Death Penalty and the Discovery of Innocence. New York: Camb. Univ. Press

Beckett K. 1997. Making Crime Pay: Law and Order in Contemporary American Politics. New York: Oxf. Univ. Press

Beckett K, Sasson T. 2004. The Politics of Injustice: Crime and Punishment in America. Thousand Oaks, CA: Sage. 2nd ed.
Bishop GF, Tuchfarber AJ, Oldendick RW. 1986. Opinions on fictitious issues: the pressure to answer survey questions. Public Opin. Q. 50:240-50

Blumer H. 1948. Public opinion and public opinion polling. Am. Sociol. Rev. 13:542-49

Bohm RM, Vogel BL. 2004. More than ten years after: the long-term stability of informed death penalty opinions. J. Crim. Justice 32:307-27

Boyd CL, Nelson MJ. 2017. The effects of trial judge gender and public opinion on criminal sentencing decisions. Vanderbilt Law Rev. 70:1819-43

Brace P, Boyea BD. 2008. State public opinion, the death penalty, and the practice of electing judges. Am. J. Political Sci. 52:360-72

Brace P, Sims-Butler K, Arceneaux K, Johnson M. 2002. Public opinion in the American states: new perspectives using national survey data. Am. J. Political Sci. 46:173-89

Brooks C, Manza J. 2007. Why Welfare States Persist: The Importance of Public Opinion in Democracies. Chic., IL: Univ. Chic. Press

Burstein P. 1998. Bringing the public back in: should sociologists consider the impact of public opinion on public policy? Soc. Forces 77:27-62

Burstein P. 2006. Why estimates of the impact of public opinion on public policy are too high: empirical and theoretical implications. Soc. Forces 84:2273-89

Burstein P, Linton A. 2002. The impact of political parties, interest groups, and social movement organizations on public policy. Soc. Forces 81:380-408

Campbell MC. 2018. Varieties of mass incarceration: what we learn from state histories. Annu. Rev. Criminol. 1:219-34
Campbell MC, Vogel M, Williams J. 2015. Historical contingencies and the evolving importance of race, violent crime, and region in explaining mass incarceration in the United States. Criminology 53:180-203

Canes-Wrone B, Clark TS, Kelly JP. 2014. Judicial selection and death penalty decisions. Am. Polit. Sci. Rev. 108:23-39

Casillas CJ, Enns PK, Wohlfarth PC. 2011. How public opinion constrains the U.S. Supreme Court. Am. J. Political Sci. 55:74-88

Chiricos T, Welch K, Gertz M. 2004. Racial typification of crime and support for punitive measures. Criminology 42:359-89

Cohen JE. 1997. Presidential Responsiveness and Public Policymaking. Ann Arbor, MI: Univ. Mich. Press

Converse PE. 1964. The nature of belief systems in mass publics. In Ideology and Discontent, ed. DE Apter, pp. 206-61. New York: Free Press

Converse PE. 1970. Attitudes and nonattitudes: continuation of a dialogue. In The Quantitative Analysis of Social Problems, ed. ER Tufte, pp. 168-89. Reading, MA: Addison-Wesley

Converse PE. 1990. Popular representation and the distribution of information. In Information and Democratic Processes, ed. JA Ferejohn, JH Kuklinski, pp. 369-88. Urbana, IL: Univ. Ill. Press

Converse PE. 2000. Assessing the capacity of mass electorates. Annu. Rev. Polit. Sci. 3:331-53

Cook BB. 1977. Public opinion and federal judicial policy. Am. J. Political Sci. 21:567-600

Craun SW, Kernsmith PD, Butler NK. 2011. “Anything that can be a danger to the public”: desire to extend registries beyond sex offenses. Crim. Justice Policy Rev. 22:375-91
Cullen FT, Fisher BS, Applegate BK. 2000. Public opinion about punishment and corrections. In Crime and Justice, Vol. 27, ed. M Tonry, pp. 1-79. Chic.: Univ. Chic. Press

Davila MA, Hartley DJ, Buckler K, Wilson S. 2011. Personal and vicarious experience with the criminal justice system as a predictor of punitive sentencing. Am. J. Crim. Just. 36:408-20

de Keijser JW. 2014. Penal theory and popular opinion: the deficiencies of direct engagement. In Popular Punishment: On the Normative Significance of Public Opinion, ed. J Ryberg, JV Roberts, pp. 101-18. New York: Oxf. Univ. Press

Drakulich KM, Kirk EM. 2016. Public opinion and criminal justice reform: framing matters. Criminol. Public Policy 15:171-77

Enns PK. 2014. The public’s increasing punitiveness and its influence on mass incarceration in the United States. Am. J. Political Sci. 58:857-72

Enns PK. 2016. Incarceration Nation: How the United States Became the Most Punitive Democracy in the World. New York: Camb. Univ. Press

Enns PK, Kellstedt PM. 2008. Policy mood and political sophistication: why everybody moves mood. Br. J. Polit. Sci. 38:433-54

Erikson RS. 1976. The relationship between public opinion and state policy: a new look based on some forgotten data. Am. J. Political Sci. 20:25-36

Erikson RS, MacKuen MB, Stimson JA. 2002a. The Macro Polity. New York: Camb. Univ. Press

Erikson RS, MacKuen MB, Stimson JA. 2002b. Panderers or shirkers? Politicians and public opinion. In Navigating Public Opinion: Polls, Policy, and the Future of American Democracy, ed. J Manza, FL Cook, BI Page, pp. 76-85. New York: Oxf. Univ. Press
Falcone DN, Wells LE. 1995. The county sheriff as a distinctive policing modality. Am. J. Police 3/4:123-49

Fernandez KE, Bowman T. 2004. Race, political institutions, and criminal justice: an examination of the sentencing of Latino offenders. Columbia Human Rights Law Rev. 36:41-70

Geer JG. 1996. From Tea Leaves to Opinion Polls: A Theory of Democratic Leadership. New York: Columbia Univ. Press

Gerber ER. 1999. The Populist Paradox: Interest Group Influence and the Promise of Direct Legislation. Princeton, NJ: Princeton Univ. Press

Gilliam FD Jr., Iyengar S. 2000. Prime suspects: the influence of local television news on the viewing public. Am. J. Political Sci. 44:560-73

Glynn CJ, Herbst S, O’Keefe GJ, Shapiro RY, Lindeman M. 2004. Public Opinion. Boulder, CO: Westview Press. 2nd ed.

Gottschalk M. 2006. The Prison and the Gallows: The Politics of Mass Incarceration in America. New York: Camb. Univ. Press

Hetey RC, Eberhardt JL. 2014. Racial disparities in incarceration increase acceptance of punitive policies. Psychol. Sci. 25:1949-54

Hough M, Park A. 2002. How malleable are attitudes to crime and punishment? Findings from a British deliberative poll. In Changing Attitudes to Punishment: Public Opinion, Crime and Justice, ed. JV Roberts, M Hough, pp. 163-83. Portland, OR: Willan

Huber GA, Gordon SC. 2004. Accountability and coercion: is justice blind when it runs for office? Am. J. Political Sci. 48:247-63
Igo SE. 2007. The Averaged American: Surveys, Citizens, and the Making of a Mass Public. Camb., MA: Harv. Univ. Press

Indermaur D, Roberts L, Spiranovic C, Mackenzie G, Gelb K. 2012. A matter of judgment: the effect of information and deliberation on public attitudes to punishment. Punishm. Soc. 14:147-65

Iyengar S, Westwood SJ. 2015. Fear and loathing across party lines: new evidence on group polarization. Am. J. Political Sci. 59:690-707

Jacobs D, Carmichael JT. 2001. The politics of punishment across time and space: a pooled time-series analysis of imprisonment rates. Soc. Forces 80:61-91

Jacobs D, Kent SL. 2007. The determinants of executions since 1951: how politics, protests, public opinion, and social divisions shape capital punishment. Soc. Probl. 54:297-318

Jacobs LR, Shapiro RY. 2000. Politicians Don’t Pander: Political Manipulation and the Loss of Democratic Responsiveness. Chic., IL: Univ. Chic. Press

Jennings W, Farrall S, Gray E, Hay C. 2017. Penal populism and the public thermostat: crime, public punitiveness, and public policy. Governance 30:463-81

Jennings W, Wlezien C. 2015. Preferences, problems and representation. Polit. Sci. Res. Methods 3:659-81

Karch A, Cravens M. 2014. Rapid diffusion and policy reform: the adoption and modification of three strikes laws. State Polit. Policy Q. 14:461-91

Kennedy-Kollar D, Mandery EJ. 2010. Testing the Marshall Hypothesis and its antithesis: the effect of biased information on death-penalty opinion. Crim. Justice Stud. 23:65-83

Kleck G, Jackson DB. 2017. Does crime cause punitiveness? Crime Delinq. 63:1572-99
Kleck G, Sever B, Li S, Gertz M. 2005. The missing link in general deterrence research. Criminology 43:623-60

Kuklinski JH, Stanga JE. 1979. Political participation and government responsiveness: the behavior of California superior courts. Am. Polit. Sci. Rev. 73:1090-99

Lageson SE, Denver M, Pickett JT. 2018. Privatizing criminal stigma: experience, intergroup contact, and public views about publicizing arrest records. Punishm. Soc. Published online before print at: http://journals.sagepub.com/doi/full/10.1177/1462474518772040

Landemore H, Elster J, eds. 2012. Collective Wisdom: Principles and Mechanisms. New York: Camb. Univ. Press

Lax JR, Phillips JH. 2012. The democratic deficit in the states. Am. J. Political Sci. 56:148-66

Lewis DC, Wood FS, Jacobsmeier ML. 2014. Public opinion and judicial behavior in direct democracy systems: gay rights in the American states. State Polit. Policy Q. 14:367-88

Lupia A, Matsusaka JG. 2004. Direct democracy: new approaches to old questions. Annu. Rev. Polit. Sci. 7:463-82

Manza J, Brooks C. 2012. How sociology lost public opinion: a genealogy of a missing concept in the study of the political. Sociol. Theory 30:89-113

Manza J, Cook FL. 2002. The impact of public opinion on public policy: the state of the debate. In Navigating Public Opinion: Polls, Policy, and the Future of American Democracy, ed. J Manza, FL Cook, BI Page, pp. 17-32. New York: Oxf. Univ. Press

Mayer WG. 1992. The Changing American Mind: How and Why American Public Opinion Changed Between 1960 and 1988. Ann Arbor, MI: Univ. Mich. Press
McCubbins MD, Schwartz T. 1984. Congressional oversight overlooked: police patrols versus fire alarms. Am. J. Political Sci. 28:165-79

McGann AJ. 2014. Estimating the political center from aggregate data: an item response theory alternative to the Stimson dyad ratios algorithm. Polit. Anal. 22:115-29

McGuire KT, Stimson JA. 2004. The least dangerous branch revisited: new evidence on Supreme Court responsiveness to public preferences. J. Polit. 66:1018-35

Mears DP, Mancini C, Beaver KM, Gertz M. 2013. Housing for the “worst of the worst” inmates: public support for supermax prisons. Crime Delinq. 59:587-615

Miller LL. 2016. The Myth of Mob Rule: Violent Crime and Democratic Politics. New York: Oxf. Univ. Press

Miller NR. 1986. Information, electorates, and democracy: some extensions and interpretations of the Condorcet jury theorem. In Information Pooling and Group Decision Making, ed. B Grofman, G Owen, pp. 173-92. Greenwich, CT: JAI Press

Miller RN, Applegate BK. 2015. Adult crime, adult time? Benchmarking public views on punishing serious juvenile felons. Crim. Justice Rev. 40:151-68

Monroe AD. 1998. Public opinion and public policy, 1980-1993. Public Opin. Q. 62:6-28

Mooney CZ, Lee M. 2000. The influence of values on consensus and contentious morality policy: U.S. death penalty reform, 1956-82. J. Polit. 62:223-39

Murr AE. 2011. “Wisdom of crowds”? A decentralized election forecasting model that uses citizens’ local expectations. Elect. Stud. 30:771-83

Nelson MJ. 2014. Responsive justice? Retention elections, prosecutors, and public opinion. J. Law Courts 2:117-52
Nicholson-Crotty S, Meier KJ. 2003. Crime and punishment: the politics of federal criminal justice sanctions. Polit. Res. Q. 56:119-26

Nicholson-Crotty S, Peterson DAM, Ramirez MD. 2009. Dynamic representation(s): federal criminal justice policy and an alternative dimension of public mood. Polit. Behav. 31:629-55

Norrander B. 2000. The multi-layered impact of public opinion on capital punishment implementation in the American states. Polit. Res. Q. 53:771-93

Owens RJ, Wohlfarth PC. 2017. Public mood, previous electoral experience, and responsiveness among federal circuit court judges. Am. Politics Res. 45:1003-31

Page BI, Shapiro RY. 1983. Effects of public opinion on policy. Am. Polit. Sci. Rev. 77:175-90

Page BI, Shapiro RY. 1992. The Rational Public: Fifty Years of Trends in Americans’ Policy Preferences. Chic., IL: Univ. Chic. Press

Park DK, Gelman A, Bafumi J. 2004. Bayesian multilevel estimation with poststratification: state-level estimates from national polls. Polit. Anal. 12:375-85

Paternoster R, Deise J. 2011. A heavy thumb on the scale: the effect of victim impact evidence on capital decision making. Criminology 49:129-61

Peffley M, Hurwitz J. 2007. Persuasion and resistance: race and the death penalty in America. Am. J. Political Sci. 51:996-1012

Pickett JT, Baker T. 2014. The pragmatic American: empirical reality or methodological artifact? Criminology 52:195-222

Pickett JT, Mancini C, Mears DP, Gertz M. 2015. Public (mis)understanding of crime policy: the effects of criminal justice experience and media reliance. Crim. Justice Policy Rev. 26:500-22
Quillian L, Pager D. 2010. Estimating risk: stereotype amplification and the perceived risk of criminal victimization. Soc. Psychol. Q. 73:79-104

Ramirez MD. 2013a. The polls—trends: Americans’ changing views on crime and punishment. Public Opin. Q. 77:1006-31

Ramirez MD. 2013b. Punitive sentiment. Criminology 51:329-64

Reflective Democracy Campaign. 2015. Justice for All? https://wholeads.us/justice/wp-content/themes/phase2/pdf/key-findings.pdf

Rhodes SL. 1990. Democratic justice: the responsiveness of prison population size to public policy preferences. Am. Polit. Q. 18:337-85

Roberts JV, Hough M. 2005. Understanding Public Attitudes to Criminal Justice. New York: Open Univ. Press

Roberts JV, Hough M, Jackson J, Gerber MM. 2012. Public opinion towards the lay magistracy and the Sentencing Council guidelines: the effects of information on attitudes. Br. J. Criminol. 52:1072-91

Roberts JV, Stalans LJ. 1997. Public Opinion, Crime, and Criminal Justice. Boulder, CO: Westview Press

Roberts JV, Stalans LJ, Indermaur D, Hough M. 2003. Penal Populism and Public Opinion: Lessons From Five Countries. New York: Oxf. Univ. Press

Robinson PH. 2013. Intuitions of Justice and the Utility of Desert. New York: Oxf. Univ. Press

Roche SP, Pickett JT, Gertz M. 2016. The scary world of online news? Internet news exposure and public attitudes toward crime and justice. J. Quant. Criminol. 32:215-36
Rossi PH, Berk RA. 1997. Just Punishments: Federal Guidelines and Public Views Compared. New York: Aldine de Gruyter

Sandys M, McGarrell EF. 1996. Attitudes toward capital punishment: preference for the penalty or mere acceptance? J. Res. Crime Delinq. 32:191-213

Schneider AL. 2006. Patterns of change in the use of imprisonment in the American states: an integration of path dependence, punctuated equilibrium and policy design approaches. Polit. Res. Q. 59:457-70

Shapiro RY. 2011. Public opinion and American democracy. Public Opin. Q. 75:982-1017

Sharp EB. 1999. The Sometime Connection: Public Opinion and Social Policy. Albany, NY: State Univ. of New York Press

Shirley KE, Gelman A. 2015. Hierarchical models for estimating state and demographic trends in US death penalty public opinion. J. R. Statist. Soc. A 178:1-28

Simon J. 2007. Governing Through Crime: How the War on Crime Transformed American Democracy and Created a Culture of Fear. New York: Oxf. Univ. Press

Sklansky DA. 2018. The problems with prosecutors. Annu. Rev. Criminol. 1:451-69

Smith KB. 2004. The politics of punishment: evaluating political explanations of incarceration rates. J. Polit. 66:925-38

Soroka SN, Wlezien C. 2010. Degrees of Democracy: Politics, Public Opinion, and Policy. New York: Camb. Univ. Press

Spelman W. 2009. Crime, cash, and limited options: explaining the prison boom. Criminol. Public Policy 8:29-77

Steinberg L, Piquero AR. 2010. Manipulating public opinion about trying juveniles as adults: an experimental study. Crime Delinq. 56:487-506
Stimson JA. 1999. Public Opinion in America: Moods, Cycles, and Swings. Boulder, CO: Westview Press. 2nd ed.

Stimson JA. 2004. Tides of Consent: How Public Opinion Shapes American Politics. New York: Camb. Univ. Press

Stimson JA. 2017. The Dyad Ratios Algorithm for Estimating Latent Public Opinion: Estimation, Testing, and Comparison to Other Approaches. Work. Pap., Dep. Polit. Sci., Univ. North Carolina, Chapel Hill

Stimson JA, MacKuen MB, Erikson RS. 1995. Dynamic representation. Am. Polit. Sci. Rev. 89:543-65

Thielo AJ, Cullen FT, Cohen DM, Chouhy C. 2016. Rehabilitation in a red state: public support for correctional reform in Texas. Criminol. Public Policy 15:137-70

Tonry M. 1995. Malign Neglect: Race, Crime, and Punishment in America. New York: Oxf. Univ. Press

Travis J, Western B, Redburn S, eds. 2014. The Growth of Incarceration in the United States: Exploring Causes and Consequences. Washington, DC: Natl. Acad. Press

Weissberg R. 1976. Public Opinion and Popular Government. Englewood Cliffs, NJ: Prentice-Hall

Wlezien C. 1995. The public as thermostat: dynamics of preferences for spending. Am. J. Political Sci. 39:981-1000

Wlezien C. 2004. Patterns of representation: dynamics of public preferences and policy. J. Polit. 66:1-24

Wlezien C. 2005. On the salience of political issues: the problem with ‘most important problem.’ Elect. Stud. 24:555-79
Wlezien C. 2017. Public opinion and policy representation: on conceptualization, measurement, and interpretation. Policy Stud. J. 45:561-82

Wright RF. 2009. How prosecutor elections fail us. Ohio State J. Crim. Law 6:581-610

Yates J, Fording R. 2005. Politics and state punitiveness in black and white. J. Polit. 67:1099-121

Zaller J. 1992. The Nature and Origins of Mass Opinion. New York: Camb. Univ. Press

Zaller J, Feldman S. 1992. A simple theory of the survey response: answering questions versus revealing preferences. Am. J. Political Sci. 36:579-616

Zimring FE, Johnson DT. 2006. Public opinion and the governance of punishment in democratic political systems. Ann. Am. Acad. Pol. Soc. Sci. 605:266-80