A brief introduction to QCA

Author's pre-print version. Publication: Verhoeven, M. (2016). A brief introduction to QCA. In M. B. von Rimscha, S. Studer, & M. Puppis (Eds.), Methodische Zugänge zur Erforschung von Medienstrukturen, Medienorganisationen und Medienstrategien (1st ed., pp. 175–196). Baden-Baden: Nomos.

Abstract

Qualitative Comparative Analysis (QCA) is an analytic approach that aims to bridge the gap between qualitative and quantitative research. The method is based on case orientation, set theory, and Boolean techniques. QCA facilitates the analysis of causal complexity in an iterative way: it allows for multiple conjunctural causation, equifinality, and asymmetry. QCA is conceived as a comprehensive research approach. It begins with a study of theoretical and empirical works and with concept and argument building, followed by an examination of the cases to be sampled and the variables to be investigated. A central feature of QCA is the calibration of raw data scores into values usable in QCA. The main variations are crisp-set and fuzzy-set QCA. Calibration is guided by the research targets and is based on external, theory-driven criteria. QCA enables the investigation of necessary and sufficient conditions for an outcome of interest. The analysis of necessity investigates whether an investigated condition (independent variable) is a superset of the outcome (dependent variable): whenever the outcome occurs, the condition is present as well. The analysis of sufficiency is based on truth tables, i.e., data matrices whose rows order all possible combinations of conditions and the respective outcome. A sufficient condition leads to the outcome, but other conditions may lead to the outcome as well. Various robustness procedures are available to validate the results. Inspection of systematically selected cases to substantiate the findings completes a QCA. Despite pitfalls such as a heavy workload and limitations in the number of conditions and cases, QCA is a valuable tool in the pursuit of insights into complicated relations between concepts in the field of media and communication studies.

Key words: analysis approach, causal complexity, necessary and sufficient conditions for outcomes, set theory, Boolean techniques

1. Introduction

Qualitative Comparative Analysis (QCA) is an analytic approach initially developed by Charles C. Ragin. During the last decades QCA has proliferated to a considerable extent, specifically in political science and sociology. The analytic approach distinguishes itself from other methods: it allows analysis of causal complexity, multiple conjunctural causation, equifinality, and asymmetry, and it can render valid results using smaller samples than other methods. In this article QCA is briefly introduced and described. The information is foremost intended for readers who have no, or very little, knowledge of QCA. In the first section the history and origins of QCA are briefly sketched. In the next sections, the central elements of QCA are compared to possibly more familiar statistical analysis methods, and the different process steps in QCA are introduced. A field report illustrates the process sequences in section 5. The subsequent sections discuss the proliferation of QCA, the criticism, and the recent improvements of the method. The article concludes with an assessment of the merits of the method and an advisory note on when best to use QCA.

2. The development of QCA

QCA tries to "both bridge and transcend the qualitative-quantitative divide in social research" (Ragin, 2014, p. xix). Frustrated by the limitations of conventional quantitative methods, Ragin began developing a method that integrates within-case as well as cross-case analysis. Ragin resorted to Boolean algebra, set theory, and switching circuits in an attempt to validate how different conditions combine to generate an effect of interest. In 1987, Ragin introduced QCA in The Comparative Method. Looking back in the 2014 edition, the author (2014, p. xxi) distinguishes four key elements:

A) Cases are treated as configurations of conditions. Effects of variables are assessed in the context of the case, to link configurations of relevant conditions to outcomes.

B) Similarities and differences on causal conditions can be explored across cases by displaying data in a matrix of possible configurations of conditions: the truth table (note 1).

C) To link theory with empirical evidence, explanations are reached in an iterative way.
Contradictions, instances where the truth table reveals cases with the same configurations of conditions showing divergent outcomes, have to be resolved to come to any explanatory model. Ideally this is done via inclusion of additional conditions, in practice also by (theory-guided, case-knowledge-based, substantiated) exclusion of cases.

D) Causal complexity and multiple conjunctural causation are facilitated: combinations of conditions generate an outcome, different combinations can produce the same outcome, other combinations of conditions generate the negation of an outcome, and the impact of a condition depends on the context.

Note 1: A truth table is a table that contains the empirical evidence. Cases are sorted into one of the logically possible combinations of conditions. Combinations of conditions (displayed in the rows) that produce the outcome can be understood as sufficient.

3. Research design elements: comparison of 'conventional' methods versus QCA

A comparison of key elements of research designs between conventional quantitative methods, "from multiple regression to factor analysis to structural equation models" (Ragin, 2014, p. xxvi), and QCA sheds more light on the latter method.

Table: Comparison of Research Design Elements

'Conventional' Methods    QCA
Variables                 Sets
Measurement               Calibration
Dependent Variables       Qualitative Outcomes
'Given' Populations       'Constructed' Populations
Correlations              Set-theoretic Relations
Correlation Matrices      Truth Tables
Limited Diversity         Logical Remainders
Various Parameters        Consistency, Coverage
Net Effects               Causal Recipes

Figure adapted from Ragin (2014, p. xxiii)

Variables vs. sets: The use of variables in conventional methods is mirrored by the deployment of sets in QCA. Variables sort cases in relation to each other (the cases' degree of X) and to descriptive values like means. The use of sets in QCA is more case-oriented: cases with a sufficient degree of X belong to a (condition or outcome) set. Sets are assembled and defined. A set classifies cases and requires criteria for a case's membership in that set. Cases are grouped in the sets, and the relevant cases in a set can be specified. In 'crisp' sets, cases are either a member of a set with a condition or an outcome, or they are not. In 'fuzzy' sets, cases vary in the degree to which they fulfil the membership criteria of a set. "The assignment of set membership scores follows directly from the definition and labeling of the set in question" (Ragin, 2014, p. xxiv).

Measurement vs. calibration: Measurement in conventional methods is mirrored by calibration in QCA. Measurement rests upon indicators that rank cases and reflect the investigated concept. Scores are relative, and interpretations are based on "sample-specific statistics" (Ragin, 2014, p. xxiv), like means and standard deviations. Variables in conventional methods are (usually) not calibrated. In QCA, the reference to external standards (derived from theory) enables the calibration of measures. "How a set is conceptualized and labeled is crucially important to its calibration" (Ragin, 2014, p. xxiv).

Dependent variable vs. outcome: The dependent variable in conventional methods is mirrored by a qualitative outcome in QCA. The goal of most conventional research is to explain variation in the selected dependent variable, with different theories offering varying explanations. In QCA, after in-depth study of instances, investigators formulate a qualitative outcome: a state of affairs that signifies the full extent of an outcome.
In addition, researchers develop (externally derived) criteria for establishing the extent to which cases have membership in the outcome.

Given vs. constructed population: The 'given' population in conventional methods is mirrored by the 'constructed' population in QCA. Conventional methods often use (and validate) given or convenient populations, or samples of interest to contractors, sponsors, etc. In QCA, identification of good instances of the outcome is the ideal scenario. A population of candidates for the outcome consists of positive as well as negative relevant cases. The latter are designated once their candidature for the investigated effect has been recognized. Constructed populations thus avoid, as Ragin (2014, p. xxvi) states, the distorting effects that irrelevant cases exert on correlations in conventional methods.

Correlations vs. set relations: Correlations in conventional methods are mirrored by set relations in QCA. Conventional methods rest upon matrices of bivariate correlations, and correlation coefficients are symmetric. Ragin infers (2014, p. xxvii) that set-theoretic relationships are thus overlooked: "(…) the assessment of both sufficiency (shared outcomes) and necessity (shared antecedents) is fundamentally set theoretic and asymmetric".

Correlation matrices vs. truth tables: Correlation matrices in conventional methods are mirrored in QCA by truth tables. Bivariate correlations are analyzed, and the matching of series of values is assessed. "There is no direct consideration of how case aspects combine (…) or the consequences of these combinations" (Ragin, 2014, p. xxvii). In QCA, truth tables show combinations of causal conditions, and groups of cases with identical combinations are assessed with regard to agreement on the outcome.

Limited diversity and logical remainders: The presence of logical remainders reflects the limited diversity of data, a problem common to almost all social research. Logical remainders are combinations of conditions in truth table rows without (enough) cases with membership in the row. QCA's standard analysis identifies logical remainders in order to render plausible solution terms.

Parameters of fit, consistency and coverage: QCA calculates parameters of fit: consistency and coverage. In the assessment of sufficiency, consistency is calculated for conditions and for paths, i.e., combinatorial solutions, combinations of conditions. The parameter expresses the extent of the subset relation, to assess whether or not a configuration in a truth table row is a sufficient condition. The same parameters are also available for necessity: consistency expresses how far the outcome is a subset of the condition, and coverage expresses the relevance of the necessary condition; low coverage indicates triviality.

Net effects vs. causal recipes: Net effects in conventional methods are mirrored in QCA by causal recipes. Theory testing in conventional methods is in essence a competition between causes to explain effects of interest. "Net-effect thinking isolates causal variables from one another and attempts to purify the estimate of each variable's separate effect" (Ragin, 2014, p. xvii). QCA focuses on causal conditions combining to generate outcomes, a "recipe-like understanding of how social causation works" (Ragin, 2014, p. xvii); possibly no single cause is necessary or sufficient.

4. The QCA process sequences

In this section the process sequences in the deployment of QCA are described.

First step: theory, concepts, conditions, and case inspection

QCA can be seen purely as a data analysis technique or more as a comprehensive approach to social research. Schneider and Wagemann (2012, p. 397) emphasize the latter. The authors (2010) list standards for good practice in QCA before, during, and after the deployment of the computer-based analysis technique. Before deploying the analysis technique, the authors recommend reflecting on the original intentions of QCA: summarize and overview data and assumptions, check subset claims and existing hypotheses, and develop theoretical arguments. To substantiate results, QCA should be combined with other analysis techniques. Case knowledge ought to be deep, or deepened. Determining conditions and outcome has to be based on theory and empirical evidence. Selection and rejection of cases for analysis has to be transparent and reasoned. The number of conditions has to be kept under control.

Thus, the first step in QCA also consists of thorough inspection of (potential) cases. QCA has a strong foundation in case-oriented research. Ragin (2014, pp. 34–52) lists key features of case-oriented methods: they aim at invariant association, deviant cases must be accounted for, and probabilistic relationships do not demonstrate causality. The frequency distribution of types of cases has little meaning; instead, the range of meaningful patterns of causes and effects is of interest. The methods examine "how conditions combine in different ways and in different contexts to produce different outcomes" (Ragin, 2014, p. 52).

Second step: calibration of raw data values

The collected data have to be calibrated for use in QCA. QCA is based on Boolean algebra, and the data are (essentially) binary: Boolean algebra knows two states, true (present) and false (absent). Accordingly, in QCA with crisp sets all conditions and outcomes are dichotomous.
The sets consist of cases with membership values in condition and outcome of either 1 or 0. An advancement is QCA with fuzzy sets. The fuzzy variation of QCA allows for more information than the crisp, dichotomous one. In fuzzy-set QCA, cases have partial membership in the sets of conditions and outcomes, not just 0 or 1. Cases can be more out of a set than in it, and vice versa. Fuzzy scales have (minimally) three "qualitative anchors": presence (1), absence (0), and the "point of indifference" (0.5, the crossover threshold), "with quantitative grades representing the degree of presence of the concept" for each case (Schneider & Wagemann, 2012, p. 31). Raw scores of cases are calibrated into fuzzy values between 0 and 1. The calibration of raw data scores into fuzzy-set membership scores ought to be theory-based, and the calibration criteria should be external to the data. In practice, however, researchers resort to data-driven calibration based on means or clusters of the raw data. In any case, calibration has to be transparent and its motivation substantiated.

The membership of cases in sets can be combined, intersected, negated, etc., based on Boolean techniques. Boolean addition can be understood as the logical operator OR: a case's score in the union of conditions is the maximum of its membership scores in the respective conditions. Boolean multiplication can be understood as the logical operator AND: the membership in the intersection of conditions is the lowest of a case's membership scores in the respective conditions. Schneider and Wagemann (2012, p. 55) list more operations. The logical operator NOT expresses the 'opposite' of a set: a case's membership in the complementary set is 1 minus its membership score in the original condition. Boolean analysis is combinatorial: the absence of a cause has the same logical status as presence, and present and absent conditions may intersect.

Before the next steps: analysis of causality in QCA

The next steps in the QCA process consist of analyzing causality. The Boolean minimization rule allows reduction of complexity and facilitates the recognition of the nature of causal conditions. Factoring in Boolean algebra is no different from standard algebraic factoring and serves to show necessary and causally equivalent (sufficient) conditions (note 2). Schneider and Wagemann (2012, p. 8) distinguish important elements of causal complexity in QCA from a set-theoretic perspective: causal relations are subset (sufficiency) or superset (necessity) relations, with a focus on INUS and SUIN conditions (note 3). QCA offers parameters of fit: consistency and coverage. Consistency carries more interpretative weight than coverage. In QCA the emphasis is on the cases, and "parameters of fit are not an end in themselves" (Schneider & Wagemann, 2012, p. 150). As part of the analysis (of necessity as well as sufficiency), researchers have to explain the occurrence of cases that contradict the findings. Generating XY plots (with case labels) is a useful tool to deploy in QCA.

Third step: analysis of necessity

The data are first tested for necessity of conditions for the outcome.
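Before turning to the tests, the basic Boolean operations on fuzzy membership scores (OR as the maximum, AND as the minimum, NOT as 1 minus the score) can be sketched in a few lines. This is a generic illustration in Python, not the implementation of any particular QCA package:

```python
def fuzzy_or(*memberships):
    # Boolean addition / set union: the maximum of the membership scores.
    return max(memberships)

def fuzzy_and(*memberships):
    # Boolean multiplication / set intersection: the minimum of the scores.
    return min(memberships)

def fuzzy_not(membership):
    # Set complement: 1 minus the membership score in the original condition.
    return 1 - membership

# A case with membership 0.8 in condition A and 0.3 in condition B:
union = fuzzy_or(0.8, 0.3)          # A + B -> 0.8
intersection = fuzzy_and(0.8, 0.3)  # A * B -> 0.3
complement = fuzzy_not(0.8)         # a (or ~A) -> 0.2, up to floating point
```

In the notation used in this article, the maximum thus corresponds to the + operator and the minimum to the * operator.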
A test of necessity analyzes whether or not the condition is a superset of the outcome; i.e., whether the outcome is a subset of any of the investigated conditions. Four separate test procedures are carried out: the analyses of present vs. absent states of causes (conditions) for present vs. absent states of the result (outcome). Here, the parameter consistency expresses to what extent the data are in line with "the statement of necessity" (Schneider & Wagemann, 2012, p. 143); or, in other words, how far the outcome is a subset of the condition. A consistency score of 0.9 is commonly the minimum threshold for necessity. Coverage in a test of necessity should, according to Schneider and Wagemann (2012, p. 147), be interpreted as an indicator of the relevance of the necessary condition. Low coverage indicates triviality of the condition in question. Conditions are deemed necessary if they are above the consistency threshold and the coverage is interpreted as not too low. In tests of necessity, separate XY plots of each 'necessary' condition and the outcome can be examined to find contradictory cases.

Note 2: Factoring: A*B + A*c + A*D equals A*(B + c + D). In QCA, uppercase letters often signify that the condition is present and lowercase letters signify that it is absent. Absent conditions or outcomes are also commonly signified by the symbol ~.

Note 3: INUS: a condition that is insufficient (I) for producing the outcome, but is a necessary (N) part of a conjunction that is unnecessary (U) but sufficient (S) for producing the outcome (Schneider & Wagemann, 2012, p. 328). Put simply, INUS conditions are, in combination with other conditions, sufficient for the outcome. SUIN: a condition that is a sufficient (S) but unnecessary (U) part of a factor that is insufficient (I) but necessary (N) for the outcome (Schneider & Wagemann, 2012, p. 333). SUIN conditions are necessary conditions that are conceptually equivalent; i.e., that cover the same overarching concept.
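The two parameters for a test of necessity can be computed directly from the membership scores. The functions below follow the standard set-theoretic definitions (consistency of necessity: sum of min(x, y) divided by the sum of y; coverage: the same numerator divided by the sum of x); the five cases and their scores are hypothetical:

```python
def necessity_consistency(x, y):
    # Degree to which the outcome (y) is a subset of the condition (x).
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

def necessity_coverage(x, y):
    # Relevance of the necessary condition; low values indicate triviality.
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

condition = [0.9, 0.8, 0.7, 1.0, 0.6]  # fuzzy membership in the condition
outcome   = [0.8, 0.6, 0.7, 0.9, 0.4]  # fuzzy membership in the outcome

# Every case's outcome score is at most its condition score, so the outcome
# is a perfect subset of the condition: consistency 1.0, above the 0.9 threshold.
print(necessity_consistency(condition, outcome))          # -> 1.0
print(round(necessity_coverage(condition, outcome), 2))   # -> 0.85
```

With a coverage of 0.85, this hypothetical condition would not be dismissed as trivial.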

Fourth step: analysis of sufficiency

In a next step, sufficiency of conditions for the (presence vs. absence of the) outcome is analyzed. To recapitulate: sufficiency signifies that the condition is a subset of the outcome. The conditions (or combinations of conditions) lead to the outcome, but are not the only conditions (or combinations) causing the outcome. A feature of QCA is the asymmetry of findings: causes that explain the outcome may differ from the ones explaining the not-outcome. Again, separate tests are in order for the outcome in both present and absent states.

Truth tables are central to the analysis of sufficiency (note 4). Schneider and Wagemann (2012, pp. 103, 193) describe steps to construct a truth table (in practice this is carried out by QCA software). The data matrix is turned into a truth table. All possible AND combinations of conditions are noted. Then each case is placed in the row of the configuration where it has the highest membership. The outcome value 1 is assigned to rows that are a subset of the outcome, and the value 0 is assigned to those that are not. Each row is thus classified as consistent with the outcome, as inconsistent, or as a logical remainder if there are no cases in it. In practice, this is done by determining the inclusion (i.e., consistency) cut-off in the truth table (note 5). Then the truth table is minimized (by software). The Quine-McCluskey algorithm logically minimizes truth tables based on Boolean algebra. Different solutions can be produced, and minimization guarantees that they are equivalent, as Schneider and Wagemann (2012, p. 115) state.

Logical remainders are "all (…) possible combinations for which (…) no empirical evidence is at hand" (Schneider & Wagemann, 2012, p. 151); i.e., the combinations of conditions in truth table rows that have no cases. Schneider and Wagemann (2012, p. 157) distinguish (logically) impossible remainders (e.g., pregnant men), arithmetic remainders (in case of more possible combinations of conditions than cases), and clustered remainders (within-case processes prohibit the combination of conditions from occurring). The technique of QCA known as 'Standard Analysis' identifies logical remainders in order to render plausible solution terms. In case of limited diversity, the same truth table generates conservative (also known as complex), parsimonious, and intermediate solution terms. The conservative solution is not based on assumptions about logical remainders. The parsimonious solution takes all logical remainders into account and includes theoretically 'untenable' assumptions. The intermediate solution takes remainders into account based, simply put, on assumptions about the direction of a condition's effect on the outcome (positive, negative, or neutral impact on the presence of the outcome). The intermediate solution has often been the preferred option of researchers deploying QCA. Schneider and Wagemann (2010) recommend that the treatment of logical remainders be made transparent. Based on one truth table, the conservative, parsimonious, and intermediate solutions have to be included in the report.

In the assessment of sufficiency, consistency is calculated for conditions and for combinatorial solutions. The parameter "provides a numerical expression for the degree to which the empirical information deviates from a perfect subset relation" (Schneider & Wagemann, 2012, p. 129) in the verdict of whether or not a configuration in a truth table row is a sufficient condition. Schneider and Wagemann (2012, p. 129) name a consistency of 0.75 or higher for a verdict of sufficiency. Coverage expresses to what extent the outcome is explained by the sufficient condition or combination. The parameter has no minimum threshold for relevance; conditions or paths with low coverage can be of importance. In tests of sufficiency, separate XY plots again contrast the membership scores on each 'sufficient' combination of conditions, and on the total solution, with the cases' values on the outcome. Examination of the plots informs how many and which cases contradict the solutions.

Note 4: Necessary conditions are not found by using truth tables. The analysis of sufficiency is separated from, and follows after, the analysis of necessity.

Note 5: After inspection of the truth table, the 'inclusion cut-off' is determined (e.g., in the R software): the level where consistency is high enough to claim that the truth table row is sufficient for the outcome. One method is finding the first large gap in the consistency levels of the rows and setting the cut-off there. Often levels higher than 0.75 are deployed as thresholds for consistency in the analysis of sufficiency.

Fifth step: interpretation, discussion of results

Case inspection is inseparable from QCA and follows computer-based analysis. In interpreting results, not too much significance ought to be assigned to single conditions in conjunctural and equifinal solution terms.
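The mechanics of the fourth step, assigning each case to the truth table row where it holds its highest membership and testing each row for sufficiency, can be sketched as follows. The two conditions, four cases, and all scores are hypothetical, and the Quine-McCluskey minimization performed by QCA software is omitted:

```python
from itertools import product

def sufficiency_consistency(x, y):
    # Degree to which x (a configuration) is a subset of the outcome y.
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# Hypothetical fuzzy scores: conditions A and B, outcome Y, four cases.
cases = {
    "case1": {"A": 0.9, "B": 0.2, "Y": 0.8},
    "case2": {"A": 0.7, "B": 0.1, "Y": 0.9},
    "case3": {"A": 0.3, "B": 0.8, "Y": 0.2},
    "case4": {"A": 0.6, "B": 0.9, "Y": 0.4},
}

def row_membership(scores, row):
    # Membership in a truth table row: AND (minimum) over the conditions,
    # using 1 - score where the row requires the condition to be absent.
    return min(scores[c] if present else 1 - scores[c]
               for c, present in zip(("A", "B"), row))

outcome = [s["Y"] for s in cases.values()]
for row in product((1, 0), repeat=2):  # rows A*B, A*b, a*B, a*b
    # A case belongs to the (single) row where its membership exceeds 0.5;
    # a row without such cases would be a logical remainder.
    members = [n for n, s in cases.items() if row_membership(s, row) > 0.5]
    config = [row_membership(s, row) for s in cases.values()]
    print(row, members, round(sufficiency_consistency(config, outcome), 2))
```

In this toy data, the configuration A*b (A present, B absent) holds cases 1 and 2 and is perfectly consistent with the outcome, so it would pass any inclusion cut-off.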
If the researcher prefers one path to the outcome over others, the reasoning behind the decision needs to be revealed. The discovered causality needs to be narrated and presented in Boolean notation. In addition, the data matrix, truth table, and parameters of fit should be included in reports. Schneider and Wagemann (2010) recommend using different presentation forms: graphs, tables, Venn diagrams, etc. "Researchers should make clear which cases – mentioned with their proper names – are covered by which of the paths in the solution formula" (Schneider & Wagemann, 2010, p. 410).

5. Field report: success factors of media products

A field report on the deployment of QCA in an investigation of success factors of media brands in a trans-media context serves to illustrate the process and sequences of the method with an example.

First step: success factors in theory and exploratory study

In the first phase of the research project, the research team distilled success factors from the literature. The second phase consisted of an explorative, qualitative study of the distilled and categorized success factors. Twenty cases were investigated in 39 interviews with decision makers from all media, content, and seriality types (cf. Sommer, Krebs, Verhoeven, von Rimscha & Siegert, awaiting publication). This phase represents the in-depth case inspection that the QCA approach requires. The findings of these phases then served to generate the items for an online survey of media professionals in all media in Germany, Austria, and Switzerland.

In the survey, ten building blocks of success are tested by assessing three to eight items per block; these are the (meta) conditions in QCA. The conditions are: human resources, internal processes, leadership, environmental orientation, organizational aspects, external evaluation (all remote conditions), and form, content, distribution, and marketing (all proximate conditions). Respondents were invited to express, on a six-point Likert scale, their agreement with statements referring to success factors for their product/brand. The sample is 'constructed' in the sense that only media workers with decision-making competences were contacted for participation in the online survey. In addition, the product the media worker referred to needed to offer its 'own' editorial content. Aggregators of content were excluded from the sample.

The data had to be cleaned and the raw data matrix prepared. In QCA, cases with missing scores on conditions ought to be removed. Here, the resulting loss of data would have been too large, so a different solution was used to fill the gaps: tenable values were generated randomly. The cases with no score on the selected outcome (success in terms of achieving audience market shares), however, were removed. The next step consisted of importing the data into the 'R' statistical software. The raw data file has to be prepared accurately (note 6).
Several QCA packages have to be downloaded and activated before starting to compute: QCA (and QCAgui) is somewhat older, QCApro seeks to replace these with streamlined methods, and the package SetMethods is indispensable for advanced validation and elaboration of results (note 7).
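The cleaning rule described above (cases without a score on the outcome are dropped, while gaps in condition scores are treated separately) can be sketched generically. The case and column names here are hypothetical, and the study itself performed these steps in R:

```python
# Hypothetical raw data matrix: one dict per case.
raw_cases = [
    {"case": "productA", "success": 0.8, "leadership": 0.6},
    {"case": "productB", "success": None, "leadership": 0.9},  # no outcome score
    {"case": "productC", "success": 0.4, "leadership": None},  # missing condition
]

# Cases with no score on the selected outcome are removed.
kept = [c for c in raw_cases if c["success"] is not None]

# Remaining gaps in condition scores are flagged; in the study they were
# filled with randomly generated tenable values rather than dropped.
flagged = [c["case"] for c in kept
           if any(v is None for v in c.values())]

print([c["case"] for c in kept])  # productB is dropped
print(flagged)                    # productC still needs a condition value
```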

Second step: calibration of raw data values; 'real important factors' for 'real success'

After successful import, the ten conditions and one outcome (success in terms of achieving targets in audience market share) were calibrated for a QCA with fuzzy membership scores. A fuzzy-set QCA was chosen because the data allowed it, and fuzzy sets carry more information than crisp sets. Theory and the essence of the research question governed the calibration. For all conditions and the outcome, the Likert scale scores five and six are calibrated as "more in the set" and "full membership", respectively. The building blocks ought to indicate 'real important' factors for the outcome 'real achievement of success'. In the process, we tried different calibrations to test the robustness of our results. Small and medium changes of the crossover point between more in and more out of a set only marginally influenced the results. Thus, the external, theory- and research-design-based calibration was deemed robust. XY plots of raw and fuzzy scores for the single conditions were useful for understanding the effects of different calibrations.

Note 6: In Excel, for example, the csv format is required, and apparently data rows have to be merged into single cells.

Note 7: Be aware that bugs exist in spite of all efforts, and some incompatibilities show at certain commands (an inexplicable error rendered in R). In that case, it is worthwhile to deactivate QCApro before executing the command again.

Before the next steps: analysis of causality by means of two-step QCA

The calculation of sufficiency for ten conditions requires computer capacity beyond that of standard PCs, due to all of the logical remainders the truth table needs to process. We resorted to a two-step QCA. Following Schneider and Wagemann (2006), the conditions to be analyzed have to be distinguished into remote and proximate conditions. The distinction is based on the distance of the condition to the object of investigation, the media products. Inspection of the conditions was carried out in the exploratory study and led to different criteria along which we could differentiate the conditions between remote and proximate to the media product.

Third step: analysis of necessity; four necessary conditions

In the first phase of the two-step QCA, the necessity of conditions was tested. Of the remote conditions, human resources and environmental orientation were necessary for success; of the proximate conditions, form and distribution were also necessary. No conditions were found necessary in the other tests (present conditions for no success, and absent conditions for success and no success).

Fourth step: analysis of sufficiency; three patterns in eight paths

In the second phase of the two-step QCA, the two necessary remote conditions were tested with all four proximate conditions for sufficiency for success. Three patterns (in eight paths), varying in strength, emerged from the sufficiency test. One pattern signifies that the intersection of form and environmental orientation leads to success, in three different combinations with other conditions. The second pattern indicates that human resources and the structural irrelevance of form lead to success in three combinatorial paths.
The final and third pattern indicates that form and human resources lead to success in two paths.

Fifth step: interpretation, discussion of results; types of cases in patterns

All media types are found in the three patterns, confirming the convergence of media products with regard to success factors. After describing the cases in the paths, it became clear that general, mainstream information media and the trade press showed membership in the first pattern for success. The second pattern is inhabited by specialized and more 'niche' information products. The third pattern tends to contain more one-off and entertainment products. The attentive reader will notice at this point that the phase of deepening the insights is still a work in progress. More conclusions can and will be formulated before publication.

6. The proliferation of QCA

Types of QCA

As a method also suitable for small samples (unlike most 'conventional' methods) and medium numbers of conditions (variables), QCA proliferates steadily. Thiem and Duşa (2013, pp. 1–3) sketch the spread of QCA. Applications of the method are foremost found in political science, sociology (Ragin's original field), and international relations. In addition, QCA is used in business studies and economics, management and organization, governance and administration, legal studies and criminology, education, health research, environmental sciences, anthropology, and religion. The most frequently used variations of QCA in 239 published scientific journal articles (until 2011) are crisp-set QCA (csQCA) and fuzzy-set QCA (fsQCA). Multi-value QCA (mvQCA), a third variation, lags far behind the others. Schneider and Wagemann (2012, pp. 263–264) discuss a temporal QCA technique (tQCA) as a category that is not tracked by Thiem and Duşa (2013).

The use of QCA in communication studies

In communication studies, QCA is not (yet) often deployed. Büchel (2016, p. 58) lists studies by several authors that use the method. All are studies on macro and meso levels, with a limited number of cases and conditions in the field of media structures. Nguyen Vu (2010) investigates how economic factors (on market and organization levels) influence TV news content in eleven countries. The author finds that economic pressure exerts a negative influence on reporting foreign news. Also, Brüggemann and Königslöw (2013) analyze different degrees of foreign reporting and identify key conditions that explain the differences between 12 newspapers from six European countries. The editorial mission of the media outlet leads to the outcome of 'cosmopolitan' coverage, in combination with either being situated in a relatively small country or with having many foreign correspondents. Humprecht and Büchel explore diversity in online news reporting in a study from 2013.
Elaborating on this, Humprecht (2014) deploys QCA in an investigation of online news performance in six countries. The author distinguishes conditions on the media system level (media market strength, public broadcasting investment, journalistic professionalism, and political parallelism) and the organizational level (profit orientation, editorial mission, thematic focus), which are assumed to explain the outcome of online news performance. The findings show that online news performance is tied to the news outlets’ capacity and to their balancing act between profit orientation and the role of information provider. In addition, the variation in the strength of dimensions of news performance between countries reflects the variation in the underlying democratic theories. Regarding political communication, Downey and Stanyer (2010) examine the personalized character of mediated political communication. The authors find two causal paths. One path

consists of the make-up of political institutions (the presence of a presidential system) combined with the irrelevance of the character of media institutions to explain personalization. The other path combines political culture and media conditions to explain personalization, irrespective of the political system being either parliamentary or presidential. In the same field, Büchel (2016) classifies election campaign coverage in TV news in six countries. The author introduces three types: journalist-, candidate-, or campaign-centered coverage. QCA is deployed to relate the types of campaign coverage (outcome) to conditions of the media system: advertising-based vs. public service broadcasting, bias, campaign expenses, and campaign professionalization. The road to candidate-centered coverage in Switzerland and Italy can be summarized as consisting of expensive campaigns and low campaign professionalization. The paths to journalist-centered coverage are, in some European countries, paved by the democratic-corporatist media system; in the US, journalist-centered coverage is caused by expensive campaigns and a high degree of professionalization. The routes to campaign-centered coverage have in common the absence of the democratic-corporatist media system and meta-framing of the ‘left-wing’ candidate (the condition ‘bias’). The aforementioned works are cross-national comparisons of factors influencing journalism, news, and political communication. In addition, within the field of media economics and management, Russi (2013) investigates the degree of competitiveness in the European newspaper market as an outcome. Russi, Siegert, Gerth, and Krebs (2014) investigate the relationship of competition and financial commitment in European newspaper markets and find that the condition “high number of competitors” in combination with the condition “high competition intensity” is sufficient for financial commitment across the different investigated markets.
Verhoeven, Von Rimscha, Krebs, Sommer, and Siegert (forthcoming 2016) investigate ten success factors of media brands by deploying two-step QCA, and find four necessary conditions for success (form, human resources, environmental orientation, and distribution), as well as three patterns of intersecting conditions sufficient for success.

Potential of QCA deployment in communication studies

The essential features of QCA seem to match the instinctive ex ante understanding of a wide range of important questions in communication studies in general, and in media structures, media economics, and media management research in particular. In the latter, QCA could be deployed at the macro level, where one looks at media systems and markets while investigating structures and ownership, concentration, globalization, regulation, etc. For example, the degree of concentration of media ownership could be an outcome for which a range of media market conditions (size, competitive constraints, distribution limitations, importance of economies of scale in the sector, domination by large advertisers, degree of convergence in production) is investigated in a QCA together with certain regulatory and/or general economic conditions. In QCA it is essential that the sets are labeled and that the perfectly present and perfectly absent states of conditions and outcome are formulated. As an example, for the condition ‘domination by large advertisers’, the perfect state (with value = 1) could be

phrased as ‘the advertising market consists of fewer than 50 companies rendering more than 50% of total advertising revenue’. At the meso level, QCA could serve to analyze organizations of media production and distribution. For instance, financing, strategy, marketing, and exploitation could be investigated. The degree of strategy adaptation by media companies in times of disruption could be the outcome for which the causal influence of conditions, such as various facets of media company culture and the perception of the environment at management level, is investigated. QCA could also be applied to media performance, for instance production, products, diversity, or quality. The extent of customer-based brand equity in media can be investigated as an outcome of aspects of social media use; e.g., ‘liking’ and sharing, writing and reading comments, etc. Success factors research is another example. Finally, a study like Hanitzsch et al. (2010) on reporters’ perception of influences on their work could also be carried out using QCA. The same holds for other studies on the micro level of media workers.
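As a minimal sketch of how such a condition could be calibrated into fuzzy-set membership values, the following code applies Ragin’s direct method with three qualitative anchors. The function name, the anchor values, and the measurement of advertiser domination as a revenue share are illustrative assumptions, not part of any cited study:

```python
import math

def calibrate(raw, full_out, crossover, full_in):
    """Direct-method fuzzy-set calibration: map a raw score onto a
    set-membership value in [0, 1] using three qualitative anchors
    (full non-membership, crossover point, full membership)."""
    if raw >= crossover:
        log_odds = (raw - crossover) * (3.0 / (full_in - crossover))
    else:
        log_odds = (raw - crossover) * (3.0 / (crossover - full_out))
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical anchors for 'domination by large advertisers', measured here
# as the share of total advertising revenue held by the largest advertisers:
# 0.2 = fully out, 0.5 = crossover point, 0.8 = fully in.
print(calibrate(0.50, 0.2, 0.5, 0.8))  # crossover -> 0.5
print(calibrate(0.80, 0.2, 0.5, 0.8))  # fully in  -> ~0.95
print(calibrate(0.20, 0.2, 0.5, 0.8))  # fully out -> ~0.05
```

The logistic transformation ensures that cases at the full-membership anchor receive a score of about 0.95 rather than exactly 1, reflecting the qualitative, theory-driven character of the anchors rather than purely data-driven cut-offs.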

7. Criticism and improvements of QCA

In this section the criticism of QCA is briefly discussed, and the latest enhancements and developments are introduced.

Criticism of QCA

QCA is a comparatively young method. Ragin (2014, p. xxix) states that “set-analytic social science is still in its infancy. The Comparative Method was but a first step on an important journey in social scientific inquiry”. QCA has been criticized over the last few years (cf. Collier, 2014; Lucas & Szatrowski, 2014; Paine, 2016). Lucas and Szatrowski (2014) show QCA identifying definitively non-causal variables as causes of an outcome. The barrage of criticism of QCA in Collier (2014) includes the extent of fuzziness of fuzzy sets, excessive use of formal logic and set theory, lack of reflection on causal attribution, calibration being inferior to data-based methods, insufficient robustness, and the contention that intersections do not equal actual interactions of conditions. In addition, QCA is said to distract from real case studies, and the Quine-McCluskey algorithm is deemed inappropriate. Paine (2016) claims that QCA is actually identical to regression, and that association of conditions and outcome does not equal causation. A fair share of the criticism has been adequately rebutted by demonstrating that the critics simply deploy the method incorrectly. With respect to the more substantial critique, improvements have been made and continue to be developed (cf. Baumgartner, 2008, 2009; Rohlfing & Schneider, 2014; Schneider & Rohlfing, 2013; Thiem, Baumgartner, & Bol, 2016).

Improvements of QCA

Scholars have identified problems with QCA without rejecting the method per se. Improvements and attempted upgrades of QCA pertain to Standard Analysis, robustness, deployment with large samples, parameters of fit for the analysis of sets with skewed membership, and the deepening of insights by case inspection. Schneider and Wagemann (2012, p. 211) introduce the ‘Enhanced Standard Analysis’, which “restricts the choice of logical remainders for counterfactuals by excluding any remainder that would create untenable assumptions”. Cooper, Glaesser, and Thomson (2014), although appreciating the diagnosis by Schneider and Wagemann (2012), see the Enhanced Standard Analysis creating new problems and suggest handling it with great care. Theory-Guided Enhanced Standard Analysis (Schneider & Wagemann, 2012, p. 217) shifts the emphasis from parsimony to theoretical plausibility. In this adaptation of the method, truth table rows can be selected as the right counterfactuals, and directional expectations (of influence on the outcome) can be based on conjunctions, not only on single conditions. The authors state that the handling of logical remainders has to be guided to a much larger extent by theory. Schneider and Wagemann (2012, p. 249) summarize the problems that can occur in QCA when dealing with sets with skewed membership scores, which may lead to “flawed inferences in the analysis of sufficiency and necessity”. Schneider and Wagemann (2012) propose a formula for relevance in the analysis of necessity and an additional parameter for the analysis of sufficiency: PRI (proportional reduction in inconsistency) expresses how far a condition is simultaneously a subset of the outcome and of the negated outcome. As an approach, QCA demands thorough case inspection to select the conditions to be analyzed, and to improve the validity of the results after the statistical analysis has been executed. It follows that this, in principle, limits the sample size of a research design for reasons of sheer work burden.
Emmenegger, Schraff, and Walter (2014), however, argue that QCA is a suitable methodological choice for the analysis of large-N surveys as well. The loss of one of QCA’s strengths, case orientation, can be remedied with a new robustness test for the analysis of large samples. Schneider and Wagemann (2012, pp. 305–312) advocate, in addition to within-case analysis of the different case types, the comparative inspection of pairs of cases to reveal causal mechanisms. For the sufficiency of a combination of conditions, one compares a deviant case for consistency with a typical case. To reveal missing conditions in a sufficiency analysis and missing paths in a solution formula, one compares a deviant case for coverage with the individually irrelevant cases in the same truth table row. With respect to the acknowledged weaknesses, QCA is steadily improving. The method continues to be developed further, and is increasingly adapted for less ‘fitting’ research designs.
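The parameters of fit mentioned above can be sketched in a few lines of code. The formulas for sufficiency consistency and PRI follow Schneider and Wagemann (2012); the membership scores in the example are purely illustrative:

```python
def consistency(x, y):
    """Consistency of X as a sufficient condition for Y (fuzzy sets):
    sum of min(x_i, y_i) divided by the sum of x_i."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

def pri(x, y):
    """Proportional reduction in inconsistency: penalizes conditions that
    are simultaneously subsets of the outcome and of its negation."""
    overlap = sum(min(xi, yi, 1 - yi) for xi, yi in zip(x, y))
    covered = sum(min(xi, yi) for xi, yi in zip(x, y))
    return (covered - overlap) / (sum(x) - overlap)

# Illustrative fuzzy membership scores for four cases.
x = [0.8, 0.6, 0.9, 0.2]  # condition
y = [0.9, 0.7, 0.8, 0.4]  # outcome
print(round(consistency(x, y), 2))  # 0.96
print(round(pri(x, y), 2))          # 0.94
```

When membership in the outcome is heavily skewed, consistency can remain high even for conditions that are also subsets of the negated outcome; a large gap between the consistency and PRI values flags exactly this problem.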

8. Conclusions: QCA as method of choice

The research team of the study (described in the fifth section of this article) is satisfied with the choice of QCA as an analysis approach, in spite of the steep learning curve and large work

burden. The data structure in the study complicates any analysis of causality. QCA nevertheless renders plausible results, and the insights of the researchers are sturdily enhanced. By comparison, a ‘quick and dirty’ regression analysis produces a model that explains only 10% of the variance in the dependent variable. Before deciding to use QCA, the nature of the research questions and the research design have to be taken into consideration. The method seems most beneficial for the investigation of causal relations between four to six independent variables (conditions), one dependent variable (outcome), and about as many cases as there are rows in the generated truth table; i.e., 2 to the power of x, where x is the number of conditions. In addition, for optimal use of QCA, a few ‘nice to have’ criteria can be listed: the data show a wide spread of values on the different variables, the values are not strongly skewed in any direction, and the investigated conditions influence the outcome in different directions (positive vs. negative). With regard to project planning, the resources that have to be invested in using QCA are a consideration as well. Getting acquainted with QCA and the most suitable software (‘R’) takes time. In addition, QCA as an approach encompasses a large investment in the inspection of cases, conditions, outcomes, truth tables, robustness tests, etc. In terms of work burden, QCA does not exactly come cheap, but the insights gathered are worth the investment. Scholars like Schneider and Wagemann (2012) see the function of QCA foremost as a tool to gain comprehensive and valid insights into cases, data structures, and relations between the concepts of interest. Causal relations uncovered by the deployment of QCA can subsequently be turned into hypotheses, which are testable with the usual methods. QCA produces results divergent from regression analysis.
If investigators expect sufficient and necessary conditions, “set-theoretic methods are a plausible choice” (Schneider & Wagemann, 2012, p. 90). QCA can be, in my modest non-statistician opinion, the right choice for almost all research questions in communication science, since context dependency, causal complexity, asymmetry, and equifinality can be presupposed for many of them. Social (and media) reality is, after all, almost always complex. It follows that the analysis methods deployed first need to allow for the intricacy of realities before initiating the process of reducing complexity and producing insights. To make QCA really worthwhile, we recommend respecting its conception and original intentions, and deploying it as a comprehensive approach, not merely as an alternative computer-driven statistical method.
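The rule of thumb above, that the ideal number of cases approaches the number of truth table rows (2 to the power of the number of conditions), can be illustrated with a minimal sketch. The function is an illustrative assumption; the condition names are taken from the success-factor study described earlier:

```python
from itertools import product

def truth_table_rows(conditions):
    """Enumerate all 2**x combinations of presence (1) and absence (0)
    of x conditions -- the rows of a crisp-set truth table."""
    return list(product([0, 1], repeat=len(conditions)))

# Four conditions yield 2**4 = 16 rows, so roughly 16 cases would be ideal;
# with six conditions the table already has 64 rows.
rows = truth_table_rows(["form", "human_resources",
                         "environmental_orientation", "distribution"])
print(len(rows))  # 16
```

The exponential growth of the table explains why QCA is recommended for four to six conditions: beyond that, most rows remain empty logical remainders, and the number of cases needed to populate the table quickly becomes impractical.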

References

Baumgartner, M. (2008). Uncovering deterministic causal structures: A Boolean approach. Synthese, 170(1), 71–96. doi:10.1007/s11229-008-9348-0

Baumgartner, M. (2009). Inferring causal complexity. Sociological Methods & Research, 38(1), 71–101.

Bruggemann, M., & Konigslow, K. K.-v. (2013). Explaining cosmopolitan coverage. European Journal of Communication, 28(4), 361–378. doi:10.1177/0267323113484607

Büchel, F. (2016). Media-centered reporting styles: An international comparison of election campaign coverage in TV news using qualitative comparative analysis.

Collier, D. (2014). Symposium. The set-theoretic comparative method: Critical assessments and the search for alternatives. Qualitative & Multi-Method Research Newsletter, 12(1), 1–52.

Cooper, B., Glaesser, J., & Thomson, S. (2014). Schneider and Wagemann's proposed Enhanced Standard Analysis for Ragin's Qualitative Comparative Analysis: Some unresolved problems and some suggestions for addressing them. Retrieved from http://www.compasss.org/wpseries/CooperGlaesserThomson2014.pdf

Downey, J., & Stanyer, J. (2010). Comparative media analysis: Why some fuzzy thinking might help. Applying fuzzy set qualitative comparative analysis to the personalization of mediated political communication. European Journal of Communication, 25(4), 331–347. doi:10.1177/0267323110384256

Emmenegger, P., Schraff, D., & Walter, A. (2014). QCA, the truth table analysis and large-N survey data: The benefits of calibration and the importance of robustness tests. Retrieved from http://www.compasss.org/wpseries/EmmeneggerSchraffWalter2014.pdf

Hanitzsch, T., Anikina, M., Berganza, R., Cangoz, I., Coman, M., Hamada, B., … Kee Wang Yuen (2010). Modeling perceived influences on journalism: Evidence from a cross-national survey of journalists. Journalism & Mass Communication Quarterly, 87(1), 5–22. doi:10.1177/107769901008700101

Humprecht, E. (2014). Shaping news performance: Comparing online news in six western democracies (Doctoral dissertation). Universität Zürich, Zürich.

Lucas, S. R., & Szatrowski, A. (2014). Qualitative comparative analysis in critical perspective. Sociological Methodology, 44(1), 1–79. doi:10.1177/0081175014532763

Nguyen Vu, H. N. (2010).
Money matters: A cross-national study of economic influences on TV news (Doctoral dissertation). Universität Zürich, Zürich.

Paine, J. (2016). Set-theoretic comparative methods: Less distinctive than claimed. Comparative Political Studies, 49(6), 703–741. doi:10.1177/0010414014564851

Ragin, C. C. (2014). The comparative method: Moving beyond qualitative and quantitative strategies: With a new introduction. Oakland, California: University of California Press.

Rohlfing, I., & Schneider, C. Q. (2014). Clarifying misunderstandings moving forward: Towards standards and tools for set-theoretic methods. Qualitative & Multi-Method Research Newsletter, 12(2), 27–34.

Russi, L. (2013). Der Einfluss von Wettbewerb und Marktverhalten auf die Medienperformanz. In M. Puppis, M. Künzler, & O. Jarren (Eds.), Relation: Vol. 4. Media structures and media performance / Medienstrukturen und Medienperformanz (pp. 257–279). Wien: Verlag der Österreichischen Akademie der Wissenschaften. doi:10.1553/relation4s257

Russi, L., Siegert, G., Gerth, M. A., & Krebs, I. (2014). The relationship of competition and financial commitment revisited: A fuzzy set qualitative comparative analysis in European newspaper markets. Journal of Media Economics, 27(2), 60–78. doi:10.1080/08997764.2014.903958

Schneider, C. Q., & Rohlfing, I. (2013). Combining QCA and process tracing in set-theoretic multi-method research. Sociological Methods & Research, 42(4), 559–597. doi:10.1177/0049124113481341

Schneider, C. Q., & Wagemann, C. (2006). Reducing complexity in Qualitative Comparative Analysis (QCA): Remote and proximate factors and the consolidation of democracy. European Journal of Political Research, 45(5), 751–786. doi:10.1111/j.1475-6765.2006.00635.x

Schneider, C. Q., & Wagemann, C. (2010). Standards of good practice in Qualitative Comparative Analysis (QCA) and Fuzzy-Sets. Comparative Sociology, 9(3), 397–418. doi:10.1163/156913210X12493538729793

Schneider, C. Q., & Wagemann, C. (2012). Set-theoretic methods for the social sciences: A guide to qualitative comparative analysis. Strategies for Social Inquiry. Cambridge: Cambridge University Press.

Thiem, A., Baumgartner, M., & Bol, D. (2016). Still lost in translation! A correction of three misunderstandings between configurational comparativists and regressional analysts. Comparative Political Studies, 49(6), 742–774. doi:10.1177/0010414014565892

Thiem, A., & Duşa, A. (2013). Qualitative comparative analysis with R: A user's guide (Vol. 5). New York: Springer.