Identifying Unacceptable Attribute Levels in Preference Measurement

Assessing the Methodological Differences between and Relative Performance of Common Methods

By Michael Steiner, Roland Helm and Antonia Szelig

The measurement of customer preferences is a typical aspect of market research. Results can be used to segment markets or define prices. The increasing competitive pressure in many markets has led to preference measurement enjoying much attention in research and practice. However, some major factors that might distort estimates of preference measurement are still under-researched. For example, it is commonly known that unacceptable levels can distort the results of preference measurement. However, what remains unclear is how to identify unacceptable levels. In this paper, we focus on this problem. Before assessing preferences, researchers should first eliminate unacceptable attribute levels. Two types of approaches could be used to identify unacceptable attribute levels: a direct method and an indirect method. We show that the type of approach used influences the identification of unacceptable levels. When applying an indirect approach, respondents are more likely to accept a level. Furthermore, we assess the relative performance of these approaches using real purchase data (BDM mechanism) as a benchmark. The results show that indirect as well as direct methods providing consumers with information on the decision context perform well.

Prof. Dr. Michael Steiner, Assistant Professor of Marketing, University of Muenster, Am Stadtgraben 13–15, 48143 Muenster, Germany, Tel: +49 (0) 251 83 31431, Fax: +49 (0) 251 83 22903, E-mail: michael.steiner@uni-muenster.de


Prof. Dr. Roland Helm, Chair of Strategic Industrial Marketing, University of Regensburg, Universitaetsstrasse 31, 93053 Regensburg, Germany, Tel: +49 (0) 941 943 5621, Fax: +49 (0) 941 943 5622, E-mail: [email protected]

MARKETING · ZFP · 34. Jg. · 2/2012 · S. 110 – 123

Dipl.-Kffr. Antonia Szelig, Research Assistant, Friedrich-Schiller-University Jena, Carl-Zeiss-Strasse 3, 07743 Jena, Germany, Tel: +49 (0) 36 41 94 31 10, Fax: +49 (0) 36 41 94 31 12, E-mail: [email protected]

1. Introduction

Measuring customer preferences is a typical aspect of market research. Over the past 35 years, hundreds of articles have been published on methods such as conjoint analysis and self-explicated approaches. Preference measurement is used before launching new products to gain insights into the preference structures of potential customers and, as a result, to develop products, set prices, and formulate communication strategies that meet these customer needs. Based on these results, market shares and profits can then be estimated (Green/Krieger/Wind 2001). However, unacceptable levels in the attribute set of a preference measurement study may distort estimates (Ford et al. 1989; Gilbride/Allenby 2004). If marketing decisions are based on preference data that fail to consider that not all presented alternatives are acceptable, product managers may launch inferior products because market share predictions are distorted; they might, for example, be too optimistic. Therefore, several common preference measurement methods, such as past versions of Adaptive Conjoint Analysis (ACA) and the newly developed Adaptive Choice-Based Conjoint (ACBC), incorporate a two-step decision-making approach. In the first step, these methods seek to identify possible causes of non-compensatory decisions, eliminating unacceptable attribute levels. Several approaches have been used in research and practice to identify unacceptable levels. Unacceptable levels can be identified based on respondents’ decisions about the acceptance of different products (Yee et al. 2007). Alternatively, respondents could be asked to directly select the attribute levels that they perceive to be unacceptable (Louviere 1988). For these direct approaches, isolated procedures (respondents focus only on the evaluation of the levels of one attribute) and non-isolated procedures (consumers know all attributes and their levels), as well as different types of task description, have been used. The identification of unacceptable levels is often considered a major challenge when developing an approach to assess customer preferences (Wind/Mahajan 1997). For instance, some researchers assume that the above-mentioned direct self-reports on unacceptable attribute levels are unreliable and unrealistic (Scholz/Meissner/Decker 2010; Yee et al. 2007; Dorsch/Teas 1992; Green/Krieger/Bansal 1988; Klein 1987). Recent studies and methods, therefore, often do not consider unacceptable levels at all (for example, the current version of the ACA does not allow researchers to assess unacceptable attribute levels; Sawtooth Software 2007). Other newly developed approaches, such as the ACBC (Sawtooth Software 2009), use the above-mentioned indirect approach based on the acceptance of evaluations of alternative products. Srinivasan (1988) assumes that deploying such a real product selection task would avoid the problem of incorrectly identifying an attribute level as unacceptable. However, we are not aware of any study that (1) compares both direct and indirect approaches or (2) compares the results of these methods to real purchase decisions. In this paper, we look into these two issues. We address the first issue by means of three empirical studies (on mobile phone tariffs, student vacation jobs, and perfumes). We show that the identification of unacceptable levels depends on the method used.
When applying indirect approaches, respondents consider fewer attribute levels unacceptable. The third study (on perfumes) also addresses the second issue by incorporating real purchase decisions. By using a BDM mechanism (Becker/DeGroot/Marschak 1964) as a benchmark, we gained insights into respondents’ willingness to pay (WTP): respondents were asked to state their WTP for a product and were told, moreover, that they would be obligated to buy the perfume at a randomly determined price if this price was less than or equal to the stated WTP (see Section 3 for a detailed description). The results show that indirect methods and direct approaches in a non-isolated context perform well. This result is surprising, as direct approaches are often considered to lead to biased results concerning unacceptable levels (see below). However, no previous study has used real purchases as a benchmark. Since direct non-isolated methods and indirect approaches perform equally well, we suggest applying these direct methods more often in market research studies. Evaluating direct tasks is typically considered less cognitively demanding from a respondent’s perspective, and answering such surveys takes less time. With this paper, we also try to encourage practitioners to pay more attention to the possible effects of ignoring and/or surveying unacceptable attribute levels within preference measurement by presenting the results of a fictitious case study that demonstrates possible biases from not considering or incorrectly identifying unacceptable levels (see Appendix). This section is important, as researchers often focus on the influence of unacceptable levels but do not consider biases resulting from incorrectly identified unacceptable levels. This article is organized as follows. In Section 2, we present the above-mentioned example. Furthermore, we describe results from previous research on direct and indirect approaches to surveying unacceptable attribute levels. In Section 3, we present the results of the empirical studies. We discuss our findings in Section 4.
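The BDM mechanism described above can be sketched in a few lines of code; the price grid and function name below are illustrative assumptions, not details from the study.

```python
import random

def bdm_round(stated_wtp, price_lottery, rng):
    """One BDM round: a price is drawn at random; the respondent must buy
    at the drawn price whenever it is less than or equal to the stated WTP."""
    price = rng.choice(price_lottery)
    must_buy = price <= stated_wtp
    return price, must_buy

# Illustrative lottery prices in EUR (the actual prices are not reported here).
lottery = [2.0, 4.0, 6.0, 8.0, 10.0]
price, must_buy = bdm_round(stated_wtp=6.0, price_lottery=lottery, rng=random.Random(1))
```

Because the drawn price, not the stated WTP, determines what is paid, overstating the WTP risks a forced purchase above one’s true valuation, while understating it forgoes profitable purchases; truthful reporting is therefore the dominant strategy (Wertenbroch/Skiera 2002).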

2. Influence of unacceptable levels on preference measurement

In a first step in preference measurement, researchers define a set of attributes and relevant levels. The attribute levels should cover all possible customer needs; i.e., the levels of each attribute should address all the relevant target groups of a product category in order to ensure that the results can be used to estimate the future market shares of a product innovation. However, consumer preferences might differ greatly in terms of the favorability of specific attribute levels. Purchasing a notebook is one example. Here, customer needs might differ depending on the intended use of the product. Customers who travel with a notebook might prefer a small 10-inch display, while others who use a notebook as a replacement for a desktop computer at home might prefer a large 17-inch display. A 17-inch display might be unacceptable for frequent travelers, while a 10-inch display might not be feasible for a laptop that replaces a desktop computer. Thus, depending on the intended usage situation, either a 17-inch or a 10-inch display might be an acceptable or an unacceptable level for the attribute “display”. An attribute level is considered unacceptable if a customer always rejects an alternative that contains it, no matter how favorable the other attribute levels are. Most preference elicitation techniques, such as conjoint analysis, are based on a weighted additive utility model, which assumes that attributes evoke compensatory decision-making processes (Bettman/Luce/Payne 1998). A decision-making strategy is compensatory if a good level for one attribute can compensate for a poor level of another. Non-compensatory decision rules can distort preference measurement estimates (see the Appendix for an example). Two types of non-compensatory decisions are relevant. First, an attribute level is unacceptable; thus, any alternative that shows this level is rejected by the respondent.
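For illustration, a minimal sketch of how a single unacceptable level overrides an otherwise additive utility model; the part-worth values are invented for the notebook example, not estimates from the study.

```python
# Invented part-worths for the notebook example (illustrative only).
partworths = {
    "display": {"10-inch": 8, "17-inch": 1},
    "weight": {"1 kg": 9, "3 kg": 2},
}

# A frequent traveler who rejects any notebook with a 17-inch display.
unacceptable = {("display", "17-inch")}

def evaluate(profile):
    """Weighted additive utility with a non-compensatory veto: a profile
    containing an unacceptable level is rejected outright (returns None)."""
    if any((attr, level) in unacceptable for attr, level in profile.items()):
        return None  # rejected, no matter how favorable the other levels are
    return sum(partworths[attr][level] for attr, level in profile.items())

print(evaluate({"display": "10-inch", "weight": "1 kg"}))  # 17 (compensatory trade-off)
print(evaluate({"display": "17-inch", "weight": "1 kg"}))  # None (vetoed)
```

A purely additive model would assign the second profile a positive utility (1 + 9 = 10); the veto is what makes the decision rule non-compensatory.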
Second, even if all levels are acceptable, some combinations of acceptable levels may cause unacceptable alternatives. Unacceptable alternatives can be considered by including a no-choice option or by using dual response choice designs within choice-based conjoint analyses (Brazell et al. 2006). When implementing a no-choice option in a choice-based conjoint analysis, respondents have the option of rejecting all alternatives from a choice set; i.e., in such a situation, no alternative is perceived to be acceptable. However, when respondents select the no-choice option, no information is obtained on the relative preference of the alternatives within the choice set, which may lead to less efficient experimental designs. Using dual response choice designs avoids this problem. Respondents are first asked to select the preferred alternative and then to evaluate whether they would accept this preferred alternative. Several other approaches also aim to identify unacceptable alternatives that cause non-compensatory decision-making processes within preference measurement. For example, DeSarbo et al. (1996), Gilbride/Allenby (2004), and Jedidi/Kohli (2005) propose identifying non-compensatory decisions ex post (for an extensive overview of this group of approaches, see Yee et al. 2007). In this paper, we focus on the first type of non-compensatory decisions. Our aim is to eliminate unacceptable levels in a survey as early as possible to avoid most unacceptable alternatives in the preference measurement evaluation task. Removing unacceptable levels before measuring preferences will also lead to shorter questionnaires, which might increase respondents’ motivation and also limit market research costs (Klein 1987; Mehta et al. 1992). It is commonly accepted that consumers often apply a two-stage decision-making process when deciding on products (for an overview, see, for example, Bettman et al. 1991). They often use heuristics to form consideration sets and, in a second step, only trade off those alternatives that seem acceptable. Several preference elicitation techniques seek to mimic this type of decision-making behavior by incorporating a two-step approach into preference measurement.
In a first step, unacceptable levels are identified in order to increase the likelihood of respondents using compensatory decision-making processes in the second step, in which preferences are measured. When using a two-step approach within surveys, one aims to remove unacceptable levels at the individual level before assessing a consumer’s preferences. Several compositional (direct) as well as decompositional (indirect) methods that incorporate such a two-stage approach have been developed in past research to assess customer preferences. Compositional methods include approaches such as different variants of self-explicated analysis. Conjunctive Compensatory Self-Explicated Analysis (Srinivasan 1988) is a typical approach that considers unacceptable attribute levels in a first step; customer preferences are assessed in a subsequent step. Unacceptable attribute levels are identified according to respondents’ direct evaluations, i.e., a respondent selects the attribute levels that he or she considers unacceptable (Fig. 1 (I) shows an example of such an evaluation task).
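The two-step logic (first eliminate a respondent's unacceptable levels, then build the preference task from the remaining levels only) can be sketched as follows; the attribute set mirrors the mobile phone tariff example, and the helper names are our own.

```python
from itertools import product

# Attribute set from the mobile phone tariff example.
attributes = {
    "price per minute": ["0.15 EUR", "0.35 EUR", "0.55 EUR", "0.75 EUR"],
    "headset": ["included", "not included"],
}

def remove_unacceptable(attributes, vetoed):
    """Step 1: drop every level the respondent marked as unacceptable."""
    return {
        attr: [lvl for lvl in levels if (attr, lvl) not in vetoed]
        for attr, levels in attributes.items()
    }

# One respondent's answers to the first-step evaluation task.
vetoed = {("price per minute", "0.75 EUR")}
reduced = remove_unacceptable(attributes, vetoed)

# Step 2: preferences are measured over acceptable profiles only,
# which shortens the questionnaire (6 instead of 8 full profiles here).
profiles = [dict(zip(reduced, combo)) for combo in product(*reduced.values())]
```

Filtering at the individual level before the preference task is what shortens the questionnaire and keeps the second step (mostly) within the compensatory regime the additive model assumes.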


Decompositional approaches are based on different types of conjoint analysis, such as traditional approaches or choice-based methods. Several procedures, such as ACA and ACBC, consider unacceptable levels. Past versions of the ACA enabled the identification of unacceptable levels based on a direct (and isolated) evaluation (again, see Fig. 1 (I) for an example of such an evaluation task). ACA used an isolated approach; the levels of each attribute were evaluated separately without presenting the other attributes’ levels. Thus, respondents lack complete information when assessing the attribute levels. Such an isolated context might cause distorted results (Klein 1987; Srinivasan 1988), since respondents simply do not know whether the other attributes’ levels can compensate for an unfavorable level of the attribute assessed in an evaluation task. Distorted results due to an isolated presentation format are one reason why Sawtooth dropped the option to account for unacceptable levels in its current version; i.e., it is not possible to implement a non-compensatory stage in the ACA questionnaire (Sawtooth Software 2007). Another option for avoiding biases resulting from incomplete information would be to provide respondents with all the necessary information (for such a non-isolated evaluation task, see Fig. 1 (II)). ACBC is another decompositional approach that includes a stage that seeks to identify unacceptable attribute levels. The respondent is first asked to create his or her “ideal” product within a configurator (build-your-own task). Several alternatives are then presented in a second step (screener task). Here, unacceptable levels as well as must-have attribute levels are identified by assessing the acceptance of several products, i.e., an indirect approach is used (Sawtooth Software 2009). ACBC is thus based on indirect evaluations; unacceptable attribute levels are identified by analyzing decision patterns based on evaluations of alternatives’ acceptability [1]. Fig. 1 (III) shows an example of such an evaluation task.

Several studies address the applicability of direct approaches (for an overview, see Tab. 2 and Tab. 3). Past research has shown that identifying unacceptable attribute levels based on direct evaluations seems to be problematic. Dorsch/Teas (1992) found that 44 % of respondents accepted attribute levels that they themselves had identified as unacceptable when presented with stimuli showing these levels (also see Klein 1987). Some studies found that considering unacceptable attribute levels does not enhance predictive validity (Klein 1987; Mehta/Moore/Pavia 1992). Other studies show that assessing unacceptable attribute levels within preference measurement might even decrease validity measures, such as internal validity, i.e., the consistency of the answers in an interview (Green et al. 1988). Based on these findings, subsequent studies focused on improving the direct approach used to identify unacceptable attribute levels. Mehta et al. (1992) varied the wording of the evaluation task used to define unacceptable attribute levels. Green/Srinivasan (1990) pointed out that the definition of an unacceptable level should be clear to every decision-maker. Mehta et al. (1992) found that, when a definition of the term “unacceptable” is used in the direct evaluation task, fewer levels are identified as such. However, this finding might not be generalizable to just any attribute set, as Green et al. (1988) found little difference between the two conditions. Tab. 1 provides an overview of all direct approaches.

Fig. 1: Overview of common evaluation tasks used to identify unacceptable attribute levels

(I) Direct isolated approach: “Please mark the price per minute at which you would no longer consider the mobile phone tariff, because the price is too high and therefore completely unacceptable to you. In case you consider all prices acceptable, please mark the respective box.” Response options for the attribute “price per minute”: 0.15 EUR, 0.35 EUR, 0.55 EUR, 0.75 EUR (“… is/are completely unacceptable for me”), or “all prices are acceptable to me”.

(II) Direct non-isolated approach: “Mobile phone tariffs might be characterized by the following further attributes: brand of the mobile phone (Nokia, Siemens) and headset (included, not included). Please imagine your ‘ideal’ mobile phone tariff, which consists of the preceding attributes that can be combined absolutely freely (i.e., Nokia with headset, Nokia without headset, Siemens with headset, and Siemens without headset). Please mark which prices per minute of your ‘ideal’ mobile phone tariff are now completely unacceptable to you (you may choose multiple answers). If you consider all prices acceptable, please mark the respective box.” Response options as in (I).

(III) Indirect approach: “Please mark which of the following mobile phone tariffs you would not buy (or buy). ‘Completely unacceptable’ means that you would definitely reject the mobile phone tariff.” Eight tariff profiles are evaluated, each rated “acceptable” or “completely unacceptable”, e.g., mobile phone tariff 1: Nokia, headset not included, 0.55 EUR per minute; mobile phone tariff 2: Siemens, headset included, 0.35 EUR per minute; …; mobile phone tariff 8: Siemens, headset included, 0.55 EUR per minute.

Tab. 1: Overview of possible improvements for direct evaluation tasks

Wording of the evaluation task
- Traditional approach (no clear definition of the term “unacceptable”): Respondents do not get an additional explanation of the consequences when defining an attribute level as “unacceptable”.
- Improvement (term “unacceptable” is clearly defined): Respondents are provided with a definition of “unacceptable” that clarifies the consequences.

Context of the evaluation task
- Traditional approach (isolated approach): Respondents sequentially evaluate the attributes’ levels; when assessing the levels of an attribute, the other attributes’ levels are unknown.
- Improvement (non-isolated approach): Respondents sequentially evaluate the attributes’ levels; when assessing the levels of an attribute, the levels of the other attributes are presented.

Previous research has only focused on direct approaches to identify unacceptable attribute levels (see, again, Tab. 2 and Tab. 3). We are not aware of any study that considers indirect approaches to identify unacceptable levels. We therefore addressed these methods in our empirical studies. Based on findings from other research fields, we expected to find differences between direct and indirect approaches. For example, consumers’ willingness to pay (WTP) can be surveyed directly, by simply asking for the maximum accepted price, or indirectly, by conducting a conjoint analysis. Direct evaluations may tend to increase sensitivity to specific attributes, such as price (see, for example, Bateman et al.’s (2000) study on different response modes’ effects on revealed preferences). In contrast, when applying an indirect approach, respondents may not focus on one single attribute but consider several features simultaneously. The use of trade-off decisions is more likely in such decision tasks. Past research, therefore, found that consumers are more likely to accept products (Mintz et al. 2010) and are thus also more likely to accept their specific levels in such a choice context. Although there might be differences in preferences due to a direct and an indirect decision task, the results of both approaches might still perform well. For instance, Miller/Hofstetter/Krohmer/Zhang (2011) assessed the applicability of direct and indirect approaches used to survey consumers’ WTP. They suggested that both direct (open-ended) and indirect (choice-based conjoint analysis) approaches may lead to pricing decisions that are in line with real purchase data. In line with this study, we assessed the relative performance of direct and indirect approaches by using real purchases as a benchmark. This enabled us to derive recommendations for practitioners on methods that are applicable.

Tab. 2: Studies using decompositional preference elicitation techniques in a second step

Klein (1987)
- Preference elicitation method: Trade-off matrices.
- Method to determine unacceptable levels: Direct evaluation; a definition of “unacceptable” is provided.
- Study description: Ballpoints and calculators as research objects. Two choice situations (self situation, gift situation), resulting in four scenarios. Respondents were provided with a definition of “unacceptable” before assessing unacceptable levels.
- Preliminary findings: 15 % of choices include alternatives with an unacceptable level. Predictive validity of conjoint analysis not affected by eliminating alternatives with unacceptable levels (i.e., elimination has no effect on first-choice predictions).

Green et al. (1988)
- Preference elicitation method: Full-profile conjoint procedure.
- Method to determine unacceptable levels: Direct evaluation with two groups; a definition of “unacceptable” is provided vs. not provided.
- Study description: Student apartments as research object. Two groups to test the influence of providing respondents with a definition of the term “unacceptable”.
- Preliminary findings: Alternative with an unacceptable attribute level chosen in 17.3 % of cases when not providing a definition of “unacceptable” and in 14.8 % when providing a definition; no dramatic reduction due to the definition. Lower internal validity results when not using respondents’ ratings and adopting a zero score for alternatives with an unacceptable attribute level. Lower predictive validity for both groups (although predictive validity is considerably lower when not providing a definition of “unacceptable”).

Mehta et al. (1992)
- Preference elicitation method: ACA (based on paired comparisons).
- Method to determine unacceptable levels: Direct evaluation with two groups; a definition of “unacceptable” is provided vs. not provided.
- Study description: Student apartments as research object. Two groups to test the influence of providing respondents with a definition of the term “unacceptable”.
- Preliminary findings: Fewer unacceptable levels when providing an additional definition. Alternative with an unacceptable level chosen in 40 % of the cases (44 % when providing no definition of “unacceptable” and 35 % when providing a definition); due to the experimental design, 58 % of these cases give the respondent no choice but to choose an alternative with an unacceptable attribute level. Determination as unacceptable depends on the context (i.e., the levels of the other attributes).

Tab. 3: Studies using compositional preference elicitation techniques in a second step

Green (1984)
- Preference elicitation method: SIMALTO (simultaneous multi-attribute level trade-off).
- Method to determine unacceptable levels: Direct evaluation; no definition of “unacceptable”.
- Study description and preliminary findings: Only method description, no empirical findings.

Srinivasan (1988)
- Preference elicitation method: Conjunctive compensatory self-explicated.
- Method to determine unacceptable levels: Direct evaluation; a definition of “unacceptable” is provided.
- Study description: Assessment of real job choices.
- Preliminary findings: No respondents (12 students) selected an unacceptable alternative.

Bucklin/Srinivasan (1991)
- Preference elicitation method: Conjunctive compensatory self-explicated.
- Method to determine unacceptable levels: Direct evaluation; a definition of “unacceptable” is provided.
- Study description: Coffee as research object. Comparison of the results with self-stated purchases.
- Preliminary findings: Only 0.3 % of the sales volume used for brands with unacceptable levels.

3. Empirical Studies

We conducted three empirical studies to test the effect of direct and indirect evaluation tasks. Respondents were asked to decide whether or not they would accept an alternative within the indirect evaluation task (see Fig. 1 (III)). As such indirect approaches mimic real choices, we expected this presentation format to be in line with real purchase decisions. The alternatives that the respondents evaluated were created based on Addelman plans, which are commonly used for such experimental settings (see Addelman’s (1962) study on the construction of alternatives). For the direct evaluation task, we used all four possible types of combinations in two studies (see Tab. 1, as well as Fig. 1 (I) for an example of an isolated task and Fig. 1 (II) for a non-isolated task based on context without a definition of the term “unacceptable”). The evaluation task presented in Fig. 1 (I) is probably the approach most frequently used to identify unacceptable levels. It has been implemented in several methods, e.g., in previous versions of the ACA, a method widely used in market research practice (Green et al. 2001). Other methods explicitly provide respondents with a definition of “unacceptable”. By providing such additional information, one seeks to ensure that respondents understand the consequences of indicating a level as “unacceptable”. Here, we used the following wording: “completely unacceptable means you would definitely reject the product if it has this attribute level”, which indicates the non-compensatory character of completely unacceptable attribute levels. We expected that decision makers would define fewer levels as unacceptable when the non-compensatory nature of this evaluation task is stressed. Moreover, we considered variations in the decision context. As previously noted, an isolated presentation format only shows the levels of the attribute being evaluated, i.e., the respondent does not know the other attributes’ levels and is thus unable to make any trade-off between the levels of the evaluation task and those of other attributes (see Fig. 1 (I)). A non-isolated approach provides all information (attributes and levels) considered within the subsequent preference elicitation task and thus enables respondents to make a more informed decision (see Fig. 1 (II)). As decision makers are more easily able to trade off less preferable levels against more preferable ones, we expected attribute levels to be accepted more often in a non-isolated decision context. Study 1 (mobile phone tariffs) and Study 2 (student vacation job) consisted of five experimental groups (see Tab. 4). Here, we tested how a specific method influences the identification of unacceptable levels. Study 3 (perfume) aimed to assess the relative performance of these approaches by incorporating real purchases as a benchmark. The real WTP of the respondents was surveyed using the BDM approach (Becker/DeGroot/Marschak 1964). They were asked to state their WTP for a product (a perfume that they were able to test upfront).
Furthermore, they were told that they would be obligated to buy the perfume at a randomly determined price (based on a lottery) if this price was less than or equal to the stated WTP. If the randomly determined price was higher than the stated WTP, they were not able to buy the product. This approach ensures that respondents do not have any incentive to state a lower or higher price than their real WTP (Wertenbroch/Skiera 2002). We did not assess the group of direct approaches that does not explicitly define the term “unacceptable” in Study 3, as we assumed that these approaches are unlikely to improve the results compared to the group of methods that provides respondents with information on how to interpret the term “unacceptable”. Tab. 4 provides an overview of the three studies.

Tab. 4: Overview of the studies and methods assessed

Direct evaluation task:
- Isolated presentation, no definition of the term “unacceptable” (traditional approach): Study 1, Study 2.
- Non-isolated presentation, no definition of “unacceptable”: Study 1, Study 2.
- Isolated presentation, with definition of “unacceptable”: Study 1, Study 2, Study 3.
- Non-isolated presentation, with definition of “unacceptable”: Study 1, Study 2, Study 3.
Indirect evaluation task (stimulus presentation): Study 1, Study 2, Study 3.
Real purchases (based on BDM mechanism): Study 3.

We used a student sample as well as paper-and-pencil questionnaires. All respondents were either bachelor’s or master’s students from a German university, who were recruited during their lectures. In total, we surveyed more than 250 students for Studies 1 and 2 and 227 students for Study 3. The students were randomly assigned to the experimental groups. We used a between-subject experimental design, i.e., each respondent evaluated only one condition. For Studies 1 and 2, each questionnaire consisted of both research objects (the order of the two studies was random; differences between the number of respondents in each study are due to incomplete questionnaires). The research objects were defined based on a pre-study in order to ensure that all respondents were familiar with them. The attribute levels were also defined based on the results of a pre-study. Here, we identified typical attribute levels considered by our respondents when deciding on a mobile phone tariff or a vacation job. This approach ensures that the attributes and their specific levels match our decision makers’ preferences. However, in order to assess the effects of different methods that aim to assess unacceptable attribute levels, we manipulated the levels of one attribute in each set. We subsequently added one attribute level that seems very favorable (e.g., an hourly wage of 10 EUR) and that almost everyone will accept, as well as one level that seems very unfavorable (e.g., a very low hourly wage) and that will be rejected by almost everyone. We did not expect to find differences between the methods for these extreme attribute levels that define an attribute’s bandwidth. The levels between these extremely favorable and unfavorable attribute levels were defined according to the approach most often used in preference measurement. Here, we used equally spaced levels that encompass the relevant range (Darmon/Rouziès 1989). We expected different approaches to cause different acceptance rates for these intermediate attribute levels. In our subsequent analysis, we focused on the one attribute that was manipulated to ensure the presence of unacceptable levels. In Study 1, we therefore focused on the price per minute for mobile phone tariffs. In Study 2, we varied the levels of the attribute “hourly wage” of a vacation job. Finally, in Study 3, we focused on the price of a perfume sample. In the next sections, we describe the results of the three studies.

3.1. Study 1: Mobile Phone Tariffs

The research object "mobile phone tariffs" was defined by the three attributes "brand of the mobile phone included", "headset", and "price per minute" (see Tab. 5). These attributes were selected based on a pre-study in order to ensure that they discriminate between the offers (we used the attributes that were most relevant). The levels were also defined based on the results of a pre-study and selected in order to cover the relevant bandwidth of levels for our sample.

Tab. 5: Attributes and levels assessed in Study 1

Attribute                  Attribute levels
Brand of the mobile phone  Nokia, Siemens
Headset                    included, not included
Price per minute (in EUR)  0.15; 0.35; 0.55; 0.75

We used a logit analysis to test whether the evaluation task ((a) direct/indirect) and determinants of the choice context for direct approaches ((b) isolated/non-isolated as well as (c) existence or absence of a definition of the term "unacceptable") influence the acceptance or rejection of specific attribute levels. In order to test whether the above-mentioned factors influence the identification of unacceptable levels (i.e., the acceptance rate), we ran separate logistic regressions for each level of the attribute "price per minute"; while (a), (b), and (c) are independent variables, the acceptance of the respective price level is the dependent variable. We demonstrate how we defined the logit analysis based on the attribute level "0.35" (we used this approach for all price levels and also in the subsequent two studies). The model is:

Prob(acceptance of 0.35) = 1 / (1 + e^(–Z)),

where Z is the linear combination

Z = B0 + B1 · indirect presentation format + B2 · non-isolated context + B3 · definition of "unacceptable" is provided.   (1)

In this example, acceptance of a price per minute of "0.35 EUR" is the dependent variable. We used indicator coding for the main effects, which means that one condition is defined as the reference. In our study, we used the direct and isolated condition with no definition of the term "unacceptable" as the reference. The dummy coding of the independent variables is presented in Tab. 6. Thus, no values are computed for the reference condition. A significant effect of B1 denotes that the type of presentation format (direct/indirect) influences the acceptance of the price level (here: 0.35). B2 denotes the influence of the context (isolated/non-isolated), and B3 tests whether providing a definition for the term "unacceptable" influences the acceptance of the respective price level (B2 and B3 only consider direct approaches). As stated above, we conducted a separate logit analysis for each price level. Based on the results of these logit analyses, we only observed a significant influence of the above-mentioned factors for the level "0.35 EUR per minute", i.e., we found no significant influence of these factors on the acceptance of the price levels "0.15", "0.55", and "0.75". Moreover, when considering the price level "0.35", only the type of evaluation task (B1) significantly influences acceptance/rejection (p = 0.032) [2]. When applying a direct approach to identify unacceptable levels, the price of 0.35 EUR was rejected significantly more often than when applying an indirect approach. Rejection rates for the manipulated attribute "price per minute" are presented in Tab. 7. Again, we did not expect to find significant differences between the extreme values. We defined the most preferred and least preferred level so that most respondents would accept or reject them. Intermediate levels were defined using levels that are equally spaced. Based on the absolute values of the rejection rates presented in Tab. 7, we see that the level "0.55 EUR per minute" was set too high, as only very few respondents accepted it. We assume that this is the reason we did not find any significant influence of the independent variables on the acceptance of this price level.

3.2. Study 2: Vacation Job

In this section, we assessed preferences for the research object "summer vacation job". Respondents received a detailed job description to ensure that they were able to evaluate alternative job offers and their specific levels. Respondents were told that they would work in a hotel close to the beach in Malta. They would work for 8 hours a day for a period of three weeks, and they would get free accommodation for an extra week, which they could use for leisure. The work would involve administrative activities (e.g., organizing events).

Tab. 6: Dummy coding of independent variables in a logit analysis

                                                    Indirect      Non-isolated  Definition of
                                                    presentation  context       "unacceptable"
                                                    format                      is provided
Indirect evaluation task (stimulus presentation)    1             0             0
Direct evaluation task:
  isolated presentation, no definition of the
  term "unacceptable" (= reference)                 0             0             0
  non-isolated presentation, no definition          0             1             0
  isolated presentation, with definition            0             0             1
  non-isolated presentation, with definition        0             1             1

Tab. 7: Study 1 – Rejection rates for the levels of the attribute "price per minute"

                                                            Attribute levels (EUR per minute)
                                                            0.15    0.35    0.55    0.75
Direct isolated presentation      no definition (n = 42)    2.4%    47.6%   92.9%   97.6%
                                  with definition (n = 51)  2.0%    37.3%   88.2%   98.0%
Direct non-isolated presentation  no definition (n = 44)    4.5%    56.8%   93.2%   95.5%
                                  with definition (n = 46)  2.2%    30.4%   87.0%   95.7%
Stimulus evaluation (n = 31)                                0.0%    22.6%   83.9%   96.8%

Tab. 8: Attributes and levels assessed in Study 2

Attribute               Attribute levels
Catering                All-inclusive, not inclusive
Accommodation           Close to the beach, not close to the beach
Free sightseeing tours  Included, not included
Hourly wage             10 EUR; 8 EUR; 6 EUR; 4 EUR; 2 EUR

Tab. 10: Attributes and levels assessed in Study 3

Attribute  Attribute levels
Capacity   1.5 ml; 3.0 ml
Packaging  available, not available
Price      0.00 EUR; 0.05 EUR; 0.10 EUR; 0.15 EUR

Tab. 8 shows the attributes and levels assessed in Study 2. We manipulated the attribute "hourly wage". We again tested for differences between the rejection rates for the attribute "hourly wage" due to the three above-mentioned factors (independent variables). Tab. 9 presents the rejection rates. The results of the logit analyses for each wage level reveal that only the type of presentation significantly influences rejection for the attribute levels "6 EUR", "4 EUR", and "2 EUR" [3]. Again, applying a direct approach to identify unacceptable levels significantly increases rejection rates (p = 0.001 for the level "6 EUR", p = 0.000 for the level "4 EUR", and p = 0.006 for the level "2 EUR"). Based on our findings from Studies 1 and 2, we conclude that there is no significant difference between the approaches for attribute levels that most respondents are likely to accept or reject. Furthermore, providing a clear definition of the term "unacceptable" as well as a non-isolated evaluation task does not seem to significantly influence the acceptance or rejection of attribute levels. However, we found that the type of evaluation task has a robust effect on those levels that are not dominantly accepted or rejected, i.e., that show heterogeneous acceptance rates. When applying a direct method, significantly more levels will be rejected. In other words, fewer levels are rejected when applying an indirect method.
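The per-level logit analyses used in Studies 1 and 2 (Equation (1) together with the dummy coding of Tab. 6) can be sketched as follows. The coefficient values below are hypothetical illustrations, not our estimates; in practice, B0 to B3 would be estimated by maximum likelihood with a statistics package:

```python
import math

# Dummy coding of the experimental conditions as in Tab. 6
# (reference: direct and isolated presentation, no definition of "unacceptable").
CONDITIONS = {
    "indirect": (1, 0, 0),
    "direct_isolated_no_def": (0, 0, 0),        # reference condition
    "direct_non_isolated_no_def": (0, 1, 0),
    "direct_isolated_with_def": (0, 0, 1),
    "direct_non_isolated_with_def": (0, 1, 1),
}

def acceptance_prob(condition, b0, b1, b2, b3):
    """Equation (1): Prob(acceptance) = 1 / (1 + exp(-Z)), with
    Z = B0 + B1*indirect + B2*non-isolated + B3*definition provided."""
    indirect, non_isolated, definition = CONDITIONS[condition]
    z = b0 + b1 * indirect + b2 * non_isolated + b3 * definition
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients: a positive B1 raises the acceptance probability
# under the indirect task, mirroring the direction of our results.
p_reference = acceptance_prob("direct_isolated_no_def", b0=0.1, b1=1.2, b2=0.3, b3=0.2)
p_indirect = acceptance_prob("indirect", b0=0.1, b1=1.2, b2=0.3, b3=0.2)
```

A separate model of this form is fitted for every attribute level, so a significant B1 for one level (e.g., "0.35 EUR") can coexist with non-significant effects for the extreme levels.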

Tab. 9: Study 2 – Rejection rates for the levels of the attribute "hourly wage"

                                                            Attribute levels (EUR per hour)
                                                            10      8       6       4       2
Direct isolated presentation      no definition (n = 53)    0.0%    9.4%    37.7%   84.9%   98.1%
                                  with definition (n = 48)  0.0%    4.2%    35.4%   83.3%   95.8%
Direct non-isolated presentation  no definition (n = 45)    0.0%    2.2%    28.9%   80.0%   93.3%
                                  with definition (n = 56)  0.0%    1.8%    25.0%   71.4%   92.9%
Stimulus evaluation (n = 49)                                0.0%    0.0%    4.1%    51.0%   73.5%

3.3. Study 3: Perfume

Studies 1 and 2 show that the type of decision task matters. However, based on the results of these studies, it is not possible to make any recommendations on the relative performance of these approaches. We therefore conducted a third study. Here, we once again assessed the influence of the evaluation task (direct/indirect) and the choice context of the direct approaches (we focused on the isolated/non-isolated condition). More importantly, we additionally assessed rejection rates for alternative price levels in real purchases. We used the BDM mechanism to assess respondents' WTP. We selected perfumes as the research object for the third study. Before assessing respondents' preferences, we asked them to test the perfume in order to ensure that all respondents were able to evaluate it. Subsequently, respondents were asked to evaluate perfume samples, which are described by the attributes and levels presented in Tab. 10. Tab. 11 presents the rejection rates for the attribute "price" of a perfume sample.

Tab. 11: Study 3 – Rejection rates for the levels of the attribute "price"

                                           Attribute levels (EUR)
                                           0.00    0.05    0.10    0.15
Direct isolated presentation (n = 42)      0.0%    50.0%   66.7%   78.6%
Direct non-isolated presentation (n = 56)  1.8%    35.7%   44.6%   62.5%
Stimulus evaluation (n = 48)               8.3%    20.8%   31.3%   56.3%
Real purchase (BDM mechanism, n = 48)      8.3%    29.2%   41.7%   62.5%

Again, we first assessed the influence of the method on the acceptance/rejection of the price levels. We found no significant influence of the decision task for the attribute level "0 EUR". However, the decision task (direct/indirect) as well as the choice context (isolated/non-isolated) significantly influenced the acceptance/rejection of the remaining price levels. The significance levels as well as the odds ratios are presented in Tab. 12. We used dummy variables with the direct isolated presentation format as a reference.

Tab. 12: Study 3 – Significance levels (p) and odds ratios (Exp(B)) for the type of evaluation task and choice context of the direct approach

                               Attribute levels (EUR)
                               0.00  0.05                       0.10                       0.15
Direct (0)/Indirect (1)        –     p = 0.000, Exp(B) = 3.818  p = 0.000, Exp(B) = 4.993  p = 0.000, Exp(B) = 3.981
Isolated (0)/Non-isolated (1)  –     p = 0.022, Exp(B) = 2.291  p = 0.001, Exp(B) = 3.410  p = 0.003, Exp(B) = 3.491

The values of the odds ratios thus show that a non-isolated presentation format and an indirect choice context increase respondents' acceptance of the respective price level. Furthermore, the values of the odds ratios also indicate that respondents are more likely to accept a price level when applying an indirect approach. In contrast to Studies 1 and 2, we found that the context of the direct presentation format (isolated/non-isolated) has a significant impact on the acceptance of the product. We assume that this is due to the attribute "capacity" of the perfume sample bottle. In an isolated context, respondents may believe that all bottles are of a comparable size and thus reject higher price levels more often. In a non-isolated context, however, respondents are aware that there is also a 3.0 ml bottle, which is twice the regular capacity and for which a higher price would be justified. We thus expect that an isolated presentation format is likely to influence the acceptance of a product if consumers do not expect specific levels of an attribute that may have a major influence on consumers' preferences.

So far, we have focused on assessing the effect of alternative methods of surveying unacceptable levels on the acceptance/rejection of these levels. We found significant differences between the approaches. However, based on these findings, no recommendations can be derived on the relative performance of these methods. We therefore surveyed respondents' WTP for the perfume sample based on a BDM mechanism. This enabled us to test, by means of a logit analysis, whether acceptance/rejection rates deviate significantly from a real purchase context. More specifically, we ran a separate logit analysis for each price level and each method (only the observed data for the respective method and the real purchase data were jointly considered) with the respective method as the only independent variable and "real purchase" as the reference. We therefore conducted (3 · 4 =) 12 logit analyses. For example, the model for a price level of "0.15" and the direct isolated presentation is:

Prob(acceptance of 0.15) = 1 / (1 + e^(–Z)),

where Z is the linear combination

Z = B0 + B1 · direct isolated approach.   (2)

A significant value for the independent variable (direct isolated approach) indicates relevant differences between the respective method and the BDM mechanism (real purchase) concerning the acceptance/rejection rates of specific price levels. The results of this analysis are presented in Tab. 13.

Tab. 13: Study 3 – Significance levels when comparing acceptance rates between methods and real purchases in a logit analysis

                                  Attribute levels (EUR)
                                  0.00       0.05       0.10       0.15
Direct isolated presentation      p = 0.998  p = 0.045  p = 0.019  p = 0.100
Direct non-isolated presentation  p = 0.157  p = 0.479  p = 0.760  p = 1.000
Stimulus evaluation               p = 1.000  p = 0.348  p = 0.290  p = 0.533

Based on these results, we conclude that – compared to real purchase data – rejection rates are significantly higher when applying a direct and isolated method to identify unacceptable levels (except for the price level "0 EUR"). The results in Tab. 11 show that rejection rates are also higher than in reality when using a direct and non-isolated approach. Furthermore – compared to real data – applying an indirect method led to lower rejection rates. However, the rejection rates from a direct and non-isolated method and from the indirect approach do not differ significantly from the rejection rates derived from the BDM mechanism (again, see Tab. 13).
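The benchmark of Study 3 relies on the BDM mechanism (Becker/DeGroot/Marschak 1964): the respondent states a WTP, a sale price is then drawn at random, and the respondent must buy at the drawn price if, and only if, the stated WTP is at least that price, which makes truthful bidding optimal. A minimal sketch (the price support and the stated WTP values are illustrative, not our survey data):

```python
import random

def bdm_purchase(stated_wtp, price_support, rng=random):
    """One BDM round: a sale price is drawn at random from price_support;
    the respondent buys (at the drawn price, not at the bid) iff the
    stated WTP is at least that price."""
    price = rng.choice(price_support)
    return stated_wtp >= price, price

# Price support mirroring the positive price levels of Study 3 (in EUR).
SUPPORT = [0.05, 0.10, 0.15]

buys, price = bdm_purchase(0.20, SUPPORT)   # WTP above every possible price
balks, _ = bdm_purchase(0.00, SUPPORT)      # WTP below every possible price
```

Because the randomly drawn price, not the stated bid, determines what the buyer pays, understating the WTP risks losing a profitable purchase and overstating it risks buying above one's valuation, so stating the true WTP is the respondent's best strategy.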

4. Summary

To assess customer preferences, market researchers should consider unacceptable attribute levels, which are very likely to distort estimates and thus decrease the capability of the results to correctly predict future market shares (Yee et al. 2007). In this paper – in a simple example (see Appendix) – we show that not considering unacceptable levels in preference measurement will very likely lead to distorted estimates. As a result, predictions about the future market share (and thus future sales and possible prices) might be too optimistic if the own product shows unacceptable levels or too pessimistic if competing products involve unacceptable levels. Various studies therefore indicate the need to identify completely unacceptable attribute levels at the individual level before measuring preferences; for an overview, see Klein (1987) or Mehta et al. (1992). However, considering too many levels completely unacceptable also distorts the estimates. On the one hand, product managers may launch products that do not achieve the target sales volume if competing products seem to be unacceptable but consumers actually buy them. On the other hand, marketing managers might also miss a profitable investment when a product seems unacceptable but is not. Thus, while it is important to consider unacceptable levels, the relative performance of the approach one uses to identify these levels should be tested. We are not aware of any study that compares the influence of alternative approaches on the identification of unacceptable levels with experimental data on real purchases. In our paper, we address two research questions: (I) Does the approach used to identify unacceptable levels itself influence the acceptance/rejection of specific levels? (II) Which method should be applied in order to mimic real decision making? The answer to Question (I) is clear for levels that are not clearly preferred or rejected. As a result of the three empirical studies, we are able to confirm that the type of evaluation task strongly influences the results of methods that seek to identify unacceptable attribute levels. We show that using an indirect approach will reduce rejection rates. However, in order to derive recommendations for researchers and practitioners, a relative performance measure should be used (research question (II)). Our results indicate that two types of approaches seem favorable: indirect methods and direct methods based on a non-isolated decision context. Both perform well when the acceptance/rejection rates of attribute levels surveyed with these approaches are compared with real purchases (based on a BDM mechanism). There are two reasons why it seems favorable to apply a direct approach that informs respondents about all other attributes and levels: First, from a respondent's perspective, it is often considered less demanding to directly assess the different attributes' levels and, second, it is faster than evaluating multi-attribute alternatives. By applying these direct approaches, interview time as well as respondents' cognitive depletion could be limited without sacrificing performance. Direct and isolated methods show inferior results.
This finding is interesting, as this approach has been very popular in the past (e.g., when applying previous versions of the commonly used ACA). We therefore conclude that respondents should be informed about all attributes and levels before trying to identify unacceptable levels. Direct and non-isolated as well as indirect methods share the characteristic that respondents know all relevant attributes and levels. We assume that they are therefore able to more easily identify levels that evoke trade-off decisions or non-compensatory choices, respectively. These findings are relevant for scholars and market researchers. Scholars should be aware of these results when developing new approaches for measuring customer preferences. Market researchers should be aware that market share predictions are distorted (too optimistic or too pessimistic) when one neglects possible unacceptable levels or falsely identifies too many levels as unacceptable. To date, little research has been conducted to test differences induced by the methods for identifying unacceptable levels. This is the first study that uses a benchmark (real purchases) to evaluate existing approaches (the study on perfume samples reported in this paper). In this study, we assessed unacceptable levels for a low-priced product based on a student sample. Future research should study both additional product categories and higher price levels. Moreover, future research could assess differences between student and non-student samples. Based on the research findings from this study but also from previous research, we expect to find significant differences between the approaches used to identify unacceptable levels in these conditions as well. Most importantly, future research should assess possible explanations for these differences. For example, different types of approaches could evoke the use of different decision-making strategies.

Comments

[1] Furthermore, the ACBC also considers "must have" levels, i.e., all alternatives are rejected if they do not show a specific level. Here, all other levels are defined as unacceptable.

[2] + [3] We also tested the interaction effect between isolated/non-isolated and no/with definition. This interaction effect did not have a significant influence.

Appendix

The Managerial Relevance of Identifying Unacceptable Levels in Preference Measurement – a Simple Example

The following example provides more insights into how ignoring unacceptable levels or incorrectly identifying too many of them influences estimates of preference measurement. In this context, two types of mistakes should be considered: (1) unacceptable levels are not identified and (2) too many levels are considered unacceptable. In this simple, hypothetical example, we assess preferences for notebooks that can be described by the three attributes display size (10 inch, 13 inch, 15 inch, and 17 inch), processor (entry-level, multimedia, and high-speed), and price (300 USD, 500 USD, and 700 USD). We focus on the preferences of five respondents and demonstrate the influence of one unacceptable level. Here, we assume that a 17-inch display will be an unacceptable level for those respondents who want to be mobile. The "true" preferences of the five respondents (r1 to r5) are presented in Tab. A1.

Tab. A1: True preferences of five respondents (scale: high values denote a high utility and low values a low utility caused by an attribute level; utility values were scaled in a manner that the alternative with the highest value gets a value of 10 and the least preferred alternative a value of zero)

Attribute       r1            r2            r3   r4   r5
Display
  10-inch       0             0             0    0    0
  13-inch       1             1             1    1    1
  15-inch       2             2             2    2    2
  17-inch       unacceptable  unacceptable  0    0    3
Processor
  entry-level   0             0             0    0    0
  multimedia    2             2             2    2    2
  high-speed    6             5             6    5    5
Price
  700 USD       0             0             0    0    0
  500 USD       1             1             1    1    1
  300 USD       2             3             2    3    2

Tab. A1 shows that a 17-inch laptop is unacceptable for Respondents 1 and 2, i.e., any product that shows this attribute level will be rejected no matter how favorable all the other attribute levels are. For Respondents 3 and 4, a 17-inch display is not preferable; however, this level is also not completely unacceptable in the sense that this inferior level cannot be compensated. In contrast to the others, Respondent 5 prefers a 17-inch laptop. Based on Addelman plans (Addelman 1962), 16 alternatives can be constructed that are evaluated by the respondents within the conjoint analysis task. Here, consumers are asked to evaluate several alternatives on an 11-point rating scale (with scale points between 0 and 10). When not considering unacceptable levels, this will lead to the evaluations presented in Tab. A2.

Tab. A2: Evaluations when not considering unacceptable levels

Product  Display  Processor    Price    r1  r2  r3  r4  r5
1        13-inch  multimedia   700 USD  3   3   3   3   3
2        13-inch  entry-level  500 USD  2   2   2   2   2
3        17-inch  entry-level  700 USD  0   0   0   0   3
4        15-inch  multimedia   700 USD  4   4   4   4   4
5        15-inch  entry-level  300 USD  4   5   4   5   4
6        10-inch  high-speed   700 USD  6   5   6   5   5
7        10-inch  entry-level  700 USD  0   0   0   0   0
8        10-inch  entry-level  500 USD  1   1   1   1   1
9        15-inch  entry-level  700 USD  2   2   2   2   2
10       10-inch  multimedia   300 USD  4   5   4   5   4
11       17-inch  multimedia   500 USD  0   0   3   3   6
12       17-inch  entry-level  300 USD  0   0   2   3   5
13       13-inch  entry-level  700 USD  1   1   1   1   1
14       15-inch  high-speed   500 USD  9   8   9   8   8
15       17-inch  high-speed   700 USD  0   0   6   5   8
16       13-inch  high-speed   300 USD  9   9   9   9   8

Please note that Respondents 1 and 2 rated all alternatives that show a 17-inch display with "0". In this example, all respondents provide 100 % consistent answers, i.e., lower values of the internal validity indicate that the compensatory approach (the linear utility function) leads to (at least to some extent) distorted results. The part worths for the attribute levels can be derived based on a regression analysis. Tab. A3 presents the results. In this simple example, compared to the "real" preferences presented in Tab. A1, the estimated part worths of most levels are distorted for Respondents 1 and 2 due to the unacceptable level of the 17-inch display. For example, Respondent 1's real preference for a high-speed processor is 6, while the estimated preference is 4.75. Thus, the unacceptable level of a 17-inch display influences the other attribute levels' values. Furthermore, unacceptable levels also decrease the values for the internal validity (here, we present the adjusted R²). If unacceptable levels are identified upfront, all alternatives described by these levels are excluded from the conjoint analysis's evaluation task. However, a method may also identify too many levels as unacceptable. In our next example, a 17-inch display is mistakenly defined as unacceptable for Respondents 3 and 4. In the conjoint analysis task, for Respondents 1 to 4, all alternatives that show a 17-inch display are removed; therefore, no utility values are estimated for this level. The estimated part worths are presented in Tab. A4.

Tab. A3: Estimated part worths when neglecting unacceptable levels; * denotes the reference level for each attribute

Attribute       r1     r2     r3    r4    r5
Adjusted R²     0.805  0.845  1     1     1
Constant term   0.38   0.38   0.00  0.00  0.00
Display
  10-inch*      0.00   0.00   0.00  0.00  0.00
  13-inch       1.00   1.00   1.00  1.00  1.00
  15-inch       2.00   2.00   2.00  2.00  2.00
  17-inch       -2.75  -2.75  0.00  0.00  3.00
Processor
  entry-level*  0.00   0.00   0.00  0.00  0.00
  multimedia    1.50   1.63   2.00  2.00  2.00
  high-speed    4.75   4.13   6.00  5.00  5.00
Price
  700 USD*      0.00   0.00   0.00  0.00  0.00
  500 USD       1.00   0.88   1.00  1.00  1.00
  300 USD       2.25   2.88   2.00  3.00  2.00

Tab. A4: Estimated part worths when directly assessing unacceptable levels; * denotes the reference level for each attribute; – denotes a level excluded as unacceptable

Attribute       r1  r2  r3  r4  r5
Adjusted R²     1   1   1   1   1
Constant term   0   0   0   0   0
Display
  10-inch*      0   0   0   0   0
  13-inch       1   1   1   1   1
  15-inch       2   2   2   2   2
  17-inch       –   –   –   –   3
Processor
  entry-level*  0   0   0   0   0
  multimedia    2   2   2   2   2
  high-speed    6   5   6   5   5
Price
  700 USD*      0   0   0   0   0
  500 USD       1   1   1   1   1
  300 USD       2   3   2   3   2

In this example, considering unacceptable levels does not distort the estimated part worths, nor does it negatively influence the internal validity (the values for the adjusted R²). However, if too many attribute levels are identified as unacceptable, the estimated market shares are influenced. Tab. A5 provides an overview of the estimated market shares for four products. The values are based on the BTL (Bradley-Terry-Luce) rule. When applying the BTL rule, each of the products in the market simulation is assigned a probability of being chosen by the decision maker. The probability of selecting one alternative is derived from the product's utility and denotes the proportional share of the respondent's total utility across all products.

Tab. A5: Estimated market shares for a market with four products

              Display  Processor    Price    No unacceptable  Too many unacceptable  Correctly identified
                                             levels           levels                 unacceptable levels
Competitor 1  15-inch  multimedia   500 USD  31%              48%                    34%
Competitor 2  13-inch  multimedia   500 USD  25%              19%                    28%
Competitor 3  10-inch  entry-level  300 USD  17%              23%                    17%
Own offer     17-inch  high-speed   700 USD  27%              10%                    21%

The results show that the market share for the own product (here, a laptop with a 17-inch display) will be overestimated if no unacceptable levels are considered. If too many levels are identified as unacceptable, the results will also be distorted; in this example, the market share is (heavily) underestimated. We therefore conclude that neglecting unacceptable levels or identifying too many of them in preference measurement will heavily influence preference measurement estimates.


References

Addelman, S. (1962): Orthogonal main-effect plans for asymmetrical factorial experiments, in Technometrics, Vol. 4, No. 1, pp. 21–46.

Bateman, I./Munro, A./Rhodes, B./Starmer, C./Sugden, R. (2000): A test of the theory of reference-dependent preferences, in Kahneman, D./Tversky, A. (Eds.): Choices, values, and frames, pp. 180–201, Cambridge: Cambridge University Press.

Becker, G. M./DeGroot, M. H./Marschak, J. (1964): Measuring Utility by a Single-Response Sequential Method, in Behavioral Science, Vol. 9, No. 3, pp. 226–232.

Bettman, J. R./Luce, M.-F./Payne, J. W. (1998): Constructive consumer choice processes, in Journal of Consumer Research, Vol. 25, No. 3, pp. 187–217.

Brazell, J. D./Diener, C. G./Karniouchina, E./Moore, W. L./Séverin, V./Uldry, P.-F. (2006): The no-choice option and dual response choice designs, in Marketing Letters, Vol. 17, No. 4, pp. 255–268.

Bucklin, R. E./Srinivasan, V. (1991): Determining interbrand substitutability through survey measurement of consumer preference structures, in Journal of Marketing Research, Vol. 28, No. 1, pp. 58–71.

Darmon, R. Y./Rouziès, D. (1989): Assessing conjoint analysis internal validity: The effect of various continuous attribute level spacings, in International Journal of Research in Marketing, Vol. 6, No. 1, pp. 35–44.

DeSarbo, W. S./Lehmann, D. R./Carpenter, G./Sinha, I. (1996): A stochastic multidimensional unfolding approach for representing phased decision outcomes, in Psychometrika, Vol. 61, No. 3, pp. 485–508.

Dorsch, M. J./Teas, K. R. (1992): A test of the convergent validity of self-explicated and decompositional conjoint measurement, in Journal of the Academy of Marketing Science, Vol. 20, No. 1, pp. 37–48.

Ford, K. J./Schmitt, N./Schechtman, S. L./Hults, B. M./Doherty, M. L. (1989): Process tracing methods: Contributions, problems, and neglected research questions, in Organizational Behavior and Human Decision Processes, Vol. 43, No. 1, pp. 75–117.

Gilbride, T./Allenby, G. M. (2004): A choice model with conjunctive, disjunctive, and compensatory screening rules, in Marketing Science, Vol. 23, No. 3, pp. 391–406.

Green, P. E. (1984): Hybrid models for conjoint analysis: An expository review, in Journal of Marketing Research, Vol. 21, No. 2, pp. 155–169.

Green, P. E./Krieger, A. M./Bansal, P. (1988): Completely unacceptable levels in conjoint analysis: A cautionary note, in Journal of Marketing Research, Vol. 25, No. 3, pp. 293–300.

Green, P. E./Krieger, A. M./Wind, Y. J. (2001): Thirty years of conjoint analysis: Reflections and prospects, in Interfaces, Vol. 31, No. 3, pp. 56–73.

Green, P. E./Srinivasan, V. (1990): Conjoint analysis in marketing: New developments with implications for research and practice, in Journal of Marketing, Vol. 54, No. 4, pp. 3–19.

Jedidi, K./Kohli, R. (2005): Probabilistic subset-conjunctive models for heterogeneous consumers, in Journal of Marketing Research, Vol. 42, No. 4, pp. 483–494.

Klein, N. M. (1987): Assessing unacceptable attribute levels in conjoint analysis, in Advances in Consumer Research, Vol. 14, No. 1, pp. 154–158.

Kohli, R./Jedidi, K. (2007): Representation and inference of lexicographic preference models and their variants, in Marketing Science, Vol. 26, No. 3, pp. 380–399.

Louviere, J. J. (1988): Analyzing decision making: Metric conjoint analysis, in Lewis-Beck, M. (Ed.): Quantitative Applications in the Social Sciences, No. 67, Iowa: Sage University Paper Series.

Mehta, R./Moore, W. L./Pavia, T. M. (1992): An examination of the use of unacceptable levels in conjoint analysis, in Journal of Consumer Research, Vol. 19, No. 3, pp. 470–476.

Miller, K. M./Hofstetter, R./Krohmer, H./Zhang, Z. J. (2011): How should Consumers' Willingness to Pay be measured? An Empirical Comparison of State-of-the-Art Approaches, in Journal of Marketing Research, Vol. 48, No. 1, pp. 172–184.

Mintz, O./Currim, I. S./Jeliazkov, I. (2010): Consumer search and propensity to buy, Working Paper, University of California, Irvine.

Sawtooth Software (2007): The ACA/Web v6.0 Technical Paper, in Sawtooth Software Technical Paper Series.

Sawtooth Software (2009): ACBC – Technical Paper, in Sawtooth Software Technical Paper Series.

Scholz, S. W./Meissner, M./Decker, R. (2010): Measuring consumer preferences for complex products: A compositional approach based on paired comparisons, in Journal of Marketing Research, Vol. 47, No. 4, pp. 685–698.

Srinivasan, V. (1988): A conjunctive-compensatory approach to the self-explication of multiattributed preferences, in Decision Sciences, Vol. 19, No. 2, pp. 295–305.

Wertenbroch, K./Skiera, B. (2002): Measuring Consumers' Willingness to Pay at the Point of Purchase, in Journal of Marketing Research, Vol. 39, No. 2, pp. 228–241.

Wind, J./Mahajan, V. (1997): Issues and opportunities in new product development: An introduction to the special issue, in Journal of Marketing Research, Vol. 34, No. 1, pp. 1–12.

Yee, M./Dahan, E./Hauser, J. R./Orlin, J. B. (2007): Greedoid-based noncompensatory inference, in Marketing Science, Vol. 26, No. 4, pp. 532–549.

Keywords preference measurement, unacceptable attribute levels, conjoint analysis, biases in preference measurement, heterogeneous preferences
