UNDERSTANDING MAIL SURVEY RESPONSE BEHAVIOR: A META-ANALYSIS

FRANCIS J. YAMMARINO
STEVEN J. SKINNER
TERRY L. CHILDERS

Abstract  A meta-analysis of prior studies of techniques designed to induce mail survey response rates was conducted. Research encompassing 184 effects (study outcomes) in 115 studies (articles) for 17 predictors of response rate was examined. The average effect size across all manipulations was r = .065, indicating an average increase of about 6.5 percent in response rates for manipulations. Effect sizes for specific predictors and two potential moderators of effects were examined. Results indicated that repeated contacts in the form of preliminary notification and follow-ups, appeals, inclusion of a return envelope, postage, and monetary incentives were effective in increasing survey response rates. Significant effect sizes for the predictors ranged from an increase in response of 2 percent to 31 percent. Implications of the results for the conduct of mail surveys and future research on mail survey response behavior are discussed.

Correspondence regarding this article should be addressed to FRANCIS J. YAMMARINO, School of Management and Center for Leadership Studies, State University of New York at Binghamton, P.O. Box 6000, Binghamton, NY 13902-6000. STEVEN J. SKINNER is at the College of Business and Economics, University of Kentucky, Lexington, KY. TERRY L. CHILDERS is at the Carlson School of Management, University of Minnesota, Minneapolis, MN. This research was supported in part by a New Faculty Development Award to F. J. Yammarino from the NYS/UUP Professional Development and Quality of Working Life Committee. The views expressed are ours. We gratefully acknowledge the helpful comments provided by the anonymous reviewers on an earlier version of this article.

Public Opinion Quarterly Volume 55:613-639 © 1991 by the American Association for Public Opinion Research. All rights reserved. 0033-362X/91/5504-0004$02.50

Researchers have amassed myriad techniques to stimulate mail survey response rates, reduce item omission, speed up response, and reduce response bias. Examples include the use of preliminary notification, follow-up, sponsorship, appeals, postage, personalization, incentives, anonymity, prior commitment, and techniques affecting questionnaire
appearance. These methods represent manipulations or predictors which, at least in some instances, have served to facilitate survey participation. A comprehensive list and review of studies related to the effectiveness of these methods can be found in the work of Duncan (1979), Houston and Ford (1976), Kanuk and Berenson (1975), and Linsky (1975). Linsky (1975), for example, found strong evidence that preliminary notification, follow-ups, "high powered" postage, sponsorship, small monetary incentives, and nonmonetary incentives were effective in increasing response rates. Other manipulations, according to Linsky (1975), are equivocal or interact with additional factors to affect response rates. Likewise, Kanuk and Berenson (1975) determined that monetary incentives (especially small ones), follow-up contacts, and perhaps preliminary notification were the only techniques with strong empirical evidence of effectiveness in increasing response rates. Other techniques showed no consistent effect. In a recent qualitative review, Harvey (1987) found that follow-ups, preliminary notification, stamped reply envelopes, and monetary incentives were important factors for increasing response rates. Harvey (1987) also determined that a number of other techniques such as personalization, length, sponsorship, and appeals had a more ambiguous effect on response rates. Although useful, traditional qualitative reviews are limited in that they do not explicitly consider differences in studies due to sample size, sample composition, experimental execution, and effect size (Fox, Crask, and Kim 1988). Recently, quantitative reviews such as meta-analysis, designed to address these limitations, have emerged. Several of these meta-analyses provide important background for the present study and are summarized in table 1. In an early comprehensive review of the mail survey literature, Heberlein and Baumgartner (1978) examined nine factors affecting initial and final survey response rates across 98 studies. They were able to explain 51 percent of the variance in final response through the number of respondent contacts (preliminary and follow-up) and the saliency of the survey topic. Additionally, Heberlein and Baumgartner (1978) found that sponsorship, length of the questionnaire, type of population surveyed, incentives, and postage (metered or franked) were also significant determinants of response (see table 1). Goyder (1982) replicated the Heberlein and Baumgartner (1978) study by conducting a meta-analysis on 145 citations. His findings were largely consistent with the predictive model of Heberlein and Baumgartner, finding that type of population, number of contacts, sponsorship, and
monetary incentives were most important in predicting final survey response rate. In another quantitative review of 93 studies covering all types of surveys (personal, telephone, and mail), Yu and Cooper (1983) examined 15 factors reported potentially to affect survey participation rates. They found, across methods of contact, that response rates were improved by incentives (prepaid and promised), nonmonetary premiums and rewards, and by increasing the amount of monetary rewards. Other techniques that increased response were preliminary notification, foot-in-the-door approaches, personalization, and follow-up contacts (table 1). In a recent meta-analysis, Fox, Crask, and Kim (1988) examined 82 studies with respect to eight factors that have been associated across studies with higher response rates to mail surveys. In their analysis, prenotification, follow-ups, and stamped return postage were found to be significant determinants of mail survey response rates (table 1). Additionally, sponsorship and small monetary incentives enhanced response. Two other more focused quantitative reviews examined the effects of personalization (Worthen and Valcarce 1985) and several types of return postage (Armstrong and Lusk 1987). Worthen and Valcarce (1985) reviewed 26 studies and concluded that the effect size for personalization in previous research was small; in a follow-up experiment, they found no effects on response rate due to cover letter personalization. Armstrong and Lusk (1987) conducted their meta-analysis on 34 studies and concluded that the only effect of consequence on response rate was first-class postage as compared to business reply, and even that effect was weak (table 1). In synthesizing across the prior meta-analyses, a subset of the mail survey response determinants appears to be consistently predictive of response rates. Drawing from the above studies, repeated contacts, whether in the form of preliminary notification of a survey or a survey follow-up, appear to have a considerable positive impact on response rates (Fox, Crask, and Kim 1988; Goyder 1982; Heberlein and Baumgartner 1978; Yu and Cooper 1983). Another factor that has been reported to have a consistent positive effect is the use of monetary incentives (Fox, Crask, and Kim 1988; Goyder 1982; Heberlein and Baumgartner 1978; Yu and Cooper 1983). In addition, the sponsor of a survey, although less under the control of the researcher, has been a key determinant of survey response (Fox, Crask, and Kim 1988; Goyder 1982; Heberlein and Baumgartner 1978). Finally, the inclusion and type of outgoing and return postage has been observed consistently to affect response rates (Armstrong and Lusk 1987; Fox, Crask, and Kim 1988).

[Table 1 appears here: a tabular summary of the prior meta-analyses of factors affecting mail survey response rates; its entries are not recoverable from this source.]

The present study complements and extends previous work, including the meta-analyses of Armstrong and Lusk (1987), Fox, Crask, and Kim (1988), Goyder (1982), Heberlein and Baumgartner (1978), Worthen and Valcarce (1985), and Yu and Cooper (1983), in several ways. First, a meta-analysis is used to assess more factors influencing response rate (17; see table 2) than were examined by previous researchers. As summarized in table 1, the present research found several of these factors to have a significant effect on survey response (see details below). Second, a larger number of study outcomes (184 effects in 115 articles) than in most prior research is included in the current meta-analysis. Third, unlike previous work, two moderator variables (year of publication and type of sample) which may have affected the results of previous studies are examined. Given previous work (Harvey 1987; Heberlein and Baumgartner 1978; Yu and Cooper 1983), the publication year of the study was included as a type of cohort effect, both because of changes in respondents' habits and journal editors' preferences over time and to reflect any potential accumulation of survey administration knowledge. The type of subject sampled (e.g., consumer, educational, industrial) was included as a potential influence on response rates given prior work and comments on subjects and target populations (Goyder 1982; Harvey 1987; Heberlein and Baumgartner 1978; Kanuk and Berenson 1975; Linsky 1975). In these ways, the purpose of this study was to enhance understanding of mail survey response behavior by presenting a more comprehensive meta-analytic investigation of factors that affect mail survey response rates.

Meta-analysis Method

To accomplish this purpose, a quantitative review of research design effects on mail survey response rates was conducted using meta-analysis procedures described by Glass (1977), Hunter and Schmidt (1990), and Hunter, Schmidt, and Jackson (1982). The procedures were used to determine mean effect sizes for factors that affect response rate, whether these effects are the same for different target audiences, and whether the effects have changed over time. Also, based on these procedures, a determination can be made whether variation in results across samples (studies) was due to methodological problems and statistical artifacts (e.g., sampling error) as compared to actual substantive differences in subpopulation correlations of response rates and design effects identified by underlying moderator variables (e.g., target groups or time periods) (see details below). Response rate, the dependent variable, was defined as response as a percent of the size of the
contacted sample (Yu and Cooper 1983). The independent variables were the manipulated factors designed to affect response rate.

STUDY INCLUSION AND CODING

Individual studies (articles) that were included in the meta-analysis were identified in several ways. First, reference lists of several previously published review articles on mail surveys were examined (e.g., Fox, Crask, and Kim 1988; Harvey 1987; Heberlein and Baumgartner 1978; Kanuk and Berenson 1975; Linsky 1975; Yu and Cooper 1983). Second, a computer search of multiple abstracting services was conducted for all available years in the databases. These included ABI/Inform (1971+), ERIC (1966+), Social SciSearch (1972+), Sociological Abstracts (1963+), and PsycINFO (1967+). Key words used in the search included "mail surveys," "response rates," and each of these terms combined with the names and synonyms for the various factors listed in table 1. Third, for recent work (1978+), a manual search of 10 journals that have a history of publishing mail survey studies was performed (i.e., American Sociological Review, Journal of the Academy of Marketing Science, Journal of Advertising Research, Journal of the American Statistical Association, Journal of Applied Psychology, Journal of Marketing, Journal of Marketing Research, Journal of the Market Research Society, Psychological Reports, and Public Opinion Quarterly). The manual search uncovered three articles not identified from previously published reference lists or from the computer search. To be included in the meta-analysis, studies must have manipulated a factor that influences response rates and reported response rates for the different experimental conditions. If more than one factor was manipulated in a study, each individual effect (study outcome) was included as a separate "data point" for the meta-analysis. Over 25 journals from disciplines such as business, education, marketing, political science, psychology, sociology, and statistics were represented. The 10 journals that yielded the most articles for the meta-analysis were: Public Opinion Quarterly, 27 articles; Journal of Marketing Research, 15 articles; Journal of Applied Psychology, 11 articles; Journal of Marketing and Journal of the Market Research Society, 8 articles each; Journal of Advertising Research, 7 articles; American Sociological Review and Journal of the Academy of Marketing Science, 4 articles each; Journal of the American Statistical Association and Psychological Reports, 3 articles each. The remaining journals produced 1 or 2 articles for the meta-analysis. The time period was from 1940 to January 1988, and 115 articles yielding 184 data points (effects) for 17 different predictors of response rate were included. Although many
articles yielded multiple data points, the effects were always for different predictors of response rate. Thus, for any given predictor, only one data point per study was included in the meta-analysis. The articles included in the meta-analysis are listed in the Appendix, and the 17 manipulated factors of interest are identified in table 2. All information relevant to the meta-analysis was coded from the articles by four trained coders. Data from each article were coded by two individuals independently. The coders then compared their information. In cases of disagreement, a third coder was asked to code the article in question independently, and then all three coders met to determine a consensus coding for the article. Although inter-rater reliability statistics were not calculated, there were very few instances of nonagreement between raters. Response rate, the factor that was manipulated (in general, a "yes/no" code was used), sample size, statistics, effect sizes, and direction of relationships were coded. The two moderator variables also were coded. To ensure an adequate number of data points for meta-analysis for the predictors of response rates under several conditions of the moderators, publication years were grouped as 1940-70, 1971-75, and 1976-87, while sample type was classified to contrast consumers versus various institutional groups (see details below).

META-ANALYSIS CALCULATIONS

To distinguish between artifactual (e.g., sampling error) and substantive sources of variation, the procedures developed by Glass (1977), Hunter and Schmidt (1990), and Hunter, Schmidt, and Jackson (1982) were combined in a computer program specifically designed for the meta-analysis in this study. The work of Glass (1977, p. 374) was used to convert study statistics (e.g., t, F, χ², contingency table, Mann-Whitney U) to uncorrected product-moment correlations. Measurement error (reliability) and range restriction affecting these correlations were generally not issues of concern in the present study because the variables involved were actual counts of mailings and responses. Except for potential coding and typographical errors, therefore, reliability should be 1.00 and the range should be completely unrestricted. The frequency-weighted mean correlation (r̄) across studies, the best estimate of the population correlation (ρ), and the frequency-weighted average squared error (s²ᵣ), the corresponding uncorrected or observed variance of correlations across studies, were then computed using the work of Hunter and Schmidt (1990) and Hunter, Schmidt, and Jackson (1982). Because the observed or uncorrected total variance (s²ᵣ) can be confounded by variation in sample correlations produced by sampling error, it was corrected by removing sampling error
variance as described by Hunter and Schmidt (1990) and Hunter, Schmidt, and Jackson (1982) to derive the best estimate of the population variance (σ²ᵨ). To test the significance of the variance corrected for sampling error, or unexplained variance, a χ² statistic was used (Hunter and Schmidt 1990; Hunter, Schmidt, and Jackson 1982). Because of very high statistical power, however, even very small amounts of variation across studies will be statistically significant. That is, variation may be minimal in magnitude yet statistically significant; but when the χ² is nonsignificant, strong support is provided for the lack of actual variation across studies. As suggested by Hunter and Schmidt (1990) and Hunter, Schmidt, and Jackson (1982), when the unexplained (corrected) variance is small, inconsistent results across studies are due mainly to statistical artifacts, and subpopulation correlations do not differ. Thus, it is not necessary to search for moderator variables. In contrast, when the corrected (unexplained) variance is large, inconsistent results across studies are not artifactual, and they recommend examination of moderator variables (year of publication and type of sample in the present study) to identify subpopulations with different correlational values. As a general rule, a moderator search should be initiated if the ratio of unexplained (corrected) to total variance is greater than 25 percent, or equivalently, if the ratio of explained (sampling error) to total variance is less than 75 percent. Regardless of other indicators (e.g., statistical significance of mean correlations), this 75 percent rule is used to determine whether a moderator search is warranted (Hunter and Schmidt 1990; Hunter, Schmidt, and Jackson 1982). The above calculations and procedures are then repeated within each condition (level) of the potential moderator variables. If (1) the frequency-weighted mean correlation varies from subgroup to subgroup and (2) the average corrected variance for the subgroups is lower than the corrected variance for the data set as a whole, then there is evidence for a moderated effect. Clearly, after examining one moderator, the unexplained (corrected) variance may still be greater than zero for each level of a moderator. This suggests that the remaining variance is due to additional moderators or statistical artifacts.
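To make the preceding computations concrete, the following is a minimal sketch of a Hunter-Schmidt "bare bones" meta-analysis in Python. It is not the authors' purpose-built program; the function name and the illustrative correlations and sample sizes are hypothetical.

```python
import numpy as np

def bare_bones_meta(rs, ns):
    """Hunter-Schmidt 'bare bones' meta-analysis of correlations.

    rs: study correlations for one predictor; ns: matching sample sizes.
    Returns the frequency-weighted mean correlation (the estimate of rho),
    the observed variance, the expected sampling error variance, the
    corrected (unexplained) variance, and the percent of observed variance
    explained by sampling error.
    """
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    r_bar = (ns * rs).sum() / ns.sum()                # frequency-weighted mean r
    s2_r = (ns * (rs - r_bar) ** 2).sum() / ns.sum()  # observed variance of correlations
    s2_e = (1 - r_bar ** 2) ** 2 / (ns.mean() - 1)    # expected sampling error variance
    var_rho = max(s2_r - s2_e, 0.0)                   # corrected (unexplained) variance
    pct = 100.0 if s2_r == 0 else min(100.0, 100.0 * s2_e / s2_r)
    return r_bar, s2_r, s2_e, var_rho, pct

# Hypothetical effects for a single predictor: under the 75 percent rule,
# search for moderators when sampling error explains less than 75 percent
# of the observed variance in correlations.
r_bar, s2_r, s2_e, var_rho, pct = bare_bones_meta(
    rs=[0.05, 0.12, 0.30, -0.02], ns=[200, 450, 150, 600])
print(f"mean r = {r_bar:.3f}; sampling error explains {pct:.1f}% of variance")
print("moderator search warranted" if pct < 75 else "one population")
```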

Results

OVERALL RESULTS

The findings of the meta-analysis for each of the research design effects (predictors) on mail survey response rates are presented in table 2.

[Tables 2 and 3 appear here. Table 2 reports the meta-analysis results for each of the 17 predictors (the statistics are described in the next paragraph); table 3 reports the frequency distribution of the 184 correlations. The tabled values are not recoverable from this source.]

For each predictor shown in the first column of the table, the following information is presented in columns 2-9: frequency-weighted mean correlation; 95 percent confidence interval for the correlation; corrected (unexplained) variance (i.e., total minus sampling error variance); χ² test of corrected (unexplained) variance; number of correlations (effects); range of the correlations (effects); total sample size; uncorrected total or observed variance in the sample correlations; and variance explained by sampling error (i.e., ratio of sampling error to total variance). Results collapsed across all design effects are shown in the first row of table 2; that is, regardless of the manipulation in a study, the effect (correlation) was .065, which was not statistically significant (the 95 percent confidence interval includes zero). The range of the correlations in this case was -.217 to .850. The 184 correlations were distributed as shown in table 3. With the exception of one outlier, the correlations were generally well dispersed throughout the range and somewhat normally distributed. As indicated by the remaining data in the first row of table 2, subpopulation correlations should be examined, and the logical starting point is with the 17 different manipulations also identified in table 2. Inspection of the distributions of the correlations for each of the 17 manipulations shown in table 2 revealed two somewhat interesting results. First, for incentives greater than $.50 and less than or equal to $1.00, the frequency-weighted mean correlation of .119 was
obtained from two significant effects (.434 and .308) and two nonsignificant effects (.025 and .043). This "split" distribution and small number of correlations may thus indicate an unstable finding. Second, for anonymity, the frequency-weighted mean correlation of .021 was obtained from 10 effects within the -.047 to .089 range and one outlier effect (.850). But this outlier effect was based on a small number of subjects (300 out of 74,259), so the overall effect size was not inflated artificially. For the remaining manipulations, the effects were fairly well distributed across each of the ranges reported in table 2. Of the 17 correlations, only those for appeals (.047), reply/return envelope (.079), incentives of $.50 or less (.184), incentives greater than $1.00 (.122), and survey length greater than four pages (-.078) differed significantly from zero (95 percent C.I.). For appeals and survey length greater than four pages, χ² tests of corrected (unexplained) variance were not significant, and more than 75 percent of total (uncorrected) variance was explained by sampling error variance. A moderator search for these two predictors of response rate was not necessary; the correlations for appeals and survey length greater than four pages were significant and derived from one population. Although χ² tests for special delivery/air mail return postage, incentives greater than $1.00, and deadlines were not significant, the ratio of sampling error to total variance for each is less than 75 percent. For the remaining manipulations identified in table 2, χ² tests were significant, and less than 75 percent of total variance was explained by sampling error variance. Therefore, except for appeals and survey length greater than four pages, the sample correlations were derived from more than one population, and moderators need to be investigated.

MODERATOR RESULTS

Subpopulation correlations were investigated for two potential moderators of the relationship between research design effects and mail survey response rates: year of publication and type of sample. Frequencies of correlations across all predictors of response rate for different levels of the two moderators were examined. The final categorization of year of publication was 1940-70 (62 correlations), 1971-75 (55 correlations), and 1976-87 (67 correlations). Type of sample was originally coded as consumer (108 correlations), educational (35 correlations), industrial (19 correlations), health care (10 correlations), governmental (4 correlations), and other institutional (8 correlations) groups. The final categorization of type of sample was consumer and institutional groups (108 and 76 correlations, respectively). These final categorizations were chosen to ensure an adequate number of
correlations per cell for meta-analysis of individual predictors as well as to identify two independent moderators. Nonsignificant χ² results (2.116) indicated that the frequencies of correlations did not differ by cell, and the two moderator coding schemes, year of publication and type of sample, were independent of one another (r = .093, n.s.; phi = .107). Thus, two separate ways of specifying subpopulations were identified. Moderator analyses were conducted for all predictors of response rates for which variance explained by sampling error was less than 75 percent and which had sufficient numbers of correlations (three or more) in cells based on time periods or types of samples to permit meta-analysis. To conserve space, only results of comparisons meeting the previously described criteria for determining a significant moderator are presented in table 4. The same type of information about correlations and variances as displayed in table 2 is shown in table 4 for predictors of mail survey response rates by the year of publication and type of sample moderators.

[Table 4 appears here; its tabled values are not recoverable from this source.]

For four predictors of response rate (preliminary notification, reply/return envelope, stamped/metered postage, and personalization), all three levels of the year of publication moderator were represented. In only two cases, however, did there seem to be a moderated effect. For preliminary notification, the subgroup correlations varied from .010 (1971-75) to .285 (1976-87), the latter being significantly different from zero, and the average corrected variance in the subgroups was .014 as compared to a corrected variance of .023 for the data as a whole. Although variance explained by sampling error for the subgroups did not reach 75 percent (suggesting the presence of additional moderators), year of publication did appear to moderate the preliminary notification-response rate relationship. For stamped/metered return postage, the subgroup correlations ranged from -.008 (1971-75) to .066 (1940-70), all three differed significantly from zero, the average corrected variance in the subgroups was zero as compared to a corrected variance of .001 for the data as a whole, and variance explained by sampling error was 100 percent in all subgroups. Year of publication, therefore, did appear to be a significant moderator of the stamped/metered return postage-response rate relationship. For eight design effects on response rates (preliminary notification, follow-ups/repeated contacts, stamped/metered postage, personalization, incentives less than or equal to $.50, nonmonetary incentives, survey appearance, and anonymity), both levels of the type of sample moderator were represented. In only two cases, however, did there seem to be a moderated effect. For follow-ups/repeated contacts, the subgroup correlations were .073 (consumer groups) and .306 (institutional groups), the latter being significantly different from zero, and the average corrected variance in the subgroups was .002 as compared
to a corrected variance of .015 for the data as a whole. Although variance explained by sampling error for the subgroups did not reach 75 percent (suggesting the presence of additional moderators), type of sample did appear to moderate the follow-ups/repeated contacts-response rate relationship. For stamped/metered return postage, the subgroup correlations were .009 (consumer groups) and .061 (institutional groups), the latter being significantly different from zero, and the average corrected variance in the subgroups was .000 as compared to a corrected variance of .001 for the data as a whole. In the institutional (but not in the consumer) subgroup, the variance explained by sampling error exceeded 75 percent. Although weak, type of sample did seem to moderate the stamped/metered return postage-response rate relationship.
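The subgroup test applied above (differing subgroup mean correlations plus a drop in average corrected variance relative to the pooled analysis) can be expressed as a short extension of the earlier bare-bones sketch. The helper below, its grouping labels, and its input values are hypothetical, not the authors' program; it assumes the bare_bones_meta function defined in the Method section.

```python
def moderator_check(effects):
    """Compare subgroup meta-analyses against the pooled analysis.

    effects: dict mapping a moderator level (e.g., 'consumer' vs.
    'institutional') to a (correlations, sample_sizes) pair.
    Evidence for moderation: the subgroup mean correlations differ, and
    the average corrected variance within subgroups falls below the
    corrected variance for the data set as a whole.
    """
    all_rs = [r for rs, _ in effects.values() for r in rs]
    all_ns = [n for _, ns in effects.values() for n in ns]
    pooled_var = bare_bones_meta(all_rs, all_ns)[3]   # corrected variance, pooled
    sub = {k: bare_bones_meta(rs, ns) for k, (rs, ns) in effects.items()}
    avg_sub_var = sum(v[3] for v in sub.values()) / len(sub)
    means = {k: round(v[0], 3) for k, v in sub.items()}
    return means, avg_sub_var, pooled_var, avg_sub_var < pooled_var

# Hypothetical follow-up/repeated-contact effects split by sample type.
means, avg_var, pooled_var, moderated = moderator_check({
    "consumer":      ([0.05, 0.09, 0.06], [300, 500, 250]),
    "institutional": ([0.28, 0.33, 0.30], [120, 200, 180]),
})
print(means, f"avg subgroup var = {avg_var:.4f}",
      f"pooled var = {pooled_var:.4f}",
      "moderated" if moderated else "not moderated", sep="; ")
```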

Discussion

KEY RESULTS AND IMPLICATIONS

Four key findings can be identified from the meta-analysis results reported in this study. First, the relationships between appeals and response rate, and between survey length greater than four pages and response rate, were statistically significant and derived from one population; that is, the relationships are generalizable across the studies investigated. Thus, in designing mail surveys, regardless of the target population, researchers who use a cover letter that includes appeals and a survey of no more than four pages should be able to increase their response rates. Second, the associations between reply/return envelope and response rate, incentives less than or equal to $.50 and response rate, and incentives greater than $1.00 and response rate, were statistically significant but derived from more than one population; that is, the relationships are situation specific and there is a need to examine potential moderators. Because the moderators in this study were not significant in the cases of these three predictors, to increase response rates by using one of these factors, researchers must consider other variables that may influence individuals' willingness to respond (e.g., situational factors, personality characteristics). Third, for the remaining predictor-response rate relationships investigated, the correlations were not statistically significant, and subgroup correlations identified by moderators need to be examined. Thus, without consideration of other variables that may influence individuals' response tendencies, these factors do not appear to enhance response. In this study, two potential moderators were examined.

The fourth finding pertains to the two moderators, year of publication and type of sample. In terms of the meta-analysis results for the year of publication moderator, several key findings can be specified. First, the relationship between preliminary notification and response rate varied across time periods, and for more recent times (1976-87), the correlation was statistically significant. Preliminary notification has been most effective in recent time periods, suggesting that respondents' changing attitudes are involved. Current researchers should therefore continue to use preliminary notification to enhance response rates. Although time or cohort influences appear to be significant, the results also suggest other moderators, and thus subgroups, could be affecting this relationship. Second, the association between stamped/metered return postage and response rate also varied with time, and all three correlations were statistically significant. Moreover, the results suggest that searching for additional moderators of this relationship may not be necessary. Third, for the remaining predictor-response rate relationships, year of publication was not concluded to be a significant moderator. Several key findings also can be specified for the meta-analysis results for the type of sample moderator. First, the association between follow-ups/repeated contacts and response rate differed for samples of consumer groups and institutional groups, and the correlation for the latter was statistically significant. Thus follow-ups/repeated contacts seemed to have a greater effect on institutional than consumer groups' response rates, and the designers of mail surveys for these different target groups should be aware of such differential responses. For both subgroups, however, additional moderators may be influencing this association. Second, the relationship between stamped/metered return postage and response rate differed (albeit weakly) for samples of consumer and institutional groups, and again, the latter correlation was statistically significant. There seemed to be a greater effect on the response rates of institutional as compared to consumer groups. However, for the consumer groups, at least, additional moderators may be operating. Third, for the remaining predictor-response rate relationships, type of sample was not concluded to be a significant moderator. The statistically significant effects in this study were moderate in magnitude. The practical implications of these findings can be illustrated in terms of a binomial effect size display (BESD; Rosenthal 1984; Wolf 1986). The predictor with the largest frequency-weighted mean correlation with response rate was incentives less than or equal to $.50 (r = .184). Using this incentive would increase mail survey response rates approximately 18.4 percent, or from a response rate of about 40.8 percent to 59.2 percent (see Rosenthal 1984, p. 131). Incentives of $1.00 or more would improve response rates 12.2 percent.
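The BESD arithmetic behind these figures is straightforward: a correlation r is displayed as the difference between two response rates centered on 50 percent. The following worked equation is a standard illustration of Rosenthal's display, not reproduced from the article's tables:

\[
\text{rates} = 50\% \pm \frac{100\,r}{2}\%, \qquad
r = .184 \;\Rightarrow\; 50 - 9.2 = 40.8\% \quad\text{and}\quad 50 + 9.2 = 59.2\% .
\]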

These incentive effects are consistent with the findings of Goyder (1982), Heberlein and Baumgartner (1978), and Yu and Cooper (1983). Preliminary notification for studies conducted between 1976 and 1987 and follow-ups for institutional respondent groups were also significantly related to response rates. All previous meta-analyses, with the exception of Armstrong and Lusk (1987), who did not examine any type of contact, report similar findings. The present results suggest that preliminary notification (1976-87) increased response rate by 28.5 percent, while follow-ups for institutional groups increased response rate by 30.6 percent. As in Fox, Crask, and Kim (1988) and Armstrong and Lusk (1987), postage was an important predictor of response rate in the current study. For all time periods, stamped/metered return postage was significantly related to response rate. Most recently (1976-87), stamped/metered postage increased response rate 2.4 percent, while in the past (1940-70), the enhancing effect on response rates was even greater (6.6 percent). The negative effect obtained for the period 1971-75, however, may be a statistical anomaly due to the small number of observed correlations (5). On the other hand, the trend over the 47-year time period indicates that the effect of postage is declining; this may signal an expectation on the part of respondents that postage is a survey requirement, so that its absence deters response more than its presence motivates it. Moreover, for institutional samples, stamped/metered return postage increased response rates about 6.1 percent. Again, however, this result is based on a small number of correlations (5). Compatible with the work of Heberlein and Baumgartner (1978), results from the current study indicated that response rates would be reduced by 7.8 percent for survey lengths of greater than four pages. In contrast to Yu and Cooper (1983), appeals (4.7 percent) and reply/return envelope (7.9 percent) had a positive effect on response rates in the present investigation. Overall, the average effect size across all predictors of response rate (r = .065) in the current study would indicate an increase in response rates of about 6.5 percent, or from 46.75 percent to 53.25 percent (see Rosenthal 1984, p. 131). Two current findings were not consistent with previous research. While Fox, Crask, and Kim (1988), Goyder (1982), and Heberlein and Baumgartner (1978) reported a significant effect for sponsorship, the present results did not support this finding. One reason for this difference may stem from the level of aggregation of studies used in the present study. By aggregating groups within the institutional category, subpopulation effects may have been obscured. For instance, Heberlein and Baumgartner (1978) found that government sponsorship increased response, while this study was unable to investigate this issue because only six data points were available for analysis. Also,
Yu and Cooper (1983) reported a significant relationship between nonmonetary incentives and response rate. Nonmonetary incentives were not a significant predictor of response rates in the present study, although the overall correlation did indicate the presence of a positive effect on response. These discrepancies from prior work may stem, in part, from the inclusion of both more recent studies and a greater number of effect sizes than in past research.

LIMITATIONS AND FUTURE WORK

Nevertheless, the results of this investigation are generally compatible with and extend those of prior reviews (e.g., Harvey 1987; Kanuk and Berenson 1975; Linsky 1975) and meta-analyses (e.g., Fox, Crask, and Kim 1988; Goyder 1982; Heberlein and Baumgartner 1978; Yu and Cooper 1983) of mail surveys. The extensions involve the clarification of prior work by examining a greater number of factors that influence response rate through a large number of study outcomes and by investigating moderators. The current study, however, is not without limitations, and thus several directions for future research can be suggested. First, the moderators that were chosen for investigation did not clarify the relationships as much as had been expected. Perhaps this was due to a poor coding scheme, or perhaps to the choice of moderators that were too global in nature. For instance, to ensure an adequate number of correlations for meta-analyses of individual predictors, it was necessary to combine educational, industrial, health care, governmental, and other institutional samples into one subgroup. This combination may have obscured or perhaps cancelled important differences in results among the groups. With a few exceptions, predictor-response rate relationships appear to be moderated by other variables. Future work concerning moderators might focus on characteristics of respondents (e.g., personality, prior knowledge, involvement, expertise), additional details about samples (e.g., academic, government, general public), and the situations (e.g., work vs. home or on-site intercepts) in which surveys are administered and/or completed. Also, results from census pretesting (Riche 1987) indicate there are significant geographic differences in response rates due to regional as well as metropolitan versus rural respondents. These results, as well as those from BLS surveys (Riche 1987) and the Goyder (1982) study, indicate that other potential moderators to be examined may relate to cultural and subcultural (e.g., blacks, Hispanics) differences in response to surveys. Second, the predictors of response rate were investigated in a limited way in this study; in general, a simple "yes/no" coding scheme was
used. Along the lines of the work of Worthen and Valcarce (1985) and Armstrong and Lusk (1987), future studies could investigate each predictor in this study in greater detail as well as focus on other and multiple predictors of response rate simultaneously. Likewise, some of the findings from the current research may be unstable due to the small number of effect sizes that were available to be synthesized (e.g., appeals, deadlines, incentives greater than $.50 and less than or equal to $1.00). As such, additional primary research on these predictors of response rate is necessary. In these ways, more comprehensive meta-analyses could be conducted in future work. Third, because the 184 data points (study outcomes) for the meta-analysis were obtained from 115 studies (articles), an average of 1.6 per study, there is nonindependence of study outcomes in several cases. For any given predictor of response rate, however, only one effect size per study was used in the meta-analyses. Although nonindependence may not be of concern when an individual factor influencing response rate is examined, it affects the overall meta-analysis encompassing all predictors. To address this issue, the average correlation per study across factors manipulated in that study was used in the overall meta-analysis. In future work, various statistical adjustments to the correlations or limiting the number of correlations included from any one study could be investigated as alternatives to the approach used here. Likewise, whether more than one manipulation (e.g., personalization and return postage) for the same subjects in one study reduces or enhances response rates could be investigated in future work. Fourth, a "file drawer problem" may exist in that inclusion of missing studies could alter the results of this investigation (Rosenthal 1979). The inclusion of unpublished studies, especially in cases where results are based on only a few published studies or the confidence intervals are close to zero, could affect the findings reported here. In future investigations, researchers might include both published and unpublished work in meta-analyses to obtain more comprehensive findings. Fifth, although guided by prior theoretical work, this investigation did not present a complete conceptualization or test of how and why people respond to mail surveys. Given the results of this investigation, future work could focus on testing, with meta-analysis as well as new data and experimental studies, a comprehensive model of mail survey response behavior. Candidates for the comprehensive theory include such social psychological theories as attribution and self-perception theory (cf. Hansen 1980; Reingen and Kernan 1977, 1979) and dissonance theory (Furse and Stewart 1982; Hackler and Bourgette 1973) as well as the equity/exchange model (Childers 1986; Childers and Skinner 1985). In the latter, the three constructs of cooperation,
commitment, and trust are integrated to provide an explanation for the impact of such significant effects as preliminary notification, follow-ups, and appeals (increasing respondent outcomes), postage/return envelope and questionnaire length (reducing respondent inputs), and incentives and sponsorship (increasing respondent trust). Despite some limitations, the current research incorporated several aspects of a properly conducted meta-analysis as identified by Bangert-Drowns (1986) and Bullock and Svyantek (1985). Moreover, the meta-analysis procedures of Glass (1977), Hunter and Schmidt (1990), and Hunter, Schmidt, and Jackson (1982) were combined in this study. The results of this research, in conjunction with those from the suggested future research efforts, should clarify understanding of mail survey response behavior and, thus, permit the design of surveys which increase response rates. The current investigation was intended as a step in that direction.

Appendix: Studies Included in Meta-analysis

Allen, C. T., C. D. Schewe, and G. Wijk. 1980. "More on Self-perception Theory's Foot Technique in the Pre-call/Mail Survey Setting." Journal of Marketing Research 17:498-501.
Anderson, J. F., and D. R. Berdie. 1975. "Effects on Response Rates of Formal and Informal Questionnaire Follow-up Techniques." Journal of Applied Psychology 60(2):255-57.
Andreasen, A. R. 1970. "Personalizing Mail Questionnaire Correspondence." Public Opinion Quarterly 34:273-77.
Armstrong, J. S. 1975. "Monetary Incentives in Mail Surveys." Public Opinion Quarterly 39:111-16.
Armstrong, J. S. 1979. "Advocacy and Objectivity in Science." Management Science 25:423-28.
Ash, P., and E. Abramson. 1952. "The Effect of Anonymity on Attitude-Questionnaire Response." Journal of Abnormal and Social Psychology 47:722-23.
Berdie, D. R. 1973. "Questionnaire Length and Response Rate." Journal of Applied Psychology 58(2):278-80.
Berry, S., and D. Kanouse. 1985. "Physician Response to a Mailed Survey: An Experiment in Timing of Payment." Public Opinion Quarterly 51:102-14.
Blass-Wilhelms, W. 1982. "Der Einfluss der Frankierungsart auf den Rücklauf von Antwortkarten" [The effect of postage type on the return of reply cards]. Zeitschrift für Soziologie 11:64-68.
Blumenfeld, W. S. 1973. "Effect of Appearance of Correspondence on Response Rate to a Mail Questionnaire Survey." Psychological Reports 32(1):178.
Brennan, R. D. 1958. "Trading Stamps as an Incentive in Mail Surveys." Journal of Marketing 22(3):306-7.
Brook, L. L. 1978. "The Effect of Different Postage Combinations on Response Levels and Speed of Reply." Journal of the Market Research Society 20:238-44.
Brown, M. L. 1965. "Use of a Postcard Query in Mail Surveys." Public Opinion Quarterly 29:635-37.
Brunner, A. G., and S. J. Carroll, Jr. 1967. "The Effect of Prior Telephone Appointments on Completion Rates and Response Content." Public Opinion Quarterly 31(4):652-54.
Carpenter, E. H. 1974. "Personalizing Mail Surveys: A Replication and Reassessment." Public Opinion Quarterly 38:614-20.
Champion, D. J., and A. M. Sear. 1969. "Questionnaire Response Rate: A Methodological Analysis." Social Forces 47:335-39.
Childers, T. L., and O. C. Ferrell. 1979. "Response Rates and Perceived Questionnaire Length in Mail Surveys." Journal of Marketing Research 16:429-31.
Childers, T. L., and S. Skinner. 1985. "Theoretical and Empirical Issues in the Identification of Survey Respondents." Journal of the Market Research Society 27(1):39-53.
Chromy, J. R., and D. G. Horvitz. 1978. "The Use of Monetary Incentives in National Assessment Household Surveys." Journal of the American Statistical Association 73:473-78.
Clausen, J. A., and R. N. Ford. 1947. "Controlling Bias in Mail Questionnaires." Journal of the American Statistical Association 42:497-511.
Cook, J., and N. Schoeps. 1985. "Program Response to Mail Surveys as a Function of Monetary Incentives." Psychological Reports 57:366.
Corey, S. 1937. "Signed vs. Unsigned Attitude Questionnaires." Journal of Educational Psychology 28:144-48.
Cox, E. P., III, W. T. Anderson, Jr., and D. G. Fulcher. 1974. "Reappraising Mail Survey Response Rates." Journal of Marketing Research 11:413-17.
Dillman, D. A., and J. H. Frey. 1973. "Contribution of Personalization to Mail Questionnaire Response as an Element of a Previously Tested Method." Journal of Applied Psychology 59(3):297-301.
Dillman, D. A., J. G. Gallegos, and J. H. Frey. 1976. "Reducing Refusal Rates for Telephone Interviews." Public Opinion Quarterly 40:66-78.
Dommeyer, C. J. 1985. "Does Response to an Offer of Mail Survey Results Interact with Questionnaire Interest?" Journal of the Market Research Society 27(1):27-38.
Down, P., and J. Kerr. 1986. "Recent Evidence on the Relationship between Anonymity and Response Variables for Mail Surveys." Journal of the Academy of Marketing Science 14:72-82.
Erdos, P. L., and J. Regier. 1977. "Visible vs. Disguised Keying on Questionnaires." Journal of Marketing Research 17:125-32.
Etzel, M. J., and B. J. Walker. 1974. "Effects of Alternative Follow-up Procedures on Mail Survey Response Rates." Journal of Applied Psychology 59(2):219-21.
Ferrell, O. C., T. L. Childers, and R. Ruekert. 1984. "Effects of Situational Factors on Mail Survey Response." In Educators' Conference Proceedings, ed. R. W. Belk et al., pp. 364-67. Chicago: American Marketing Association.
Ferris, A. L. 1951. "A Note on Stimulating Response to Questionnaires." American Sociological Review 16:247-49.
Finn, D. W. 1983. "Response Speeds, Functions, and Predictability in Mail Surveys." Journal of the Academy of Marketing Science 11:61-70.
Ford, N. M. 1967. "The Advance Letter in Mail Surveys." Journal of Marketing Research 4:202-4.
Ford, N. M. 1968. "Questionnaire Appearance and Response Rates in Mail Surveys." Journal of Advertising Research 8:43-45.
Francel, E. G. 1966. "Mail-administered Questionnaires: A Success Story." Journal of Marketing Research 3:89-92.
Frankel, L. R. 1960. "How Incentives and Subsamples Affect the Precision of Mail Surveys." Journal of Advertising Research 1:1-5.
Frazier, G., and K. Bird. 1958. "Increasing the Response of a Mail Questionnaire." Journal of Marketing 22:186-87.

Friedman, H. H., and A. J. San Augustine. 1979. "The Effects of a Monetary Incentive and the Ethnicity of the Sponsor's Signature on the Rate and Quality of Response to a Mail Survey." Journal of the Academy of Marketing Science 7(2):95-101.
Fuller, C. 1974. "Effect of Anonymity on Return Rate and Response Bias in a Mail Survey." Journal of Applied Psychology 59(3):292-96.
Furst, L. G., and W. P. Blitchington. 1979. "The Use of a Descriptive Cover Letter and Secretary Pre-letter to Increase Response Rate in a Mailed Survey." Personnel Psychology 32(1):155-60.
Futrell, C. M., and J. Swan. 1977. "Anonymity and Response by Salespeople to a Mail Questionnaire." Journal of Marketing Research 14:611-16.
Gilsan, G., and J. L. Grimm. 1982. "Improving Response Rate in an Industrial Setting: Will Traditional Variables Work?" Southern Marketing Association Proceedings 20:265-68.
Godwin, K. R. 1979. "The Consequences of Large Monetary Incentives in Mail Surveys of Elites." Public Opinion Quarterly 43(3):378-87.
Goodman, C. S. 1961. Cited by Armstrong, J. S., and E. J. Lusk. 1987. "Return Postage in Mail Surveys: A Meta-analysis." Public Opinion Quarterly 51:233-48.
Goodstadt, M. S., L. Chung, R. Kronitz, and G. Cook. 1977. "Mail Survey Response Rates: Their Manipulation and Impact." Journal of Marketing Research 14:391-95.
Gullahorn, J., and J. Gullahorn. 1963. "An Investigation of the Effects of Three Factors on Response to Mail Questionnaires." Public Opinion Quarterly 27:294-96.
Hackler, J. C., and P. Bourgette. 1973. "Dollars, Dissonance, and Survey Returns." Public Opinion Quarterly 37:276-81.
Hammond, E. C. 1959. "Inhalation in Relation to Type and Amount of Smoking." Journal of the American Statistical Association 54:35-49.
Hancock, J. W. 1940. "An Experimental Study of Four Methods of Measuring Unit Costs of Obtaining Attitude toward the Retail Store." Journal of Applied Psychology 24:213-30.
Hansen, R. A. 1980. "A Self-perception Interpretation of the Effect of Monetary and Nonmonetary Incentives on Mail Survey Response Behavior." Journal of Marketing Research 17:77-83.
Hansen, R. A., and L. M. Robinson. 1980. "Testing the Effectiveness of Alternative Foot-in-the-Door Manipulations." Journal of Marketing Research 17:359-64.
Harris, J. R., and H. J. Guffey. 1978. "Questionnaire Returns: Stamps versus Business Reply Envelopes Revisited." Journal of Marketing Research 15:290-93.
Harvey, L. 1986. "A Research Note on the Impact of Class-of-Mail on Response Rates to Mailed Questionnaires." Journal of the Market Research Society 29:299-300.
Hawes, J., and G. E. Kiser. 1981. "Additional Findings on the Effectiveness of Three Response-Inducement Techniques for a Mail Survey of a Commercial Population." Southern Marketing Association Proceedings 19:263-65.
Hawkins, D. J. 1979. "The Impact of Sponsor Identification and Direct Disclosure of Respondent Rights on the Quantity and Quality of Mail Survey Data." Journal of Business 52(4):577-90.
Heaton, E. E., Jr. 1965. "Increasing Mail Questionnaire Returns with a Preliminary Letter." Journal of Advertising Research 5:36-39.
Hendrick, C., R. Borden, M. Giesen, E. J. Murray, and B. A. Seyfried. 1972. "Effectiveness of Ingratiation Tactics in a Cover Letter on Mail Questionnaire Response." Psychonomic Science 26(6):349-51.
Hensley, W. E. 1974. "Increasing Response Rate by Choice of Postage Stamps." Public Opinion Quarterly 38:280-83.
Hewett, W. C. 1974. "How Different Combinations of Postage on Outgoing and Return Envelopes Affect Questionnaire Returns." Journal of the Market Research Society 16:49-50.
Hinrichs, J. R. 1975. "Factors Related to Survey Response Rates." Journal of Applied Psychology 60(2):249-51.
Horowitz, J. L., and W. E. Sedlacek. 1974. "Initial Returns on Mail Questionnaires: A Literature Review and Research Note." Research in Higher Education 2:361-67.
Houston, M. J., and R. W. Jefferson. 1975. "The Negative Effects of Personalization on Response Patterns in Mail Surveys." Journal of Marketing Research 12:114-17.
Huck, S. W., and E. M. Gleason. 1974. "Using Monetary Inducements to Increase Response Rates from Mailed Surveys." Journal of Applied Psychology 59(2):222-25.
Jones, W. 1979. "Generalizing Mail Survey Inducement Methods." Public Opinion Quarterly 43(1):102-11.
Kephart, W. M., and M. Bressler. 1958. "Increasing the Responses to Mail Questionnaires." Public Opinion Quarterly 22:123-32.
Kerin, R. A. 1983. Cited by Armstrong, J. S., and E. J. Lusk. 1987. "Return
Postage in Mail Surveys: A Meta-analysis." Public Opinion Quarterly 51:233-48.
Kerin, R. A., and M. G. Harvey. 1976. "Methodological Considerations in Corporate Mail Surveys: A Research Note." Journal of Business Research 4:277-81.
Kernan, J. B. 1971. "Are 'Bulk-Rate Occupants' Really Unresponsive?" Public Opinion Quarterly 35:420-22.
Kimball, A. E. 1961. "Increasing the Rate of Return in Mail Surveys." Journal of Marketing 25:63-65.
Labrecque, D. P. 1978. "A Response Rate Experiment Using Mail Questionnaires." Journal of Marketing 42:82-83.
Longworth, D. S. 1953. "Use of a Mailed Questionnaire." American Sociological Review 18:310-13.
Lusk, E. J., and K. S. Armstrong. 1982. Cited by Armstrong, J. S., and E. J. Lusk. 1987. "Return Postage in Mail Surveys: A Meta-analysis." Public Opinion Quarterly 51:233-48.
McCrohan, K. F., and L. S. Lowe. 1981. "A Cost/Benefit Approach to Postage Used on Mail Questionnaires." Journal of Marketing 45:130-33.
McDaniel, S. W., and C. P. Rao. 1981. "An Investigation of Respondent Anonymity's Effect on Mailed Questionnaire Response Rate and Quality." Journal of the Market Research Society 23(3):150-59.
McGinnis, M. A., and C. J. Hollon. 1977. "Mail Survey Response Rate and Bias: The Effect of Home versus Work Address." Journal of Marketing Research 14:383-84.
Martin, J. D., and J. P. McConnell. 1970. "Mail Questionnaire Response Induction: The Effect of Four Variables on the Response of a Random Sample to a Difficult Questionnaire." Social Science Quarterly 51:409-14.
Mason, W. S., R. J. Dressel, and R. K. Bain. 1961. "An Experimental Study of Factors Affecting Response to a Mail Survey of Beginning Teachers." Public Opinion Quarterly 25:296-99.
Matteson, M. T. 1974. "Type of Transmittal Letter and Questionnaire Color as Two Variables Influencing Response Rates in a Mail Survey." Journal of Applied Psychology 59:535-36.
Moore, C. C. 1941. "Increasing the Returns from Questionnaires." Journal of Educational Research 35:138-41.
Myers, J. H., and A. F. Haug. 1969. "How a Preliminary Letter Affects Mail Survey Returns and Costs." Journal of Advertising Research 9:37-39.
Newman, S. W. 1962. "Differences between Early and Later Respondents to a Mailed Survey." Journal of Advertising Research 2:37-39.
O'Connor, P. J., G. Sullivan, and W. Jones. 1981. "An Evaluation of the Characteristics of Response Quality Induced by Follow-up Survey Methods." Working Paper, University of Kentucky.
Parsons, R. J., and T. S. Medford. 1972. "The Effect of Advance Notice in Mail Surveys of Homogeneous Groups." Public Opinion Quarterly 36:258-59.
Perry, N. 1974. "Postage Combinations in Postal Questionnaire Surveys: Another View." Journal of the Market Research Society 16:199-210.
Peterson, R. A. 1975. "An Experimental Investigation of Mail Survey Responses." Journal of Business Research 3(3):199-210.
Pressley, M. M. 1978. "Care Needed When Selecting Response Inducements in Mail Surveys of Commercial Populations." Journal of the Academy of Marketing Science 6(4):336-43.
Pressley, M. M., and W. L. Tullar. 1984. "Postage as a Response Inducing Factor in Mail Surveys of Commercial Populations." Working Paper, College of Business, University of New Orleans, New Orleans, LA.
Price, D. O. 1950. "On the Use of Stamped Return Envelopes with Mail Questionnaires." American Sociological Review 15:672-73.
Pucel, D. J., H. F. Nelson, and D. N. Wheeler. 1971. "Questionnaire Follow-up Returns as a Function of Incentives and Responder Characteristics." Vocational Guidance Quarterly 19:188-93.
Roberts, R. E., O. F. McCrory, and R. N. Forthofer. 1978. "Further Evidence on Using a Deadline to Stimulate Responses to a Mail Survey." Public Opinion Quarterly 42:407-10.
Robertson, D. H., and D. N. Bellenger. 1978. "A New Method of Increasing Mail Survey Responses: Contributions to Charity." Journal of Marketing Research 15:632-33.
Robin, D. P., and C. G. Walters. 1976. "The Effect of Return Rate of Messages Explaining Monetary Incentives in Mail Questionnaire Studies." Journal of Business Communication 13(3):49-54.
Robinson, R. A., and P. Agisim. 1951. "Making Mail Surveys More Reliable." Journal of Marketing 15:415-24.
Roeher, G. A. 1963. "Effective Techniques in Increasing Response to Mailed Questionnaires." Public Opinion Quarterly 27:299-302.
Schewe, C. D., and N. G. Cournoyer. 1976. "Prepaid vs. Promised Monetary Incentives to Questionnaire Response: Further Evidence." Public Opinion Quarterly 40(1):105-7.
Sheth, J., and A. M. Roscoe, Jr. 1975. "Impact of Questionnaire Length, Follow-up Methods, and Geographical Location on Response Rate to a Mail Survey." Journal of Applied Psychology 60(2):252-54.
Simon, R. 1967. "Responses to Personal and Form Letters in Mail Surveys." Journal of Advertising Research 7:28-30.
Sletto, R. F. 1940. "Pretesting of Questionnaires." American Sociological Review 5:193-200.
Stafford, J. E. 1966. "Influence of Preliminary Contact on Mail Returns." Journal of Marketing Research 3:410-11.
Stevens, R. E. 1975. "Does Precoding Mail Questionnaires Affect Response Rates?" Public Opinion Quarterly 38(4):621-22.
Triball, K. 1984. Cited by Armstrong, J. S., and E. J. Lusk. 1987. "Return Postage in Mail Surveys: A Meta-analysis." Public Opinion Quarterly 51:233-48.
Veiga, J. F. 1974. "Getting the Mail Questionnaire Returned: Some Practical Research Considerations." Journal of Applied Psychology 59(2):217-18.
Venne, R. V. 1974. "Direct Mail Announcement of Agricultural Publications."
Bulletin 21. Department of Agriculture Journalism, College of Agriculture, University of Wisconsin-Madison.
Vocino, T. 1977. "Three Variables in Stimulating Responses to Mailed Questionnaires." Journal of Marketing 41:76-77.
Watson, J. J. 1965. "Improving the Response Rate in Mail Research." Journal of Advertising Research 5:45-50.
Weilbacher, W., and H. R. Walsh. 1952. "Mail Questionnaires and the Personalized Letter of Transmittal." Journal of Marketing 16:331-36.
Whitmore, W. J. 1976. "Mail Survey Premiums and Response Bias." Journal of Marketing Research 13:46-50.
Wildman, R. C. 1977. "Effects of Anonymity and Social Setting on Survey Responses." Public Opinion Quarterly 41:74-79.
Wiseman, F. 1972. "Methodological Bias in Public Opinion Surveys." Public Opinion Quarterly 36:105-8.
Wiseman, F. 1973. "Factor Interaction Effects in Mail Survey Response Rates." Journal of Marketing Research 10:330-33.
Wolfe, A. C. 1979. Cited by Armstrong, J. S., and E. J. Lusk. 1987. "Return Postage in Mail Surveys: A Meta-analysis." Public Opinion Quarterly 51:233-48.
Wolfe, A. C., and B. R. Trieman. 1979. "Postage Types and Response Rates in Mail Surveys." Journal of Advanced Research 1:12-18.
Worthen, B. R., and R. W. Valcarce. 1985. "Relative Effectiveness of Personalized and Form Covering Letters in Initial and Follow-up Mail Surveys." Psychological Reports 57:735-44.
Wotruba, T. R. 1966. "Monetary Inducements and Mail Questionnaire Response." Journal of Marketing Research 3:398-400.
Wynn, G., and S. W. McDaniel. 1985. "The Effect of Alternative Foot-in-the-Door Manipulations on Mailed Questionnaire Response Rate and Quality." Journal of the Market Research Society 27:15-26.

References

Armstrong, J. S., and E. J. Lusk. 1987. "Return Postage in Mail Surveys: A Meta-analysis." Public Opinion Quarterly 51:233-48.
Bangert-Drowns, R. L. 1986. "Review of Developments in Meta-analytic Methods." Psychological Bulletin 99:388-99.
Bullock, R. J., and D. J. Svyantek. 1985. "Analyzing Meta-analysis: Potential Problems, an Unsuccessful Replication, and Evaluation Criteria." Journal of Applied Psychology 70:108-15.
Childers, T. L. 1986. "Conceptualizing Mail Survey Response Behavior." Working Paper, University of Minnesota.
Childers, T. L., and S. J. Skinner. 1985. "Theoretical and Empirical Issues in the Identification of Survey Respondents." Journal of the Market Research Society 27:39-53.
Duncan, W. J. 1979. "Mail Questionnaires in Survey Research: A Review of Response Inducement Techniques." Journal of Management 5(1):39-55.
Fox, R. J., M. R. Crask, and J. Kim. 1988. "Mail Survey Response Rate: A Meta-analysis of Selected Techniques for Inducing Response." Public Opinion Quarterly 52:467-91.

Furse, D. H., and D. W. Stewart. 1982. "Monetary Incentives versus Promised Contribution to Charity: New Evidence on Mail Survey Response." Journal of Marketing Research 19:375-80.
Glass, G. V. 1977. "Integrating Findings: The Meta-analysis of Research." Review of Research in Education 5:351-79.

Goyder, J. C. 1982. "Further Evidence on Factors Affecting Response Rates to Mailed Questionnaires." American Sociological Review 47:550-53.
Hackler, J. C., and P. Bourgette. 1973. "Dollars, Dissonance, and Survey Returns." Public Opinion Quarterly 37:276-81.
Hansen, R. A. 1980. "A Self-perception Interpretation of the Effect of Monetary and Non-monetary Incentives on Mail Survey Respondent Behavior." Journal of Marketing Research 17:77-83.
Harvey, L. 1987. "Factors Affecting Response Rates to Mailed Questionnaires: A Comprehensive Literature Review." Journal of the Market Research Society 29(3):341-53.
Heberlein, T. A., and R. Baumgartner. 1978. "Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Analysis of the Published Literature." American Sociological Review 43:447-62.
Houston, M. J., and N. M. Ford. 1976. "Broadening the Scope of Methodological Research on Mail Surveys." Journal of Marketing Research 13:397-403.
Hunter, J. E., and F. L. Schmidt. 1990. Methods of Meta-analysis: Correcting Error and Bias in Research Findings. Newbury Park, CA: Sage.
Hunter, J. E., F. L. Schmidt, and G. B. Jackson. 1982. Meta-analysis: Cumulating Research Findings across Studies. Beverly Hills, CA: Sage.
Kanuk, L., and C. Berenson. 1975. "Mail Surveys and Response Rates: A Literature Review." Journal of Marketing Research 12:440-53.
Linsky, A. S. 1975. "Stimulating Responses to Mailed Questionnaires: A Review." Public Opinion Quarterly 39:82-101.
Reingen, P. H., and J. B. Kernan. 1977. "Compliance with an Interview Request: A Foot-in-the-Door, Self-perception Interpretation." Journal of Marketing Research 14:365-69.
Reingen, P. H., and J. B. Kernan. 1979. "More Evidence on Interpersonal Yielding." Journal of Marketing Research 16:588-93.
Riche, M. F. 1987. "Who Says Yes." American Demographics, p. 8.
Rosenthal, R. 1979. "The 'File Drawer Problem' and Tolerance for Null Results." Psychological Bulletin 86:638-41.
Rosenthal, R. 1984. Meta-analytic Procedures for Social Research. Beverly Hills, CA: Sage.
Wolf, F. M. 1986. Meta-analysis: Quantitative Methods for Research Synthesis. Beverly Hills, CA: Sage.
Worthen, B. R., and R. W. Valcarce. 1985. "Relative Effectiveness of Personalized and Form Covering Letters in Initial and Follow-up Mail Surveys." Psychological Reports 57:735-44.
Yu, J., and H. Cooper. 1983. "A Quantitative Review of Research Design Effects on Response Rates to Questionnaires." Journal of Marketing Research 20:36-44.
