Psychological Bulletin, 2013, Vol. 139, No. 3, 730–734

© 2013 American Psychological Association 0033-2909/13/$12.00 DOI: 10.1037/a0030447

REPLY

Meta-Analytic Estimates Predict the Effectiveness of Emotion Regulation Strategies in the “Real World”: Reply to Augustine and Hemenover (2013)

Eleanor Miles, Paschal Sheeran, and Thomas L. Webb

This document is copyrighted by the American Psychological Association or one of its allied publishers. This article is intended solely for the personal use of the individual user and is not to be disseminated broadly.

University of Sheffield

Augustine and Hemenover (2013) were right to state that meta-analyses should be accurate and generalizable. However, we disagree that our meta-analysis of emotion regulation strategies (Webb, Miles, & Sheeran, 2012) fell short in these respects. Augustine and Hemenover’s concerns appear to have accrued from misunderstandings of our inclusion criteria or from disagreements with methodological decisions that are crucial to the validity of meta-analysis. This response clarifies the bases of these decisions and discusses implications for the accuracy and validity of meta-analyses. Furthermore, we show that our findings are consistent with theoretical predictions and previous reviews, and we present new evidence that the effect sizes that we obtained are generalizable. In particular, we demonstrate that our estimates of the effectiveness of emotion regulation strategies reveal how well these strategies predict important emotional outcomes over 1 year.

Keywords: emotion regulation, reappraisal, suppression, distraction, concentration

Eleanor Miles, Paschal Sheeran, and Thomas L. Webb, Department of Psychology, University of Sheffield, Sheffield, England. Correspondence concerning this article should be addressed to Eleanor Miles, Paschal Sheeran, or Thomas L. Webb, Department of Psychology, University of Sheffield, Western Bank, Sheffield, S10 2TN, United Kingdom. E-mail: [email protected] or [email protected] or [email protected]

Let us begin by thanking Augustine and Hemenover (2013) for systematically revisiting a number of the decisions taken in our meta-analysis (Webb, Miles, & Sheeran, 2012). The issues that they raised have important implications for the accuracy and generalizability of meta-analyses in this area, and so we are grateful for the opportunity to discuss these. We dispute, however, the notion that our meta-analysis is inconsistent, biased, or inaccurate in any of the ways that Augustine and Hemenover (2013) proposed. In this response, we try to identify the misunderstandings that gave rise to their concerns and clarify the bases for key decisions made in our meta-analysis. We first address Augustine and Hemenover’s comments regarding the accuracy, comprehensiveness, and meaningfulness of our meta-analysis before turning to an empirical examination of the generalizability of our findings.

How Accurate Are Our Effect Sizes?

The method section of our original article was necessarily brief. Unfortunately, the need for brevity seems to have led to misunderstandings by Augustine and Hemenover (2013), which they have interpreted as errors, omissions, or bias on our part. First, Augustine and Hemenover asserted that over one third of the effect sizes in our meta-analysis were drawn from studies for which an additional effect size could have been computed. However, our inclusion criteria clearly stated that “a measure of emotion needed to be taken” (p. 783). The cognitive and performance outcomes identified by Augustine and Hemenover (e.g., endorsing negative and positive words as descriptive of the self, Moulds, Kandris, & Williams, 2007; memory for positive and negative words, Rusting & DeHart, 2000) were deliberately excluded from our effect size calculations because they were not measures of emotion (i.e., although these outcomes may be influenced by emotions, they do not themselves qualify as emotional responses).

This issue is conceptually related to Augustine and Hemenover’s (2013) second concern: our decision to compute effect sizes only for the emotion that was targeted by the regulation strategy. We took this decision in order to ensure a fair test of respective emotion regulation strategies. In the same way that one would not expect a behavior-change intervention geared at promoting condom use to also help participants to quit smoking, it seems misguided to judge an anger regulation strategy as less effective because it did not also decrease anxiety (Quartana & Burns, 2007). This decision was justified not only on the grounds of providing a fair test but also on empirical grounds; excluding nontarget effects did not influence effect size estimates, as we stated in our meta-analysis (see Webb et al., 2012, footnote 4).

Mauss and Robinson (2009) concluded that “there is no ‘gold standard’ measure of emotional responding. Rather, experiential, physiological, and behavioral measures are all relevant to understanding emotion” (p. 209). Whereas Augustine and Hemenover’s (2009) meta-analysis focused on self-report measures of emotion, we attempted to include all self-report, behavioral, and physiological measures reported by the primary studies that concerned emotional outcomes that were targeted by the regulation strategy. In our view, these decisions made for more accurate estimates of the effectiveness of emotion regulation strategies than would have been afforded by more diffuse or restrictive inclusion criteria.


How Comprehensive Was Our Review?

Augustine and Hemenover (2013) expressed three concerns about the comprehensiveness of our review. They suggested that (a) we omitted comparisons from primary studies that should have been included, (b) we failed to include studies and articles that they identified, and (c) our findings are biased by the inclusion of data from clinical samples that should have been excluded.

First, the comparisons that we are assumed to have omitted relate either to additional comparisons between regulation strategies or to alternative emotion inductions. Most of these comparisons did not meet our inclusion criteria, as they did not involve a manipulation of attentional deployment, cognitive change, or response modulation. Other comparisons were excluded so as not to violate the assumption of independence that is central to the validity of meta-analysis (Hunter & Schmidt, 1990). Unlike Augustine and Hemenover (2009, p. 1197), we did not include separate effect sizes for every comparison from a single study, in order to ensure that particular studies did not exert a disproportionate influence in computations of the average effect size. The Cochrane Handbook for systematic reviews (Higgins & Green, 2011) states that the same participants may contribute to multiple comparisons if, and only if, respective sample sizes are corrected; that is, adjusted to make certain that participants are not double-counted. We followed this procedure in our meta-analysis (Webb et al., 2012, p. 784). However, in some studies, the same participants contributed to multiple conditions relating to the same comparison. Where possible, we combined these conditions to create a single comparison (i.e., if two groups were given the same emotion induction and regulation strategy, then they were combined). Where conditions could not meaningfully be combined (e.g., Kim & Hamann, 2007), we selected only one of the comparisons.
Selection decisions were consistent across studies (i.e., we always selected the condition in which emotion was downregulated, or a negative affect induction was used). Although these decisions led to a loss of information, they were based on authoritative guidance (Higgins & Green, 2011) and were essential in order to ensure that the independence assumption was not violated. Including all possible comparisons, as Augustine and Hemenover (2013) proposed, would have compromised the accuracy of the effect sizes obtained in our meta-analysis.

Second, Augustine and Hemenover (2013) specified several studies that, in their view, should have been included in our meta-analysis. In fact, most of these studies did not meet our inclusion criteria (e.g., Augustine & Hemenover, 2008, did not manipulate emotion regulation using a form of attentional deployment, cognitive change, or response modulation). Similarly, several studies of expressive writing were excluded because the experimental and control groups did not undergo an equivalent emotion induction and thus could not legitimately be compared (e.g., when an experimental group wrote about an emotional event but the control group did not process any emotional event; Pennebaker & Francis, 1996). However, we appreciate their drawing our attention to the five studies that did not appear in our literature search¹ because they contained none of the search terms that we employed (e.g., because the nonstandard term “affect repair” was used in the reports instead of the term “emotion regulation”; Hemenover, Augustine, Shulman, Tran, & Barlett, 2008).² This gap in our computerized literature search serves to underline the importance of consistent and standardized use of terminology in research on emotion regulation (see Michie & Abraham, 2008, for discussion).

Third, Augustine and Hemenover (2013) suggested that, although clinical samples were expressly excluded from our review, some effect sizes were garnered from studies that used an extreme-groups or clinical-control design. In fact, several studies reported data from both clinical samples and nonclinical control groups (e.g., Watkins & Moulds, 2005), and so it was possible to include data from the respective control groups in our meta-analysis. Rather than describe the sample selected in each particular study in this reply, we reanalyzed our data to assess the impact of these studies on the observed effect sizes for the four main emotion regulation strategies in our meta-analysis. Excluding the studies that Augustine and Hemenover identified as problematic had a negligible impact (change in d+ < 0.02).

A final issue concerning comprehensiveness has to do with Augustine and Hemenover’s (2013) characterization of our meta-analysis as “broad” in contrast to their “focused” meta-analysis (Augustine & Hemenover, 2009). However, we did not have “the express purpose of being as broad as possible,” as Augustine and Hemenover (2013, p. 726) claimed. Our stated aim was to undertake a comprehensive test of the effectiveness of strategies from the process model of emotion regulation (i.e., attentional deployment, cognitive change, and response modulation). Augustine and Hemenover contended that focused analyses are more representative of the effect in question. This contention is difficult to sustain, however, given that Augustine and Hemenover’s meta-analysis used only 75 effect sizes to evaluate the effectiveness of 17 different emotion regulation strategies, and 10 strategies were evaluated on the basis of only one or two studies. We believe that our evaluation of the effectiveness of three strategies using 306 effect sizes should be viewed as a comprehensive test of the effectiveness of a focused set of emotion regulation strategies, namely, those described by the process model of emotion regulation (Gross, 1998a, 1998b).
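The sample-size correction described above, in which one group (e.g., a shared control condition) contributes to several comparisons without participants being double-counted, can be sketched as follows. This is our own illustration of the general procedure, not code from the meta-analysis; the function name and example numbers are invented.

```python
# Illustrative sketch of the correction discussed above: when one group
# serves k comparisons, its sample size is divided among them so that
# no participant is counted twice in the pooled analysis.
# The function name and all numbers here are invented for illustration.

def split_shared_group(n_shared: int, n_comparisons: int) -> list[int]:
    """Divide a shared group's n as evenly as possible across comparisons,
    preserving the total so participants are not double-counted."""
    base, remainder = divmod(n_shared, n_comparisons)
    return [base + 1 if i < remainder else base for i in range(n_comparisons)]

# A hypothetical control group of 30 shared by two comparisons:
print(split_shared_group(30, 2))  # -> [15, 15]
# An odd total still sums to the original n:
print(split_shared_group(31, 2))  # -> [16, 15]
```

The key property is that the corrected sample sizes sum to the original n, so the shared participants carry no more weight in the average effect size than they would in a single comparison.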

¹ To ensure that the omission of these studies did not influence our effect sizes, we computed an additional nine effect sizes and reanalyzed our data. The average effect sizes for the four main emotion regulation strategies in our meta-analysis were very similar when these additional comparisons were included (change in d+ < .01).
² The term “affect repair” returned only three psychology articles in a Web of Knowledge search on August 1, 2012 (Augustine & Hemenover, 2008, 2009; Hemenover et al., 2008), whereas the term “emotion regulation” returned 2,770 psychology articles.

How Meaningful Are Our Effect Sizes?

Augustine and Hemenover (2013) claimed that effect sizes based on comparisons between emotion regulation and control conditions should be viewed with caution because studies vary in their choice of control conditions. Their preferred method was to calculate effect sizes based on the difference between preregulation and postregulation measures of emotion (Augustine & Hemenover, 2009). However, preregulation measures of emotion are unnecessary if participants are randomly assigned to conditions that receive equivalent emotion inductions (as was the case in the studies that we included), because preregulation emotional responses can be assumed to be equivalent, regardless of the nature or effectiveness of the emotion induction.

We did, however, investigate how the choice of control conditions influenced effect sizes associated with different emotion regulation strategies. Three control conditions were found to significantly influence effect sizes in our meta-analysis (specifically, giving no instructions or asking participants not to regulate in a specific manner decreased the observed effect size, whereas asking participants to experience the emotion naturally increased the observed effect size). When these comparisons were excluded, effect sizes for the four main emotion regulation strategies in our meta-analysis were very similar (average change in d+ = .08). Thus, we feel that our method of calculating effect sizes is both conceptually and empirically sound.

The meaningfulness of effect sizes derived from pre–post regulation attempts compared to independent-groups designs can also be established on theoretical grounds. Specifically, the effect sizes observed in our review are in line with theoretical predictions derived from the process model of emotion regulation. Reappraisal and distraction were similarly effective strategies; these strategies were more effective than suppression, whereas rumination had a negative effect on emotional outcomes (see Table 1). Augustine and Hemenover’s (2009) findings, on the other hand, are at odds with the process model’s predictions: In their review, suppression was by far the most effective way to regulate affect (d+ = 2.02), reappraising emotions was no more effective than control conditions (d+ = 0.65 vs. d+ = 0.64), and rumination proved to be an effective emotion regulation strategy (d+ = 0.31).
Our findings also concur with the effect sizes observed in an independent meta-analysis concerning the effects of emotion regulation strategies on symptoms of psychopathology (Aldao, Nolen-Hoeksema, & Schweizer, 2010). In sum, there are several grounds for thinking that our approach produced effect sizes that meaningfully synthesize the effectiveness of different emotion regulation strategies.
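The contrast drawn in this section between the two ways of computing an effect size can be made concrete with a short sketch. These are the standard Cohen's d formulas for independent-groups and pre–post designs, not code from either meta-analysis, and the data are invented.

```python
# Standard-formula sketch of the two effect-size definitions contrasted
# above: a between-groups d (regulation vs. control, pooled SD) and a
# standardized pre-to-post change. All numbers below are invented.
from statistics import mean, stdev

def d_between(regulation: list[float], control: list[float]) -> float:
    """Cohen's d for two independent groups, using the pooled SD."""
    n1, n2 = len(regulation), len(control)
    s1, s2 = stdev(regulation), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(regulation) - mean(control)) / pooled_sd

def d_prepost(pre: list[float], post: list[float]) -> float:
    """Standardized pre-to-post change, scaled by the SD of the pre scores."""
    return (mean(post) - mean(pre)) / stdev(pre)

# With equivalent inductions and random assignment, the control group
# stands in for the missing pre measure:
print(round(d_between([4, 5, 6], [2, 3, 4]), 2))  # -> 2.0
print(round(d_prepost([2, 3, 4], [4, 5, 6]), 2))  # -> 2.0
```

The two definitions standardize against different baselines (a concurrent control group vs. the participants' own preregulation scores), which is why reviews that mix them can reach different conclusions about the same strategies.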

How Generalizable Are Our Effect Sizes?

If our meta-analysis is accurate, comprehensive, and meaningful, as we claim, then the findings should be generalizable. That is, based on the rationale that improved emotional outcomes should occur when participants frequently use more effective regulation strategies and infrequently use less effective strategies, the effect sizes observed in our review should predict consequential outcomes in an independent test.

We undertook this test via a prospective survey over 1 year. At baseline, participants (N = 81) reported how often they used emotion regulation strategies from the process model (i.e., distraction, concentration, reappraisal, and suppression) using 1–5 scales (not at all to a great deal). One year later, participants completed several measures of emotional well-being and regulation success, namely, the Positive and Negative Affect Schedule (PANAS; Watson, Clark, & Tellegen, 1988), the Hospital Anxiety and Depression Scale (HADS; Zigmond & Snaith, 1983), the Satisfaction With Life Scale (SWLS; Diener, Emmons, Larsen, & Griffin, 1985), and a three-item self-report measure of emotion regulation success (Niven, Totterdell, Miles, Webb, & Sheeran, 2012). Participants also nominated a close friend who independently rated participants’ emotion regulation success using Niven et al.’s (2012) measure.

To evaluate the predictive validity of our effect sizes, we weighted the frequency with which participants reported using each regulation strategy by the effect sizes obtained in our meta-analysis (e.g., ratings of how frequently suppression was used were multiplied by 0.16). We then computed a total score by summing the weighted ratings for distraction, rumination, reappraisal, and suppression such that higher scores indicate greater use of the more effective strategies specified by our meta-analysis. We also computed a score weighted by the effect sizes from Augustine and Hemenover’s (2009) meta-analysis, using the same method. Finally, regression analyses were conducted to determine whether these weighted usage ratings would predict 1-year outcomes, over and above the unweighted ratings.
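The weighting procedure just described can be sketched as follows. The effect-size weights are the d+ values reported in Table 1 for Webb et al. (2012), but the participant's ratings and the function name are our own invented illustration.

```python
# Sketch of the weighting procedure described above: each 1-5 frequency
# rating is multiplied by the meta-analytic effect size for that strategy
# (d+ values from Table 1, Webb et al., 2012), and the products are summed.
# The example participant's ratings are invented for illustration.

WEBB_WEIGHTS = {
    "reappraisal": 0.36,
    "suppression": 0.16,
    "rumination": -0.26,
    "distraction": 0.27,
}

def weighted_use_score(ratings: dict[str, int], weights: dict[str, float]) -> float:
    """Sum of strategy-use ratings weighted by strategy effectiveness;
    higher scores mean heavier use of the more effective strategies."""
    return sum(ratings[strategy] * weight for strategy, weight in weights.items())

# A hypothetical participant who mostly reappraises and distracts:
ratings = {"reappraisal": 5, "suppression": 2, "rumination": 1, "distraction": 4}
print(round(weighted_use_score(ratings, WEBB_WEIGHTS), 2))  # -> 2.94
```

Because rumination carries a negative weight, frequent rumination lowers the composite even when overall strategy use is high, which is what lets the weighted score outperform the raw frequency ratings as a predictor.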
As can be seen in Table 2, merely using the four emotion regulation strategies more frequently did not predict outcomes. However, when frequency of strategy use was weighted by the effectiveness of respective strategies specified by our meta-analysis, there were significant associations with improved emotional well-being, self-reported success at emotion regulation, and independent ratings of participants’ success at emotion regulation at 1-year follow-up. In contrast, Augustine and Hemenover’s (2009) weightings predicted none of the emotional outcomes and offered significantly weaker prediction of self-reported and peer-rated emotional control compared to our weightings (both ps < .04). Thus, we contend that findings from our meta-analysis are generalizable: Our effect sizes indicate how well emotion regulation strategies predict important “real-world” emotional outcomes 1 year later.

Table 1
Effect Sizes for Process Model Strategies in Three Meta-Analyses of Emotion Regulation Effectiveness

Strategy       Augustine & Hemenover (2009)   Aldao et al. (2010)   Webb et al. (2012)
Reappraisal    0.65 (k = 2)                   0.14 (k = 15)         0.36 (k = 99)
Suppression    2.02 (k = 2)                   −0.34 (k = 51)        0.16 (k = 102)
Rumination     0.31 (k = 11)                  −0.49 (k = 89)        −0.26 (k = 113)
Distraction    0.46 (k = 26)                  n/a                   0.27 (k = 102)

Note. The dependent variables are self-reported emotion in Augustine and Hemenover (2009); indices of psychopathology (reverse-scored for presentation purposes) in Aldao et al. (2010); and self-reported, behavioral, and physiological measures of emotion in Webb et al. (2012). Numbers in brackets indicate the number of independent tests (k) used to compute the effect size. n/a = not available.

Table 2
Regression of Emotional Outcomes at 1-Year Follow-Up on Strategy Use and Strategy Use Weighted by Meta-Analytic Estimates of Strategy Effectiveness

                              Augustine & Hemenover (2009)    Webb et al. (2012)
                              weighting                       weighting
Step/Predictors               β1      β2        ΔR²           β1      β2        ΔR²
Negative affect (PANAS)
  1. ER strategy use          −0.05   0.08      0.00          −0.05   0.14      0.00
  2. Weighted strategy use            −0.15     0.00                  −0.40***  0.13***
Depression (HADS)
  1. ER strategy use          −0.11   −0.16     0.01          −0.11   0.06      0.01
  2. Weighted strategy use            0.06      0.00                  −0.36**   0.10**
Anxiety (HADS)
  1. ER strategy use          0.06    0.16      0.00          0.06    0.21†     0.00
  2. Weighted strategy use            −0.11     0.00                  −0.32*    0.08*
Positive affect (PANAS)
  1. ER strategy use          0.23*   0.42*     0.05*         0.23*   0.13      0.05*
  2. Weighted strategy use            −0.22     0.02                  0.22†     0.04†
Life satisfaction (SWLS)
  1. ER strategy use          0.03    0.04      0.00          0.03    −0.07     0.00
  2. Weighted strategy use            −0.01     0.00                  0.22†     0.04†
Self-rated emotion control
  1. ER strategy use          0.04    −0.26     0.00          0.04    −0.16     0.00
  2. Weighted strategy use            0.36†     0.04†                 0.43***   0.15***
Peer-rated emotion control
  1. ER strategy use          0.17    −0.20     0.03          0.17    −0.10     0.03
  2. Weighted strategy use            0.44*     0.06*                 0.56***   0.25***

Note. β1 = beta at Step 1; β2 = beta at Step 2; PANAS = Positive and Negative Affect Schedule (Watson, Clark, & Tellegen, 1988); ER = emotion regulation; HADS = Hospital Anxiety and Depression Scale (Zigmond & Snaith, 1983); SWLS = Satisfaction With Life Scale (Diener, Emmons, Larsen, & Griffin, 1985). Self- and peer-rated emotion control were measured using Niven et al.’s (2012) scale. The table depicts results of regression analyses in which self-reported use of distraction, concentration, reappraisal, and suppression predicted outcomes at Step 1, and use of the same strategies weighted by their respective effect sizes from Augustine and Hemenover (2009) or Webb et al. (2012) was added at Step 2.
† p < .10. * p < .05. ** p < .01. *** p < .001.

Summary and Conclusion

The aim of our meta-analysis was to investigate the effectiveness of strategies derived from the process model of emotion regulation. We adopted a comprehensive and methodologically rigorous approach to extracting effect sizes that generated accurate and meaningful estimates of the effectiveness of different strategies. In support of this contention, our effect sizes are consistent with theoretical predictions and previous research and are predictive of emotional outcomes in the “real world.”

Many potential moderators of the effectiveness of emotion regulation strategies remain to be investigated, including the role of individual differences, habits, and interpersonal processes. Meta-analysis has the potential to illuminate these avenues of investigation, and we hope that this reply will prove useful to future researchers who take on this task.

References

Aldao, A., Nolen-Hoeksema, S., & Schweizer, S. (2010). Emotion regulation strategies across psychopathology: A meta-analytic review. Clinical Psychology Review, 30, 217–237. doi:10.1016/j.cpr.2009.11.004
Augustine, A. A., & Hemenover, S. H. (2008). Extraversion and the consequences of social interaction on affect repair. Personality and Individual Differences, 44, 1151–1161. doi:10.1016/j.paid.2007.11.009
Augustine, A. A., & Hemenover, S. H. (2009). On the relative effectiveness of affect regulation strategies: A meta-analysis. Cognition & Emotion, 23, 1181–1220. doi:10.1080/02699930802396556
Augustine, A. A., & Hemenover, S. H. (2013). Accuracy and generalizability in summaries of affect regulation strategies: Comment on Webb, Miles, and Sheeran (2012). Psychological Bulletin, 139, 725–729. doi:10.1037/a0030026
Diener, E., Emmons, R. A., Larsen, R. J., & Griffin, S. (1985). The Satisfaction With Life Scale. Journal of Personality Assessment, 49, 71–75. doi:10.1207/s15327752jpa4901_13
Gross, J. J. (1998a). Antecedent- and response-focused emotion regulation: Divergent consequences for experience, expression, and physiology. Journal of Personality and Social Psychology, 74, 224–237. doi:10.1037/0022-3514.74.1.224
Gross, J. J. (1998b). The emerging field of emotion regulation: An integrative review. Review of General Psychology, 2, 271–299. doi:10.1037/1089-2680.2.3.271
Hemenover, S. H., Augustine, A. A., Shulman, T., Tran, T. Q., & Barlett, C. P. (2008). Individual differences in negative affect repair. Emotion, 8, 468–478. doi:10.1037/1528-3542.8.4.468
Higgins, J. P. T., & Green, S. (Eds.). (2011). Cochrane handbook for systematic reviews of interventions (Version 5.1.0). Retrieved from http://www.cochrane-handbook.org
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage.
Kim, S. H., & Hamann, S. (2007). Neural correlates of positive and negative emotion regulation. Journal of Cognitive Neuroscience, 19, 776–798. doi:10.1162/jocn.2007.19.5.776
Mauss, I. B., & Robinson, M. D. (2009). Measures of emotion: A review. Cognition & Emotion, 23, 209–237. doi:10.1080/02699930802204677
Michie, S., & Abraham, C. (2008). Advancing the science of behaviour change: A plea for scientific reporting. Addiction, 103, 1409–1410. doi:10.1111/j.1360-0443.2008.02291.x
Moulds, M. L., Kandris, E., & Williams, A. D. (2007). The impact of rumination on memory for self-referent material. Memory, 15, 814–821. doi:10.1080/09658210701725831
Niven, K., Totterdell, P., Miles, E., Webb, T. L., & Sheeran, P. (2012). Achieving the same for less: Improving mood depletes blood glucose for people with poor (but not good) emotion control. Cognition & Emotion. Advance online publication. doi:10.1080/02699931.2012.679916
Pennebaker, J. W., & Francis, M. E. (1996). Cognitive, emotional, and language processes in disclosure. Cognition & Emotion, 10, 601–626. doi:10.1080/026999396380079
Quartana, P. J., & Burns, J. W. (2007). Painful consequences of anger suppression. Emotion, 7, 400–414. doi:10.1037/1528-3542.7.2.400
Rusting, C. L., & DeHart, T. (2000). Retrieving positive memories to regulate negative mood: Consequences for mood-congruent memory. Journal of Personality and Social Psychology, 78, 737–752. doi:10.1037/0022-3514.78.4.737
Watkins, E., & Moulds, M. (2005). Distinct modes of ruminative self-focus: Impact of abstract versus concrete rumination on problem solving in depression. Emotion, 5, 319–328. doi:10.1037/1528-3542.5.3.319
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54, 1063–1070. doi:10.1037/0022-3514.54.6.1063
Webb, T. L., Miles, E., & Sheeran, P. (2012). Dealing with feeling: A meta-analysis of the effectiveness of strategies derived from the process model of emotion regulation. Psychological Bulletin, 138, 775–808. doi:10.1037/a0027600
Zigmond, A. S., & Snaith, R. P. (1983). The Hospital Anxiety and Depression Scale. Acta Psychiatrica Scandinavica, 67, 361–370. doi:10.1111/j.1600-0447.1983.tb09716.x

Received August 21, 2012
Accepted August 27, 2012
