Measuring Internal Consistency of Community Engagement Using the ALPHA Option of PROC CORR

Renee Gennarelli; Melody S. Goodman, PhD
Division of Public Health Sciences, Department of Surgery, and Alvin J. Siteman Cancer Center
Washington University School of Medicine, St. Louis, MO, USA
ABSTRACT

Comprehensive cancer centers have been mandated to engage communities in their work; thus, measurement of community engagement in projects is a priority area. Siteman Cancer Center's Program for the Elimination of Cancer Disparities (PECaD) projects seek to align with 11 engagement principles (EPs) that have been previously developed in the literature. Fifty community members participating in a PECaD pilot project were administered a survey with 50 questions on community engagement to evaluate how well the project aligns with the EPs. Each EP is measured by 3-6 questions; Cronbach's alpha is used to examine the internal consistency of the items that measure a single EP. PROC CORR with the ALPHA option is used to calculate Cronbach's alpha for questions that have been identified as relating to the same EP. In addition to an overall Cronbach's alpha, this also produces alpha values computed with each item removed in turn, which allows items that lack internal consistency with the other items to be identified and removed from the assessment. Using α > 0.70 as an acceptable level, a lack of internal consistency was found for 6 of the 16 EP item groups examined. Four of the quality scale item groups showed internal consistency, compared with 6 of the quantity scale item groups. Using SAS® to determine Cronbach's alpha is a useful tool to assess reliability and develop appropriate measures for assessing community engagement.
INTRODUCTION

Comprehensive cancer centers have been mandated to engage communities in their work. Thus, the measurement of community engagement in projects supported by cancer centers is a priority area. This paper discusses how to examine internal consistency in order to develop measures that accurately determine how well projects align with specified engagement principles, and it shows that using SAS® to determine Cronbach's alpha is a useful way to assess reliability and develop internally consistent measures for assessing community engagement.
Community engagement is a powerful vehicle for bringing about changes that can improve community health [1]; engaging community members in the research process is often the missing link to improving the quality and outcomes of health promotion activities, disease prevention initiatives, and research studies [2,3]. Community engagement requires a long-term process that builds trust, values the contributions of all stakeholders, and generates a collaborative framework [4]. Engaging marginalized communities to collaborate with researchers helps to successfully address health concerns unique to those communities [5,6].

Siteman Cancer Center's Program for the Elimination of Cancer Disparities (PECaD) works with communities to reduce health disparities through outreach, education, and training. PECaD projects seek to align with 11 engagement principles (EPs) for community-based participatory research (CBPR) that have been previously developed in the literature [7,8,9]. The 11 CBPR engagement principles utilized by PECaD are:

1. Focus on local relevance and determinants of health
2. Acknowledge the community
3. Disseminate findings and knowledge gained to all partners
4. Seek and use the input of community partners
5. Involve a cyclical and iterative process in pursuit of objectives
6. Foster co-learning, capacity building, and co-benefit for all partners
7. Build on strengths and resources within the community
8. Facilitate collaborative, equitable partnerships
9. Integrate and achieve a balance of all partners
10. Involve all partners in the dissemination process
11. Plan for a long-term process and commitments
Community-Based Participatory Research (CBPR) engages communities as partners in the research process. Those who participate in CBPR share their knowledge and experience to help identify key problems and develop culturally sensitive research questions [10].

The Community Research Fellows Training (CRFT) program is a current PECaD pilot project that focuses on enhancing the infrastructure for CBPR and seeks to promote the role of racial, ethnic, and other underserved populations in the research enterprise. This will be done by increasing CBPR between researchers at Washington University in St. Louis, community-based organizations, and community health workers to address health disparities in the St. Louis metropolitan area. Fifty community members were selected to participate in this 15-week training program.
METHODS

The 50 fellows were administered a baseline assessment survey with 50 questions on community engagement pertaining to specific EPs to evaluate how well the project aligns with the principles. If the questions are found to align well with the 11 principles, the survey will be disseminated as a standardized tool to all PECaD projects as a way of measuring community engagement within each project. The CRFT fellows serve as the sample for this pilot study on quantitatively measuring community engagement.

The CRFT baseline assessment included questions on demographic information, desired CRFT outcomes, knowledge of research terms, medical trust, and community engagement. For the purpose of this study, only the questions on community engagement were used for the internal consistency analysis. These 50 items were subcomponents of questions 47-50, and each pertained to a specific EP. In addition, the questions had Likert scale response options, with some questions measuring quality and others measuring quantity. Each engagement principle is measured by 3-6 quality items and a corresponding 3-6 quantity items. Eighteen of the quality questions had response options of 1=Strongly Disagree, 2=Disagree, 3=Neither Agree nor Disagree, 4=Agree, 5=Strongly Agree. The remaining 7 quality questions had response options of 1=Very Poorly, 2=Poorly, 3=Adequately, 4=Well, 5=Very Well. The 25 quantity questions had response options of 1=Never, 2=Rarely, 3=Somewhat, 4=Often, 5=Always.

INSTRUMENT EVALUATION

The internal consistency of an instrument reflects the amount of correlation among the items that make up the measurement. Cronbach's alpha is used to assess reliability by measuring the degree to which different items are correlated and measure a single engagement principle. This, in turn, measures the internal consistency of the items.
For this study, it is of interest to calculate Cronbach's alpha for each set of community engagement questions that are meant to measure the same engagement principle. In general, Cronbach's alpha is a function of the average pairwise correlation among items and the number of items in the analysis. While there is no definitive value for showing internal consistency using the Cronbach's alpha statistic, it is widely accepted that in the early stages of validation research α should exceed 0.70 [11].
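Concretely, the standardized alpha statistic can be written as α = k·r̄ / (1 + (k−1)·r̄), where k is the number of items and r̄ is the average pairwise inter-item correlation. The following is a rough illustration in plain Python (not SAS), with a made-up correlation value, of how alpha depends on both quantities:

```python
def standardized_alpha(k, r_bar):
    """Standardized Cronbach's alpha from the number of items (k)
    and the average pairwise inter-item correlation (r_bar)."""
    return (k * r_bar) / (1 + (k - 1) * r_bar)

# Three items with an average inter-item correlation of 0.6:
print(round(standardized_alpha(3, 0.6), 3))  # 0.818
```

Note how alpha grows with the number of items: the same average correlation of 0.6 yields roughly 0.88 with five items instead of three.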
WHAT YOU NEED TO KNOW ABOUT PROC CORR

SAS® can easily calculate the Cronbach's alpha statistic using the ALPHA option of PROC CORR.
proc corr data=data_name alpha nomiss;
   var var1 var2 var3;
run;

The NOMISS option should be used to exclude observations with missing values from the calculation. This is necessary because, by default, PROC CORR uses pairwise deletion for missing values, which could lead to the correlation statistics being calculated from different numbers of observations. The NOMISS option instead implements listwise deletion, which excludes any observation with a missing value from the analysis. This ensures that the same number of observations is used across all variables included in the calculation.

In addition to properly handling missing values, it is also important to ensure that all items in the analysis have the same scale. PROC CORR assumes that all items are scored in the same direction. Therefore, items may need to be reverse-scored or rescaled before analysis. For example, if one item uses 1=Strongly Disagree…5=Strongly Agree and another item uses 1=Strongly Agree…5=Strongly Disagree, one of the items will need to be reverse-scored.

As with most statistical calculations, larger sample sizes lead to a more precise Cronbach's alpha estimate. However, previous studies have shown that samples of n=30 yield estimates nearly as precise as a larger sample of n=200 when the number of items used to calculate α is at least 5. Additionally, α calculations including as few as 2 items can be sufficient when n=30 if the average intercorrelation among items is at least 0.70 [12]. This is a particular strength of Cronbach's alpha, especially for early stage validation research where sample sizes are usually small. Cronbach's alpha estimates with small sample sizes can be further validated with 95% confidence limits for the alpha statistic. While confidence limits are not built into the ALPHA option of PROC CORR, many macros exist that will output confidence limits for Cronbach's alpha [13].
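For a 1-5 Likert item, reverse-scoring amounts to recoding each response as (minimum + maximum) − response, i.e., 6 − response. A minimal sketch in Python rather than SAS (in SAS this would be a one-line DATA step assignment):

```python
def reverse_score(responses, low=1, high=5):
    """Reverse-score a Likert item (1 <-> 5, 2 <-> 4, 3 unchanged).
    Missing responses (None) are passed through untouched."""
    return [None if r is None else (low + high) - r for r in responses]

# An item scored 1=Strongly Agree ... 5=Strongly Disagree, recoded to
# match items scored 1=Strongly Disagree ... 5=Strongly Agree:
print(reverse_score([1, 5, 3, None, 4]))  # [5, 1, 3, None, 2]
```

Applying the function twice returns the original responses, which is a quick way to sanity-check the recoding.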
INTERPRETING PROC CORR OUTPUT

SAS® outputs Cronbach's alpha statistics for the original distribution of the items (Raw Variables) and a standardized distribution (Std. Variables). For the standardized output, each item is transformed to a mean of 0 and a standard deviation of 1. The standardized output is more relevant when items have different response options; results are similar for the raw and standardized variables when all items have the same response options. For the purpose of this study, we use the standardized output due to the two possible response scales for the quality-based questions.

The internal consistency of a single item as compared to all other items in a group can also be assessed with Cronbach's alpha. This is done by comparing the Cronbach's alpha value based on all items in the analysis against the value based on all items with that one item removed. The "Correlation with Total" column within the "Cronbach Coefficient Alpha with Deleted Variable" table gives the item-total correlation coefficient for each item in the analysis. Item-total correlation is the Pearson correlation between a single item and the total score based on all other items.

Example 1: Questions 47g, 47h, and 47q are quality scale items that were developed to measure EP 3: Disseminate findings and knowledge gained to all partners. Cronbach's alpha was calculated for these items to measure internal consistency.

proc corr data=baselineclean alpha nomiss;
   var quest_47g quest_47h quest_47q;
run;

Cronbach Coefficient Alpha
Variables       Alpha
Raw             0.845122
Standardized    0.843521

Figure 1.0
Cronbach Coefficient Alpha with Deleted Variable
                 Raw Variables             Standardized Variables
Deleted          Correlation               Correlation
Variable         with Total    Alpha       with Total    Alpha
quest_47g        0.813733      0.684890    0.799779      0.691834
quest_47h        0.787781      0.709097    0.781867      0.710115
quest_47q        0.560766      0.914794    0.561449      0.917745

Figure 1.1
As seen in Figure 1.0, the overall standardized Cronbach's alpha is .843, but the value increases to .9177 when question 47q is deleted (Figure 1.1). This indicates stronger internal consistency among the two items (47g and 47h) than among all three. However, the original value of .843 is well above the desired threshold of .70, which suggests that there would still be sufficient internal consistency if question 47q remained in the measure. It is up to the researcher's judgement to decide whether an increase in overall alpha from .843 to .9177 is large enough to warrant removal of the question. In this case, question 47q also has an item-total correlation of .561 with the other two items, which is somewhat lower than the other two item-total correlations and might further suggest a lack of internal consistency.

If Cronbach's alpha drastically decreases when a variable is deleted, the item has strong internal consistency with the other items and should stay in the questionnaire. If Cronbach's alpha significantly increases when an item is removed, the item lacks internal consistency with the other items in the analysis and should probably be removed from the questionnaire. However, it is important to note that items that lack internal consistency should be removed in a stepwise fashion, because all item-total correlation coefficients are affected when an item is removed. If more than one item in an analysis appears to lack internal consistency, only the least consistent item (the item that causes alpha to increase the most when it is deleted) should be removed. The analysis should then be repeated with the remaining items. If a second item still shows a lack of internal consistency, it would be appropriate to eliminate that item and once again rerun the analysis. Once stepwise deletion is complete, the remaining items can be said to have strong internal consistency.
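The stepwise procedure just described is straightforward to sketch outside of SAS. The following illustrative Python code (made-up data and function names, using the standardized-alpha formula rather than PROC CORR) drops the least consistent item one at a time until alpha reaches the threshold or no deletion helps:

```python
from itertools import combinations
from statistics import mean

def pearson(x, y):
    """Pearson correlation of two equal-length numeric lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def std_alpha(items):
    """Standardized Cronbach's alpha: k*r_bar / (1 + (k-1)*r_bar)."""
    k = len(items)
    r_bar = mean(pearson(a, b) for a, b in combinations(items.values(), 2))
    return (k * r_bar) / (1 + (k - 1) * r_bar)

def stepwise_prune(items, threshold=0.70):
    """Drop one item at a time -- whichever deletion raises alpha the most --
    until alpha meets the threshold or no single deletion improves it."""
    items = dict(items)
    while len(items) > 2 and std_alpha(items) < threshold:
        best = max(items, key=lambda name:
                   std_alpha({k: v for k, v in items.items() if k != name}))
        if std_alpha({k: v for k, v in items.items() if k != best}) <= std_alpha(items):
            break  # no single deletion improves alpha
        del items[best]
    return items

# Two consistent items and one inconsistent one (made-up responses):
data = {"q1": [1, 2, 3, 4, 5], "q2": [1, 2, 3, 4, 5], "q3": [5, 1, 4, 2, 3]}
print(sorted(stepwise_prune(data)))  # ['q1', 'q2']
```

With the toy data, q3 correlates negatively with the other two items, so it is dropped first and the remaining pair reaches α = 1.0.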
As seen in Example 1, where all items met the .70 threshold, questions 47g, 47h, and 47q all reliably measure EP 3 and would be appropriate items to include in the standardized tool to measure community engagement.

RESULTS

Due to time constraints when developing the baseline assessment used for this analysis, questions were not developed to measure EP 7, EP 10, and EP 11. Questions pertaining to these principles will be developed before the next stage of validation testing. Additionally, as previously mentioned, each of the 8 EPs measured had both quality-based and quantity-based questions. Therefore, 16 total Cronbach's alpha calculations were performed: 8 for the quality-based questions and 8 for the quantity-based questions.

A lack of internal consistency was found for at least one of the questions in 6 of the 16 EP question groups when using an acceptance level of α > .70. All of the questions pertaining to Engagement Principles 1, 3, 4, and 5 were found to be internally consistent. The questions pertaining to the remaining 4 EPs, however, showed a lack of internal consistency. Some examples, along with further interpretation of the output from PROC CORR, follow.

EP 2: Acknowledge the community

Cronbach Coefficient Alpha
Variables       Alpha
Raw             0.683646
Standardized    0.693268

Figure 2.0
Cronbach Coefficient Alpha with Deleted Variable
                 Raw Variables             Standardized Variables
Deleted          Correlation               Correlation
Variable         with Total    Alpha       with Total    Alpha
quest_47d        0.551241      0.598152    0.628391      0.582482
quest_47e        0.464718      0.632920    0.517466      0.621541
quest_47f        0.451782      0.632614    0.520512      0.620497
quest_48j        0.124240      0.727400    0.101343      0.749692
quest_48k        0.425549      0.644014    0.372120      0.669503
quest_48l        0.525059      0.600466    0.446179      0.645512

Figure 2.1
Figure 2.0 shows that the standardized Cronbach's alpha statistic for the quality scale items pertaining to EP 2 is slightly below the desired α > .70 level. Removing question 48j increases α to .75, according to Figure 2.1. While the increase is slight, the extremely low item-total correlation for question 48j further supports the decision to remove the item from the measure, as it is evidently not internally consistent with the other items assessing EP 2.

EP 6: Foster co-learning, capacity building, and co-benefit for all partners

Cronbach Coefficient Alpha
Variables       Alpha
Raw             0.306600
Standardized    0.409308

Figure 3.0
Cronbach Coefficient Alpha with Deleted Variable
                 Raw Variables             Standardized Variables
Deleted          Correlation               Correlation
Variable         with Total    Alpha       with Total    Alpha
quest_47i        0.383355      -.232172    0.508642      -.257660
quest_48b        -.065321      0.810631    -.070006      0.817743
quest_47j        0.332702      -.028687    0.411414      -.029718

Figure 3.1
Figure 3.0 shows that the standardized Cronbach's alpha statistic for the quality scale items pertaining to EP 6 is drastically lower than the desired α > .70 level, which indicates a severe lack of consistency among the 3 questions. However, as can be seen in Figure 3.1, removing question 48b increases α to .82, which would then show internal consistency among the two remaining items. The negative item-total correlation for question 48b further justifies its removal. Extreme results such as these might warrant further investigation to understand how to formulate better items to assess EP 6. When the analysis is rerun with question 48b removed, the item-total correlations for the remaining two items also increase (Figure 4.1), along with the standardized alpha (Figure 4.0).

Cronbach Coefficient Alpha
Variables       Alpha
Raw             0.810631
Standardized    0.817743

Figure 4.0
Cronbach Coefficient Alpha with Deleted Variable
                 Raw Variables             Standardized Variables
Deleted          Correlation               Correlation
Variable         with Total    Alpha       with Total    Alpha
quest_47i        0.691679      .           0.691679      .
quest_47j        0.691679      .           0.691679      .

Figure 4.1
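The missing alpha values (printed as ".") in Figure 4.1 are expected: deleting either item would leave a single item, for which alpha is undefined. Note also that with only two items, each item-total correlation is simply the correlation between the two items, and standardized alpha reduces to 2r / (1 + r). A quick arithmetic cross-check, in plain Python rather than SAS:

```python
# With two items, standardized Cronbach's alpha reduces to 2r / (1 + r),
# where r is the inter-item correlation (here equal to the reported
# item-total correlation, since the "total" is just the other item).
r = 0.691679
alpha = 2 * r / (1 + r)
print(round(alpha, 6))  # 0.817743 -- matches the standardized alpha in Figure 4.0
```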
DISCUSSION

A lack of internal consistency was also found for the quantity scale items pertaining to EP 6, as well as for both the quantity and quality scale items for EP 8 and the quality scale items for EP 9. Four of the quality scale item groups showed internal consistency, compared with 6 of the quantity scale item groups. This suggests that questions with quantity-related response options might more accurately measure the engagement principles or, at the very least, offer clearer response options. However, it is important to note that all quantity scale items shared a single response scale, while the quality scale items had two possible response scales, which could also have contributed to the stronger internal consistency of the quantity scale items.
The questions that were found to lack internal consistency will still be included in the next round of testing, which will be administered as part of a mid-training evaluation for the fellows in the CRFT program and members of the CRFT Community Advisory Board (CAB). After this second round of evaluation is complete, questions that still lack internal consistency will be removed from the measure. Once the draft items are finalized, the engagement principle assessment questions will be disseminated to other PECaD projects for further validation testing.
CONCLUSIONS

Using SAS® to determine Cronbach's alpha is a useful way to assess reliability and develop appropriate quantitative measures for assessing community engagement. PROC CORR allows for quick and easy analysis of the internal consistency of survey questions. Validation of surveys is important to ensure accurate results when performing program evaluations. The ALPHA option of PROC CORR is a reliable way to measure internal consistency that can be applied to many fields beyond the community engagement focus of this paper.
REFERENCES

1. Fawcett SB, Paine-Andrews A, Francisco VT, et al. Using empowerment theory in collaborative partnerships for community health and development. American Journal of Community Psychology. 1995;23(5):677-97. Available at: http://www.ncbi.nlm.nih.gov/pubmed/8851345.
2. Minkler M. Ethical challenges for the "outside" researcher in community-based participatory research. Health Education & Behavior. 2004;31(6):684.
3. Minkler ME, Wallerstein NE. Community Based Participatory Research for Health. San Francisco, CA: Jossey-Bass; 2003.
4. Butterfoss FD, Francisco VT. Evaluating community partnerships and coalitions with practitioners in mind. Health Promotion Practice. 2004;5(2):108-14. Available at: http://www.ncbi.nlm.nih.gov/pubmed/15090164. Accessed September 23, 2011.
5. Israel BA, Krieger J, Vlahov D, et al. Challenges and facilitating factors in sustaining community-based participatory research partnerships: lessons learned from the Detroit, New York City and Seattle Urban Research Centers. Journal of Urban Health. 2006;83(6):1022-1040.
6. Baker EL, White LE, Litchtveld M. Reducing health disparities through community-based research. Public Health Reports. 2001;116:517-519.
7. Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: assessing partnership approaches to improve public health. Annual Review of Public Health. 1998;19:173-202.
8. Israel BA, Eng E, Schulz AJ, Parker EA. Methods in Community-Based Participatory Research for Health. San Francisco, CA: Jossey-Bass; 2005.
9. Minkler M, Wallerstein N, eds. Community-Based Participatory Research for Health: From Process to Outcomes (2nd edition). San Francisco, CA: Jossey-Bass; 2008.
10. Minkler M, Glover Blackwell A, Thompson M, Tamir H. Community-based participatory research: implications for public health funding. American Journal of Public Health. 2003;93(8):1210-1213. doi:10.2105/AJPH.93.8.1210.
11. Nunnally JC. Psychometric Theory (2nd ed.). New York: McGraw-Hill; 1978.
12. Iacobucci D, Duhachek A. Advancing alpha: measuring reliability with confidence. Journal of Consumer Psychology. 2003;13:478-487.
13. Kromrey JD, Romano J, Hibbard S. "Alpha_CI: A SAS® macro for computing confidence intervals for coefficient alpha." SAS Global Forum 2008, Paper 230-2008. Available at: http://www2.sas.com/proceedings/forum2008/230-2008.pdf.
ACKNOWLEDGMENTS

The work of Dr. Goodman is supported by National Institutes of Health, National Cancer Institute grant U54CA153460 and the Washington University School of Medicine Faculty Diversity Scholars Program.

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are registered trademarks or trademarks of their respective companies.
CONTACT INFORMATION

Your comments and questions are valued and encouraged. Contact the authors at:

Renee Gennarelli
Division of Public Health Sciences, Department of Surgery
Washington University in St. Louis School of Medicine
660 S. Euclid Avenue, Campus Box 8100
St. Louis, MO 63110
Phone: 314-747-2385
E-mail: [email protected]

Melody S. Goodman, MS, PhD
Division of Public Health Sciences, Department of Surgery
Washington University in St. Louis School of Medicine
660 S. Euclid Avenue, Campus Box 8100
St. Louis, MO 63110
Phone: 314-362-1183
E-mail: [email protected]