Letters to the Editor
...ate public health and safety activities despite limited resources.

Gary M. Goldbaum, MD, MPH
David Frinel

Requests for reprints should be sent to Gary M. Goldbaum, MD, MPH, AIDS Prevention Project, 1116 Summit Avenue, Suite 200, Seattle, WA 98101.
References
1. Greenland S, Robins JM. Estimation of a common effect parameter from sparse follow-up data. Biometrics. 1985;41:55-68.
2. Vogt BM, Sorensen JH. Evacuation in Emergencies: An Annotated Guide to Research. Oak Ridge, TN: Oak Ridge National Laboratories; 1986. Report No. ORNL/TM-10277.
Music Preference as a Diagnostic Indicator of Adolescent Drug Use

Brown and Hendee¹ suggest that music preference is a sign of the mental health of adolescents and that certain music preferences may be symptoms of immersion in a drug-using subculture. We examined the ability of musical preferences to indicate drug use exposure in an exhaustive sample of 758 7th- and 10th-grade adolescents from four randomly selected urban and rural public junior high and high schools in the Los Angeles area, using a diagnostic decision-making approach.²

Students completed an anonymous self-report survey that asked them to list, in an open-ended format, the musical groups that they and their group of friends were most likely to listen to. These names were coded into eight different musical styles with 90% interrater agreement. Another item instructed them to indicate on a checklist which of seven different substances they or their friends might use. The drugs checked on the list were coded as seven binary items ("exposed" or not). For each combination of music style and drug type, sensitivity was calculated as the proportion of those exposed to the drug who preferred the particular music style. False positive rates were calculated as the proportion of those not exposed to the drug who also preferred that particular music style. Sensitivity and false positive values were then compared with the base rate (overall) popularity of each music style to determine significant deviations from chance.
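To make the computation just described concrete, here is a minimal Python sketch (ours, not the authors'; the function and variable names are hypothetical) that derives all three quantities from paired binary indicators:

    # Sketch of the music-style x drug-type computation described above.
    # prefers_style and exposed are parallel 0/1 indicators, one per student.
    def diagnostic_rates(prefers_style, exposed):
        n = len(exposed)
        n_exposed = sum(exposed)                  # students exposed to the drug
        n_unexposed = n - n_exposed
        true_pos = sum(p for p, e in zip(prefers_style, exposed) if e)
        false_pos = sum(p for p, e in zip(prefers_style, exposed) if not e)
        sensitivity = true_pos / n_exposed        # P(prefers style | exposed)
        false_pos_rate = false_pos / n_unexposed  # P(prefers style | not exposed)
        base_rate = sum(prefers_style) / n        # overall popularity of the style
        return sensitivity, false_pos_rate, base_rate

Both rates are then compared against base_rate to judge departure from chance.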
In general, we found music preference to be diagnostically weak: the sensitivity rates were quite low overall (<.50), and the popularity of particular music styles was quite similar among nondrug-exposed and drug-exposed adolescents. Pop/dance, the most popular music style (48% response), had a higher than expected false positive rate for cigarettes (.56) but lower than chance sensitivity for smokeless tobacco (.36) and marijuana (.39). Modern rock, the second most popular style (41% response), had a low false positive rate for alcohol (.32). Hard rock had an elevated sensitivity for smokeless tobacco (.23). Rap music showed elevated sensitivity to crack (.36), classic rock to cocaine (.28), punk rock to other hard drugs (.09), and heavy metal to all seven drugs (range .17-.32). False positive rates for heavy metal preference were lower than chance for all but crack exposure
(range .05-.09).

The efficiency of using music preference as a diagnostic tool can only be determined by specific weightings of the error rates (illustrated in the sketch below). Health providers who overgeneralize the statistical associations of music with drug exposure do so at the risk of their credibility with non-drug-using adolescents and others. Health professionals wanting to screen adolescents for substance use should use more valid and reliable methods, such as established self-report screening surveys and biologic assays.³
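The point about weightings can be made concrete with a hedged sketch (the cost parameters are hypothetical and not part of the original analysis): the expected cost of screening on a music style depends on the prevalence of exposure and on how the two kinds of error are valued.

    # Hypothetical weighting of the two error rates; costs are illustrative only.
    def expected_cost(sensitivity, false_pos_rate, prevalence,
                      cost_false_neg=1.0, cost_false_pos=1.0):
        miss_rate = (1 - sensitivity) * prevalence      # exposed but not flagged
        alarm_rate = false_pos_rate * (1 - prevalence)  # flagged but not exposed
        return cost_false_neg * miss_rate + cost_false_pos * alarm_rate

Under equal costs, a screen with sensitivity below .50 misses most exposed adolescents; raising cost_false_pos captures the credibility risk of mislabeling non-users.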
Clyde W. Dent, PhD
Jon Galaif
Steve Sussman, PhD
Alan W. Stacy, PhD
Dee Burton, PhD
Brian R. Flay, PhD

Clyde W. Dent, Jon Galaif, Steve Sussman, and Alan W. Stacy are with the Institute for Health Promotion and Disease Prevention, University of Southern California. Dee Burton and Brian R. Flay are with the Prevention Research Center, School of Public Health, University of Illinois at Chicago. Requests for reprints should be sent to Clyde W. Dent, PhD, Institute for Health Promotion and Disease Prevention Research, University of Southern California, 1000 S Fremont Avenue, Suite 641, Alhambra, CA 91803-1358.
Acknowledgment
This research was supported by Grant No. CA44907 from the National Cancer Institute.
References
1. Brown EF, Hendee WR. Adolescents and their music. JAMA. 1989;262:1659-1663.
2. Wilson JMG, Jungner G. Principles and Practice of Screening for Disease. Geneva, Switzerland: World Health Organization; 1968. Public Health Papers No. 34.
3. Klitzner M, Schwartz RH, Gruenewald P, Blasinsky M. Screening for risk factors for adolescent alcohol and drug use. AJDC. 1987;141:45-49.
VA Mortality Reporting for World War II Army Veterans

Mortality follow-up studies of World War II veteran cohorts provide a valuable opportunity to study the natural history of disease and the effects of selected military occupational exposures. However, Department of Veterans Affairs (VA) records can be used for veteran death reporting only if their completeness is carefully ascertained. There have been recent studies concerning the completeness of VA death reporting for Vietnam era veterans,¹⁻³ but the last corresponding study for World War II veterans was conducted more than 2 decades ago.⁴ Therefore, a new study was needed to determine the current completeness of VA mortality reporting on World War II veterans.

We began such a study by selecting an initial random sample of 269 deaths of men born in the years 1917 to 1927 from a source independent of the current VA death reporting system, namely a roster of 1980 Texas state deaths used in an earlier study.¹ Veteran status was determined in an independent process, which yielded a sample of 198 known World War II veterans; the 132 with Army service were selected to be sent to the VA.

The VA registers veteran deaths on its BIRLS (Beneficiary Identification and Records Locator Subsystem) file. Of the 132 deaths sent to BIRLS, 122 were found as deaths on BIRLS, all showing the correct date of death. The 10 remaining records included those not found at BIRLS (3), those found at BIRLS with the wrong death date (1), and those shown alive on BIRLS (6). Thus, 92.4% of a sample of independently ascertained World War II veteran deaths were correctly reported to the VA. This rate is slightly lower than that found 20 years ago (98%)⁴ but slightly higher than the rates in recent studies of Vietnam era veterans.¹,² However, because of the small size of our sample, these differences could be attributed to sampling error (the sketch at the end of this letter makes this concrete).

If a 92% rate of mortality ascertainment were considered satisfactory, or if statistical comparisons were made between groups whose composition was such that no differences in VA mortality ascertainment would be suspected, then VA records might be successfully used by themselves in mortality follow-up studies of World War II Army veterans. More likely, other mortality reporting sources would
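As a rough check on the sampling-error point (our illustration, not from the letter), a normal-approximation 95% confidence interval for the observed ascertainment rate is wide:

    # 122 of 132 Army veteran deaths correctly reported: rate and approximate
    # 95% confidence interval (normal approximation; illustrative only).
    import math

    found, total = 122, 132
    rate = found / total                                # ~0.924
    se = math.sqrt(rate * (1 - rate) / total)           # binomial standard error
    low, high = rate - 1.96 * se, rate + 1.96 * se
    print(f"{rate:.3f} (95% CI {low:.3f}-{high:.3f})")  # ~0.879-0.969

An interval spanning roughly nine percentage points is consistent with the letter's caveat that differences of a few points across studies may reflect sampling variation.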