
INTEGRATING TECHNOLOGY AND READING INSTRUCTION WITH CHILDREN WHO ARE DEAF OR HARD OF HEARING: THE EFFECTIVENESS OF THE CORNERSTONES PROJECT


YE WANG AND PETER V. PAUL

WANG IS AN ASSISTANT PROFESSOR, DEPARTMENT OF COMMUNICATION SCIENCES AND DISORDERS, MISSOURI STATE UNIVERSITY, SPRINGFIELD. PAUL IS A PROFESSOR, SCHOOL OF TEACHING AND LEARNING, COLLEGE OF EDUCATION AND HUMAN ECOLOGY, THE OHIO STATE UNIVERSITY, COLUMBUS. HE IS ALSO EDITOR OF THE AMERICAN ANNALS OF THE DEAF.

Note: This article was evaluated under the editorship of Donald F. Moores.

In a comparison between the Cornerstones approach—a literature-based, technology-infused literacy project—and an instructional method designated the Typical approach, a mixed-method design was used to answer three research questions: (a) Will children who are deaf or hard of hearing demonstrate differences in beginning reading skills as measured by three outcome variables: Identification of Words in Print (or Word Identification), Word Knowledge, and Story Comprehension? (b) Are there carryover effects from the Cornerstones approach to the use of the Typical approach in subsequent experiments? (c) What is the feasibility of using the Cornerstones approach for literacy instruction? There were significant differences between the Typical and Cornerstones approaches in Word Identification in all three experiments and in Story Comprehension in Experiments 1 and 2, though none in Word Knowledge or in Story Comprehension in Experiment 3. Teacher feedback provided some evidence for the feasibility of using Cornerstones in the classroom.

Historically, the improvement of reading skills in deaf and hard of hearing children has presented challenges to educators and researchers (King & Quigley, 1985; Paul, 1998, 2009; Schirmer, 2000; Trezek, Wang, & Paul, 2010). Much of the research has been focused on a few variables within a specified domain such as the text (e.g., word identification, vocabulary knowledge, syntax), reader (e.g., prior knowledge, metacognition), task (e.g., type of assessment used), and context (e.g., purpose of reading), or the interactions of two or more of these domains. Knowledge of literacy variables has been advanced in areas such as

phonemic awareness and phonics (e.g., Conrad, 1979; Dyer, MacSweeney, Szczerbinski, Green, & Campbell, 2003; LaSasso, Crain, & Leybaert, 2003; Nielsen & Luetke-Stahlman, 2002; Trezek & Wang, 2006; Trezek, Wang, Woods, Gampp, & Paul, 2007; Wang, Trezek, Luckner, & Paul, 2008), vocabulary (e.g., Fischler, 1985; MacGinitie, 1969; Paul, 1996; Paul & Gustafson, 1991; Paul, Stallman, & O’Rourke, 1990; Silverman-Dresner & Guilfoyle, 1972; Walter, 1978), comprehension of questions (e.g., Andrews, Winograd, & DeVille, 1994; Jackson, Paul, & Smith, 1997; Schirmer & Winter, 1993), and metacognition (e.g., Andrews & Mason,


1991; Banner & Wang, 2009; Davey, 1987; Ewoldt, 1986; Strassman, 1992). Nevertheless, there is a need to implement and assess the merits of using this knowledge to improve beginning literacy skills. In essence, there is a need for intervention research, a form of applied research with a focus on evaluating literacy instructional practices. In a perusal of the literature on children with typical hearing, it is not difficult to encounter many studies, especially those involving specific “reading programs” such as Reading Recovery and Reading One-One (see reviews in Snow, Burns, & Griffin, 1998; Trezek et al., 2010). This line of research has yielded a substantial body of findings with respect to effective practices as well as a deeper understanding of basic processes of literacy (see, e.g., Barr, Kamil, Mosenthal, & Pearson, 1991; Cain & Oakhill, 2007; Israel & Duffy, 2009; Kamil, Mosenthal, Pearson, & Barr, 2000; Pearson, Barr, Kamil, & Mosenthal, 1984; Ruddell & Unrau, 2004). The situation is quite different with respect to reading research, specifically instructional or intervention research, on deaf and hard of hearing students (see, e.g., reviews in King & Quigley, 1985; Paul, 1998, 2009; Schirmer & McGough, 2005; Trezek et al., 2010). There are several reasons for the dearth of intervention research, ranging from the low incidence of hearing impairment to the difficulty involved in conducting and designing such research in educational settings with intact classrooms. Some of these difficulties are not unique to deafness; indeed, they can be found in investigations among other populations (see, e.g., discussions in Ruddell & Unrau, 2004; Snow et al., 1998; Snowling & Hulme, 2005). There are some data on the use of language and literacy approaches and

programs with deaf and hard of hearing children such as the Language Experience Approach, whole language, and Reading Milestones (see, e.g., King, 1984; King & Quigley, 1985; LaSasso & Mobley, 1997; see also reviews in Paul, 1998; Schirmer, 2000; Trezek et al., 2010). However, there does not seem to be any intervention research on the effectiveness of programs or instructional practices involving the use of group designs (see reviews in Luckner, Sebald, Cooney, Young, & Goodwin, 2005/2006; Trezek et al., 2010). Although it is possible to find reports on the importance and value of using technology in conjunction with literacy activities, there does not seem to be much published research on the effects of technology-infused literacy practices (see reviews in Schirmer & McGough, 2005; Trezek et al., 2010). In short, there is a need to evaluate a technology-infused instructional program that incorporates what is known about effective practices involving specific components such as word knowledge and comprehension, including oral or through-the-air story comprehension.

The purpose of the present study was to evaluate the effectiveness of the Cornerstones approach, a literature-based, technology-infused literacy project. A mixed-method research design (quasi-experimental and qualitative) was used to answer three research questions: (a) Will children who are deaf or hard of hearing demonstrate differences in beginning reading skills as measured by three outcome variables: Identification of Words in Print (or Word Identification), Word Knowledge, and Story Comprehension? (b) Are there carryover effects from the Cornerstones approach to the use of the Typical approach in subsequent experiments? (c) What is the feasibility of using the Cornerstones approach for literacy instruction?

Method

Participants and Setting
Participants included five teachers with 22 students from New England in a diversity of program types, which included one oral, two simultaneous speaking and signing (i.e., Total Communication), and two American Sign Language–English bilingual/bicultural (bi-bi) classrooms.

Teacher Participants
All five teachers were state-certified Caucasian hearing females, three of whom had 18 years of teaching experience, and two of whom had 2 years of teaching experience.

Student Participants
The student participants had an age range of 7–11 years, and their hearing losses ranged from mild to profound. Prior to the students’ participation in the Cornerstones research project, their reading abilities were assessed by the Cornerstones staff using the Classroom Reading Inventory (Silvaroli, 1975). Reading levels ranged from pre-reading to third grade. Of the 22 student participants, 16 were Caucasian, 5 were Hispanic, and 1 was Asian. One was diagnosed as having attention deficit disorder, one had cerebral palsy, and one was receiving occupational and physical therapy. Seven students had either parents or siblings (or both) with hearing loss, and 15 students had no reported deaf individuals in their family besides themselves (see Table 1). The study used convenience sampling; thus, students were located in five intact classrooms.

Materials
The Cornerstones Project received funding from the U.S. Department of Education through the Steppingstones of Technology Innovation for Students With Disabilities Program


Table 1
Student Participants’ Demographic Information

ID | Sex | Ethnicity | Age | IQ | Additional disabilities | Age at onset | Degree of hearing impairment | Reading grade level | Hearing status of parents or siblings | Program
A1 | M | C | 9 | Average | None | Birth | Severe to profound | 3.0 | Hearing parents, hard of hearing brother | TC
A2 | M | H | 10 | Average to high | None | Birth | Severe | 3.0 | Deaf parents, 3 deaf brothers | TC
A3 | F | C | 11 | Low average | Attention deficit disorder | 18 mos. | Profound | 2.0 | All hearing | TC
A4 | M | H | 11 | Low average | None | Birth | Mild | N/A | All hearing | TC
A5 | M | H | 11 | Low average | None | Birth | Severe | N/A | All hearing | TC
B1 | M | C | 7 | Superior to average | Requires physical/occupational therapy | Birth | Profound | N/A | All hearing | Bi-bi
B2 | F | C | 8 | N/A | N/A | Infancy | Profound | N/A | All hearing | Bi-bi
B3 | M | C | 7 | High | None | 4 mos. | Profound | N/A | All hearing | Bi-bi
B4 | M | C | 7 | N/A | N/A | Birth | Severe to profound | N/A | All hearing | Bi-bi
C1 | M | H | 8 | N/A | N/A | Birth | Hard of hearing | 2.0 | All hearing | Bi-bi
C2 | F | A | 8 | N/A | Cerebral palsy | Birth | Can hear with auditory trainer on | 1.5 | All hearing | Bi-bi
C3 | F | C | 7 | N/A | None | Birth | N/A | 1.0–1.5 | All hearing | Bi-bi
C4 | F | C | 8 | N/A | None | 4 yrs. | Sensorineural | 2.0 | All hearing | Bi-bi
D1 | M | C | 7 | Superior | N/A | Birth | Profound | 2.3 | Deaf parents, hearing sister | TC
D2 | F | C | 7 | N/A | N/A | Birth | Profound | N/A | Hearing parents, deaf brother | TC
D3 | F | H | 7 | N/A | N/A | Birth | Profound | N/A | Deaf parents, deaf sister, hearing brother | TC
D4 | F | C | 7 | N/A | N/A | Birth | Profound | N/A | Deaf parents, hearing brother | TC
D5 | M | C | 7 | N/A | N/A | Birth | Profound | N/A | All hearing | TC
E1 | M | C | 7 | Average | None | 14 mos. | Severe | 2.4 | All hearing | Oral
E2 | F | C | 9 | Average | None | 9 mos. | Profound | 1.6 | Deaf parents | Oral
E3 | F | C | 8 | Superior | None | 4.5 yrs. | Severe to moderately severe | 3.0 | All hearing | Oral
E4 | F | C | 7 | Average | None | 12 mos. | Profound | 1.8 | All hearing | Oral

Notes. C = Caucasian. H = Hispanic. A = Asian. TC = Total Communication. Bi-bi = bilingual-bicultural.

and from the Carl and Ruth Shapiro Family National Center for Accessible Media, which is based in Boston at the public broadcasting station WGBH. The technology-infused Cornerstones approach focuses on a deep understanding of a story through word study, with practice and skills in word recognition, and on the development of background knowledge, for the purpose of improving facility with or

accessibility to written English. Cornerstones’ literacy objectives are for students to (a) recognize a body of vocabulary words in print, (b) learn about words conceptually and understand multiple aspects (e.g., meanings, nuances) of each word, and (c) increase background knowledge to facilitate comprehension of written materials, particularly in a story comprehension mode.

Each Cornerstones unit was built around an animated story taken from the public television literacy series Between the Lions. Lesson guides offered a sequence of instruction that made it possible for teachers to focus their language arts activities on the Cornerstones story during a daily 2-hour block over several days. All Cornerstones materials (except the television program) including lesson


guides, digitized videos, supporting interactive activities and materials, and lesson plans, were freely available on the Web (http://pbskids.org/lions/cornerstones/) and could be accessed in the classroom or the home. Table 2 describes a characteristic Cornerstones lesson for a day. Six features define the Cornerstones approach: (a) a video-based story from public television’s literacy series Between the Lions with verbatim and edited captions; (b) the use of technology to adapt stories to students’ communication modes, such as animated stories in American Sign Language, Signing Exact English, and Cued Speech/Language, as well as an online hypertext storybook; (c) clearly defined literacy objectives with a number of words (N = 20) for children to learn in depth (more than one dimension of each word), and 10 comprehension aspects, divided over six or seven lessons in each unit; (d) integration of research-based instructional practices; (e) a sequence of lessons that requires focused use in a consecutive time frame (e.g., 2 hours every day for 6 days and, in one case, 8 days); (f) the use of technology to present materials in interactive formats such as games, story maps, graphical organizers, character templates, and a large bank of clip art.

Design and Procedure
The research design for the present study incorporated both quantitative and qualitative aspects. The quantitative aspect entailed the use of a group analysis of the students’ performance on tests involving identification of words in print, word knowledge, and story comprehension. The qualitative aspect included analyses of classroom observation checklists, informal teacher debriefings, and teacher focus group interviews to gather information about the implementation of Cornerstones, its feasibility for use in literacy instruction, and its effect on the approach that was typically used in the selected classrooms.

Table 2
A Characteristic Cornerstones Lesson

I. Everyday routine
  1. Reviewing words and concepts from previous lessons
  2. Building word knowledge of the day’s words through a variety of word study techniques
  3. Read-aloud of the story (print or video), with think-aloud comments and questions
  4. Guided reading or shared reading of some or all of the story, with questions and responses
  5. Writing tasks
II. Other activities that might be included in a given day
  1. Retelling or role-playing of the whole story or a segment
  2. Independent reading of the day’s segment
  3. Classroom language games
  4. Lessons incorporating Dolch sight vocabulary words found in the story
  5. Independent or small-group activities to reinforce lessons
  6. Read-aloud of a different book with the same theme or concepts as those in the Cornerstones story


Quantitative Design
The project utilized an alternating treatment design. Two instructional methods, Cornerstones and Typical, were applied with each study participant. That is, each teacher conducted both Cornerstones and Typical lessons, and each student participated in both the Cornerstones and Typical interventions. For stories presented with the Typical approach, teachers were not given any specific guidelines or materials to use during their lessons. Teachers were asked to present the stories to their students in whatever manner was typical of their teaching techniques and styles. Teachers did not know the nature of the assessments until after all testing was completed. Each classroom had one observer to document the Typical lessons (see below, under “Qualitative Design”). The amount of time spent teaching a story, using a Typical approach, varied from teacher to teacher. Some teachers

spent more or less time depending on the length or genre of the story. Some teachers focused on vocabulary and required their students to read the text independently, whereas others chose only to read the story aloud with no expectation of students being able to understand and read the printed text on their own. The stories in each experiment were judged to be equivalent in genre and textual difficulty. Equivalency was determined through ratings by the project staff, who used criteria such as total number of words, sentence length, syntax, number of probable unknown words, and difficulty of the concepts. There were six stories/units: three Typical and three Cornerstones (see Table 3). The order of treatments was alternated over three experiments and counterbalanced with a 1-week break between the two interventions in each experiment: In Experiment 1, the Typical intervention came first, then the Cornerstones intervention. In Experiment 2, the Cornerstones intervention came first, then the Typical intervention. In Experiment 3, the Typical intervention came first, then


Table 3
Selection of Stories

Experiment (genre) | Approach | Title | Author | Reading grade level
Experiment 1 (fable) | Typical 1 | The Hen and the Apple Tree | Arnold Lobel | 2.5
Experiment 1 (fable) | Cornerstones 1 | The Fox and the Crow | Aesop | 2.6
Experiment 2 (story with repetitive language) | Typical 2 | King Bidgood’s in the Bathtub | Audrey Wood | 2.1
Experiment 2 (story with repetitive language) | Cornerstones 2 | Joseph Had a Little Overcoat | Simms Taback | 2.0
Experiment 3 (story centered on humorous/talking animals) | Typical 3 | Pigsty | Mark Teague | 3.1
Experiment 3 (story centered on humorous/talking animals) | Cornerstones 3 | Click, Clack, Moo: Cows That Type | Doreen Cronin | 2.1

Note. The reading grade level was determined with the Flesch-Kincaid procedure.
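The table’s note refers to the Flesch-Kincaid procedure. For reference, the standard Flesch-Kincaid grade-level formula is shown below as general background; the article itself reports only the resulting grade levels.

```latex
% Standard Flesch-Kincaid grade-level formula (background reference only;
% not reproduced in the original article).
\text{Grade level} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right)
  + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59
```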

the Cornerstones intervention (see Table 4 for the research design). Teachers were not exposed to Cornerstones until after they had taught their first Typical unit. Therefore, the difference between the Cornerstones and Typical interventions for Experiment 1 was observable. In subsequent experiments, project staff expected teachers to carry over some of the principles of Cornerstones into their Typical lessons. If this occurred as expected, it might mitigate the differences between the two interventions, but would be evidence that teachers

considered Cornerstones to have some appeal or effectiveness in regard to student learning. Because Cornerstones principles require more preparation and instructional time, teachers would not implement these principles in their Typical lessons unless they felt that the extra time would be beneficial. Understanding that carryover would be inevitable, the research team frequently observed instruction to document implementation. In addition, teachers prepared daily logs of their instruction and reflected on student engagement and learning. This design

permitted the evaluation of the feasibility of the Cornerstones approach from one important perspective—that of the teachers involved.
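As a compact illustration of the alternating, counterbalanced schedule just described (and summarized in Table 4), the sketch below encodes it as a plain data structure. Python is used only for illustration; the structure and the printed output are not part of the project’s materials, although the story titles and approach labels come from Table 3.

```python
# Illustrative encoding of the counterbalanced design (see Tables 3 and 4).
# Each experiment pairs one Typical and one Cornerstones story; every
# intervention is preceded by a pretest and followed by a posttest,
# with a 1-week break between the two interventions.
MEASURES = ("Word Identification", "Word Knowledge", "Story Comprehension")

SCHEDULE = {
    1: [("Typical", "The Hen and the Apple Tree"),
        ("Cornerstones", "The Fox and the Crow")],
    2: [("Cornerstones", "Joseph Had a Little Overcoat"),
        ("Typical", "King Bidgood's in the Bathtub")],
    3: [("Typical", "Pigsty"),
        ("Cornerstones", "Click, Clack, Moo: Cows That Type")],
}

for experiment, interventions in SCHEDULE.items():
    for i, (approach, story) in enumerate(interventions, start=1):
        print(f"Experiment {experiment}, intervention {i}: pretest -> "
              f"{approach} ({story}) -> posttest on {', '.join(MEASURES)}")
        if i == 1:
            print("    ... 1-week break ...")
```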

Qualitative Design
The qualitative design relied on triangulation across three major components: observations, teacher debriefings, and a teacher focus group interview. One observer was assigned to each classroom, observing every day during each intervention (i.e., Cornerstones and Typical). The main goal of the observation was to document the

Table 4
Quantitative Research Design

Experiment 1: Pretest (WP, WK, Comp) → Intervention 1, Typical (story 1) → Posttest (WP, WK, Comp) → 1-week break → Pretest (WP, WK, Comp) → Intervention 2, Cornerstones (story 2) → Posttest (WP, WK, Comp)
Experiment 2: Pretest (WP, WK, Comp) → Intervention 3, Cornerstones (story 3) → Posttest (WP, WK, Comp) → 1-week break → Pretest (WP, WK, Comp) → Intervention 4, Typical (story 4) → Posttest (WP, WK, Comp)
Experiment 3: Pretest (WP, WK, Comp) → Intervention 5, Typical (story 5) → Posttest (WP, WK, Comp) → 1-week break → Pretest (WP, WK, Comp) → Intervention 6, Cornerstones (story 6) → Posttest (WP, WK, Comp)

Notes. WP, Identification of Words in Print (Word Identification). WK, Word Knowledge. Comp, Story Comprehension.


manner in which the intervention was implemented. The observers used a checklist containing the elements of both interventions to note which lessons the teacher included that day, the instructional practice, the materials used, and the amount of time spent on each task. The checklist included the list of target words with dimensions (e.g., multiple meanings, nuances) and comprehension aspects (e.g., levels of questions). The observers also collected descriptive information about the classroom environment, the children’s use of materials, the interaction between the teacher and the students, and other information. At the completion of each intervention, the project staff met with teachers individually to obtain information on the use and feasibility of the Cornerstones approach and, based on observation data, to ensure that each teacher adhered, as much as possible, to the developed protocol for Cornerstones. A structured interview format was used to focus on each component of Cornerstones, its ease of use for the teachers, and its value to the students. The project staff were interested in documenting the carryover effects and in recording teachers’ perceptions regarding the task of proceeding with the subsequent Typical interventions in the second and third experiments after being exposed to the Cornerstones intervention. The existence of carryover effects would indicate a few contributions of the Cornerstones approach. Focus groups were held with participating teachers before and after the project. Prior to attending the focus group interview at the end of the project, teachers were asked to respond to a set of written questions. The questions were open ended and worded so as not to lead the interviewee. For instance, instead of asking them to tell what they liked about Cornerstones, the questions encouraged teachers to

reflect: “Briefly describe the key features of Cornerstones for another teacher who has never heard of it.” “Describe your impressions of Cornerstones.” “What would other teachers have to know to implement Cornerstones?” Such questions allowed individuals both to describe and to evaluate the approach from their standpoints. Project consultants developed questions, and a WGBH staffer who was experienced in moderating focus groups led the group.

Assessment
Pretests and posttests were administered to the students before and after each intervention; thus, there were six pretest and six posttest results for each student. The composition of each test was influenced by the content of the story. Therefore, there were two different but equivalent sets of pretests and posttests within each experiment, one set per story. For each story, the pretest and posttest were identical. All students were exposed to the same interventions and participated in all experiments. The dependent or outcome variables were Identification of Words in Print (also referred to as Word Identification), Word Knowledge, and Story Comprehension. These outcomes were organized in terms of increasing complexity, with Word Identification preceding Word Knowledge, which, in turn, preceded Story Comprehension. It was assumed that students progressed from being able to identify words to gaining a more in-depth understanding of them, which in turn led to comprehending more about the stories they were reading or that were being read to them. Students were pulled out of class and tested individually. All assessments were conducted face-to-face, by two project examiners, one of whom was a deaf adult, in the communication mode most comfortable for the student.

The examiner, who was hearing, tested the students in the oral and Total Communication programs, whereas the deaf examiner (who had speechreading and signing skills) tested the students in the bi-bi programs. Both examiners practiced and worked together to ensure the fidelity of assessment administration (which was evaluated prior to the experiments by project staff, who did not participate in the assessment). The examiners assessed students on two measures through pre- and posttests, Word Identification and Word Knowledge, and via posttests only for Story Comprehension (unless students had read the story before). The administrations of the tests were videotaped; this made it possible to capture both the examiner’s questions and remarks and the student’s responses. The tapes were then transcribed. (Sign language or signing without voice was translated into English first.) Project staff double-checked the transcriptions against the videos. Two project staff who had not administered the tests rated the students’ responses individually based on the transcripts. Interobserver agreement (IOA) was conducted on 100% of the assessments that included Word Identification, Word Knowledge, and Story Comprehension. IOA ranged between 95% and 100%. Disagreements were resolved through discussion.
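The article reports item-level interobserver agreement of 95%–100% across all assessments. A simple percent-agreement calculation of the kind commonly used for such checks is sketched below; the function name, data layout, and example ratings are illustrative assumptions, not the project’s actual procedure or data.

```python
def percent_agreement(rater_a, rater_b):
    """Item-by-item percent agreement between two raters' scores.

    rater_a, rater_b: equal-length sequences of item scores assigned
    independently by two raters. Illustrative sketch only.
    """
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("Ratings must be non-empty and the same length.")
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * agreements / len(rater_a)

# Example: two raters agree on 19 of 20 items -> 95.0
print(percent_agreement([2, 1, 0, 2] * 5, [2, 1, 0, 2] * 4 + [2, 1, 0, 1]))
```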

Word Identification
Identification of words in print, also called word identification, word recognition, word attack, word analysis, decoding, or single-word reading, refers to a student’s ability to identify a printed word by saying or pronouncing it. For deaf and hard of hearing students, identification can refer to the use of speech or signs (Paul, 2009). Thus, for Word Identification purposes, we documented the students’ performance with respect to pronouncing and/or signing the word independently.


Twenty words were used in each story. Students were shown a series of 35 words (20 of which were target words and 15 nontarget easy-to-recognize vocabulary words). The nontarget words were added to develop students’ confidence and ensure that they recognized several words on each page. The easy-to-recognize nontarget words included some, not, come, my, apple, eat, can, see, yes, look, boy, house, car, red, cat, up, one, but, get, dog, blue, day, mom, girl, toy, school, green, big, bus, and jump. There were seven pages, with five words on a page. The words were in a large-print, child-friendly font. Students were shown one page at a time and asked to stamp, circle, or put a sticker on those words they immediately recognized. Students were then asked to read aloud the words they had stamped, circled, or stickered, one by one, orally and/or in sign. Fingerspelling was acceptable if the student seemed to immediately recognize the word. If the student appeared to copy the letters one by one, relatively slowly, this was not considered an acceptable response, and the child did not receive credit for identifying that word. Fingerspelling was permitted because some words, such as crow, do not have a specific sign. Praise and encouragement were provided throughout the test. Analysis of Word Identification was straightforward. Credit was given for signing and/or pronouncing the target words independently on the pretest and posttest. The students gained 2 points for each successful identification of a target word.

Word Knowledge
Word knowledge, also referred to as lexical knowledge, vocabulary knowledge,

and word comprehension, entails comprehending the meaning of a word. The Word Knowledge task had two aspects, one for the target words the student accurately identified in the Word Identification pretest and a separate task for those target words the student missed. Twenty individual target vocabulary cards were displayed, one at a time, in random order. The examiner tracked the words the child had identified correctly and incorrectly during the Word Identification subtest. When a student accurately identified a target word on the Word Identification subtest, the examiner would simply point to it and ask two questions: “Can you tell me about this word?” and “What else do you know about this word?” The examiner would probe further to ensure that the student had brought up everything he or she wanted to say about that particular word. When a student was unable to identify a target word, the examiner would point to it and read aloud and/or sign or fingerspell it. The examiner would do this twice. The examiner would then ask the same questions listed above, prompting the student to tell as much as he or she could pertaining to that target word. The order of the words for the Word Knowledge subtest was the same for all students, regardless of whether or not they had identified a target word. For Word Knowledge, students’ responses were scored according to the following scale: no points were awarded for nonresponses or irrelevant responses; 1 point for features, characteristics, habits, examples, or sentences associated with a specific meaning with little elaboration, or the copying of a sentence from the story with the word in it plus some elaboration (though mere copying did not earn a point); and 2 points for each different meaning or definition of the word.

If the meaning or definition was implicit via the use of sentences plus description or the use of synonyms, then 2 points were awarded. One additional point was awarded for each additional dimension of the word. There were no limitations on how many points a student could gain on a single word for Word Knowledge.
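A minimal sketch of the Word Knowledge scale just described, assuming Python and an illustrative response representation: the project’s scoring was done by human raters from transcripts, so this is only a mechanical paraphrase of the rubric, not the project’s instrument.

```python
def score_word_knowledge(meanings, extra_dimensions, partial_associations):
    """Mechanical paraphrase of the Word Knowledge scale (illustrative only).

    meanings: number of distinct meanings/definitions given (2 points each)
    extra_dimensions: additional dimensions of the word beyond a meaning
        (1 point each)
    partial_associations: features, examples, habits, or story sentences with
        some elaboration that fall short of a full meaning (1 point each)
    There was no ceiling on the points a student could earn for a single word.
    """
    return 2 * meanings + extra_dimensions + partial_associations

# Example: one full definition plus one nuance and one example -> 4 points
print(score_word_knowledge(meanings=1, extra_dimensions=1, partial_associations=1))
```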

Story Comprehension
Short-answer questions were used in the project to determine whether students could recall and make inferences from a text they had read. Comprehension was ascertained using two levels of questions: literal and inferential. To assign the status of literal or inferential to questions, we employed the paradigm discussed by Pearson and Johnson (1978; see also discussions in Paul, 1998, 2009; Trezek et al., 2010). Much as we did with the target words for each story, we ascertained the equivalency of the two sets of questions—one for the Typical stories and one for the Cornerstones stories within each experiment. The sets of questions within one experiment did not need to be equivalent to the sets within the other experiments. To determine whether each story was new to students and to obtain a pretest score in that case, examiners showed the storybook to each child individually and asked if he or she had ever seen or read it before. If the student responded positively, then a series of comprehension questions were asked. For these children, posttest scores were compared to pretest scores. Most children were unfamiliar with the books. During the posttest, the examiner confirmed the “name signs” (with pictures) for the main characters in the story that had been used by the classroom teacher during the teaching of the story. For oral students, the spoken form, which was matched with


the picture, was confirmed. The book was shown to the student, and warmup questions were asked. The book was then put away and the student was not allowed to look back at the book and its printed text during the Story Comprehension posttest. There were 10 comprehension questions for Experiments 1 and 3. Experiment 2 had 9 comprehension questions. If the child gave an answer that was off track, the examiner did not try to get the student back on track. If the child was on the right track but the answer was incomplete, the examiner would ask if the student had anything else to add. For Story Comprehension, each correct answer was awarded 2 points. Each incomplete or partly correct answer was awarded 1 point. Incorrect or irrelevant answers were awarded no points.
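A small sketch of the Story Comprehension scoring just described (2 points for a correct answer, 1 for an incomplete or partly correct answer, 0 otherwise), assuming Python; the question count and the correctness judgments in the example are illustrative, since those judgments were made by project raters.

```python
# Illustrative Story Comprehension scoring: 2 = correct, 1 = partly correct,
# 0 = incorrect or irrelevant (judgments were made by project raters).
POINTS = {"correct": 2, "partial": 1, "incorrect": 0}

def score_story_comprehension(judgments):
    """judgments: list of 'correct' / 'partial' / 'incorrect', one per question."""
    return sum(POINTS[j] for j in judgments)

# Example: a 10-question posttest with 5 correct, 3 partial, 2 incorrect -> 13
print(score_story_comprehension(["correct"] * 5 + ["partial"] * 3 + ["incorrect"] * 2))
```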

Analysis
Two sets of data were analyzed in the study: the quantitative outcome data and the qualitative implementation data. For the quantitative outcome data, the raw scores on Word Identification and Word Knowledge obtained on pre- and posttests for each student (N = 22) were analyzed using a two-tailed paired-sample t test to compare the difference scores (difference score = posttest score − pretest score) for the Typical story and the Cornerstones story in each experiment. Only the raw scores for Story Comprehension in the posttests within each experiment were compared in the t test, because of insufficient data from pretests. We also calculated the mean score and standard deviation of all outcome variables in each story. For all paired samples (i.e., the Typical story scores and the Cornerstones story scores within each experiment), Cohen’s d (Cohen, 1988) was calculated to determine the effect size for all outcome variables.
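A brief sketch of this quantitative analysis, assuming Python with NumPy and SciPy; the arrays are placeholders for per-student difference scores, not the study’s data. The reported effect sizes appear consistent with a Cohen’s d computed from the two sets of difference scores with a pooled standard deviation (e.g., Experiment 1 Word Identification: (7.82 − 14.41) / √((5.779² + 8.421²)/2) ≈ −0.91).

```python
import numpy as np
from scipy import stats

def compare_conditions(typical_diff, cornerstones_diff):
    """Two-tailed paired-sample t test on difference scores plus a pooled-SD
    Cohen's d, mirroring the analysis described in the text. Inputs are
    per-student difference scores (posttest - pretest); placeholder data only.
    """
    typical_diff = np.asarray(typical_diff, dtype=float)
    cornerstones_diff = np.asarray(cornerstones_diff, dtype=float)

    t_stat, p_value = stats.ttest_rel(typical_diff, cornerstones_diff)

    pooled_sd = np.sqrt((typical_diff.std(ddof=1) ** 2 +
                         cornerstones_diff.std(ddof=1) ** 2) / 2)
    cohens_d = (typical_diff.mean() - cornerstones_diff.mean()) / pooled_sd
    return t_stat, p_value, cohens_d

# Illustrative call with made-up difference scores for two students only:
print(compare_conditions([8, 6], [15, 13]))
```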

While the focus of the outcome data was on the students, the focus of the implementation data was on the teachers. The implementation data included the observation checklist data and the debriefing and focus group interview data. The observation checklist data were based on teacher and observer logs, which permitted analysis of the implementation both within an experiment and across all experiments. The analysis of the observation data consisted of looking at simple descriptive statistics (e.g., means and standard deviations) that compared and contrasted the literacy interventions (i.e., Typical vs. Cornerstones). The variables were the number of lessons, overall length of the entire instructional time for the story, number of words taught, and additional word dimensions (i.e., nuances, meanings) taught. The teachers’ debriefing and focus group interview data, particularly the postproject focus group data, were analyzed via a color-coded system. Emerging themes were identified through the coding process.

Results

Results From the Outcome Data

Word Identification
With respect to Word Identification within Experiment 1, the difference scores of the Typical story for all participants (N = 22) had a mean of 7.82

(SD = 5.779), whereas the mean difference score for the Cornerstones story was 14.41 (SD = 8.421; see Table 5). When the difference scores in Experiment 1 for Word Identification were compared, there was an observed t value of –3.599 and a p value of .002, which indicated that the difference scores for the Cornerstones story were significantly greater than those for the Typical story (p < .05). Similarly, the results indicated that there were statistically significant differences between the Typical story scores and the Cornerstones story scores on Word Identification in Experiment 2 (t = –3.017, p = .007) and Experiment 3 (t = –3.560, p = .002). The students’ Word Identification difference scores showed that the effect size (d) was large in all three experiments: –.913 in Experiment 1, –.869 in Experiment 2, and –.892 in Experiment 3.

Word Knowledge
With respect to Word Knowledge, the t test on difference scores of the Typical and the Cornerstones stories revealed an observed p value of .236 in Experiment 1, .959 in Experiment 2, and .125 in Experiment 3 (see Table 6). The p value was not statistically significant in any of the three experiments. The effect size (d) of students’ difference scores in the Typical and Cornerstones interventions for Word Knowledge was –.282 (medium) in Experiment 1, –.013 (small) in Experiment 2, and –.439 (medium) in Experiment 3.

Table 5
Word Identification: t Test on Difference Scores

Pair | Typical M (SD) | Cornerstones M (SD) | t | Significance (two-tailed)
Pair 1 (T1 vs. C1) | 7.82 (5.779) | 14.41 (8.421) | –3.599 | .002
Pair 2 (T2 vs. C2) | 12.32 (7.094) | 19.36 (8.990) | –3.017 | .007
Pair 3 (T3 vs. C3) | 13.73 (6.080) | 19.32 (6.454) | –3.560 | .002

Notes. T, Typical. C, Cornerstones. The numbers 1, 2, and 3 refer to the experiments.


Table 6
Word Knowledge: t Test on Difference Scores

Pair | Typical M (SD) | Cornerstones M (SD) | t | Significance (two-tailed)
Pair 1 (T1 vs. C1) | 13.68 (6.402) | 15.27 (4.773) | –1.220 | .236
Pair 2 (T2 vs. C2) | 19.23 (5.255) | 19.32 (7.858) | –0.52 | .959
Pair 3 (T3 vs. C3) | 14.36 (6.396) | 17.32 (7.080) | –1.598 | .125

Notes. T, Typical. C, Cornerstones. The numbers 1, 2, and 3 refer to the experiments.


Story Comprehension
For Story Comprehension, the t test on the posttest scores for the Typical and the Cornerstones stories revealed an observed p value of .000 in Experiment 1, .021 in Experiment 2, and .197 in Experiment 3 (see Table 7). The p value was statistically significant in Experiments 1 and 2, but not in Experiment 3. Students’ Story Comprehension posttest scores indicated that d = –.714 (medium) in Experiment 1, –.538 (medium) in Experiment 2, and –.196 (small) in Experiment 3.

Results From the Implementation Data
The patterns that emerged from the observation checklist data illustrated the overall difference in implementation between the Typical and Cornerstones

approaches (see Table 8). For example, the average number of lessons taught in the Typical stories was 4.53 (SD = 0.92), whereas the average in Cornerstones was 6.33 (SD = 0.49). The average hours of instruction in Typical stories was 6.18 (SD = 2.01), whereas the average in Cornerstones was 12.37 (SD = 1.58). Although the average number of words taught in Typical stories (M = 24.87, SD = 14.06) was not much different from the average in Cornerstones (M = 29.20, SD = 2.08), the average additional word dimensions taught in Typical stories (M = 21.40, SD = 14.81) was significantly less than that in Cornerstones stories (M = 68.13, SD = 19.80). Furthermore, the observation checklist data answered the second research question: Are there carryover effects from the Cornerstones approach to the use of the Typical approach in subsequent experiments? If one refers to the counterbalanced de-

Table 7
Story Comprehension: t Test on Posttest Scores

Pair | Typical M (SD) | Cornerstones M (SD) | t | Significance (two-tailed)
Pair 1 (T1 vs. C1) | 11.14 (4.744) | 14.50 (4.668) | –4.581 | .000
Pair 2 (T2 vs. C2) | 10.45 (3.912) | 12.59 (4.043) | –2.488 | .021
Pair 3 (T3 vs. C3) | 12.91 (3.279) | 13.59 (3.660) | –1.334 | .197

Notes. T, Typical. C, Cornerstones. The numbers 1, 2, and 3 refer to the experiments.

sign presented previously in the present article, it can be argued that Cornerstones had a carryover effect on the Typical stories in both Experiments 2 and 3. As indicated in Table 8, comparison of Experiment 1 with Experiment 3 showed a slight increase in the mean number of lessons (from 3.8 to 5.0), a noticeable increase in the length of time spent on a Typical story (from 4.8 to 7.3), and, most interesting for our purposes, a substantial increase in the number of words taught (almost threefold) as well as the number of meanings (more than fourfold). Debriefing and focus group discussions constituted another piece of evidence emphasizing the effects of the Cornerstones treatment, and provided an indication of teachers’ attitudes toward the use of Cornerstones. In general, teachers reported that Cornerstones provided a richer instructional environment than their usual instructional approaches. The availability of high-quality technology-infused materials and comprehensive lesson guides with suggestions for strategies and activities in Cornerstones provided a more stimulating atmosphere for both students and teachers to engage in literacy events. The teachers were in agreement that their students exhibited a high level of motivation, interest, and engagement with the Cornerstones approach. Based on their written responses to the preinterview questions and comments during the focus group, the teachers were unanimous in their praise of and enthusiasm for Cornerstones. They appreciated the emphasis on vocabulary, especially the activities involving multiple meanings, nuances of words, compound words, and idioms. The influence of Cornerstones outside the classroom (and beyond the project) could not be evaluated scientifically. Nevertheless, there


Table 8
Observation Checklist Results

Variable | Exp. 1 T1 M (SD) | Exp. 1 C1 M (SD) | Exp. 2 T2 M (SD) | Exp. 2 C2 M (SD) | Exp. 3 T3 M (SD) | Exp. 3 C3 M (SD) | All T M (SD) | All C M (SD)
Number of lessons | 3.80 (1.10) | 6.00 (0.00) | 4.80 (0.45) | 6.00 (0.00) | 5.00 (0.71) | 7.00 (0.00) | 4.53 (0.92) | 6.33 (0.49)
Hours of instruction | 4.80 (2.43) | 12.15 (1.77) | 6.45 (1.51) | 11.70 (1.43) | 7.30 (1.39) | 13.25 (1.40) | 6.18 (2.01) | 12.37 (1.58)
Words taught | 13.80 (8.38) | 27.80 (0.45) | 24.00 (6.82) | 27.80 (0.45) | 36.80 (15.59) | 32.00 (0.00) | 24.87 (14.06) | 29.20 (2.08)
Additional dimensions of words | 7.40 (4.77) | 52.00 (15.86) | 27.00 (14.46) | 79.40 (19.42) | 29.80 (12.77) | 73.00 (15.17) | 21.40 (14.81) | 68.13 (19.80)

Notes. T, Typical. C, Cornerstones. The numbers 1, 2, and 3 refer to the experiments.

seemed to be a carryover effect due to the use of strategies, such as breaking the story into segments or chunks, and an increase in the focus on words and meanings. In essence, this led to some students’ use of other words and more sophisticated language structures. As one teacher related, One of my children who had language testing, the linguist came in to me and she said, “. . . what did you do with this little boy this year? He is using words like ultimatum.” She said, “I can’t believe this. His language score has just zipped up.” So I think that chunking and the focusing really does help these children, and they do take it and start to apply it.

With respect to the third research question—What is the feasibility of using the Cornerstones approach for literacy instruction?—the results were positive. Teachers were highly enthused about Cornerstones’ emphasis on the importance of learning words

and vocabulary. Teachers’ comments seemed to underscore the purposes and goals of Cornerstones and the manner in which it was structured to accomplish these ends: The way that Cornerstones does it, I think, broadens their knowledge in that they apply it to every word, and not just the few that were present in the story. They’re thinking about all words in that meaning. They started using the terms like “Is this one an idiom?” You know, they were using these terms more and taking more time when they read it. “That one doesn’t make sense, this one must be an idiom” or something.

The teachers liked the ready-made units of Cornerstones as contrasted with what they typically had to do when preparing to teach a new story, and they appreciated being able to choose among a range of activities and materials, all of which provided them with multiple methods for engaging students in the Word Knowledge and Story Comprehension tasks:

Having a ready-made unit that’s for your population, ready to go, makes it so much easier to teach the story, because often 100% of my time is making the tools to match the kids in the population. And so sometimes you can’t get into the element of story and the true literacy things that you are supposed to be teaching and reading, because you are so busy making materials to get them to that place. So for me, that’s one of the things I think is a real plus.

There was praise for the major components of Cornerstones—for example, lesson guides (especially the lesson objectives summary), students’ activity workbooks, games (e.g., BINGO, online games), activities (Destination Library), clip art images, and hypertext. The teachers also constantly reiterated a critical theme: the idea that the design of Cornerstones broke the lessons into “chunks of learning”: Cornerstones structures the teaching of a story into chunks that allow the students the time to soak in smaller parts of vocabulary and text in one lesson per day. Overall, the same amount of vocabulary is covered as well as text but several things happen, students see the vocabulary with more frequency and the students read the text several times over the course of the unit. This is different than having students learn all the vocabulary first and then having them read the story. They lose interest after this typical process and don’t want to revisit the text, thus making it harder [for me] as a teacher to expand their understanding of the deeper meanings of the story.


Conclusions
The present study assessed the efficacy of the Cornerstones approach, which integrates technology with beginning literacy instruction for students who are deaf or hard of hearing. A mixed research design was developed to answer three research questions. The first research question was, Will children who are deaf or hard of hearing demonstrate differences in beginning reading skills as measured by Word Identification (for printed words), Word Knowledge, and Story Comprehension? With respect to these three variables, a within-subject quasi-experimental design was developed to compare students’ outcomes from the Cornerstones approach and the Typical approach in three experiments. The results revealed that there were statistically significant gains from the Cornerstones instruction in all three experiments for Word Identification (p = .002, .007, and .002), and in Experiments 1 and 2 for Story Comprehension (p = .000 and .021), but no significant gains for Word Knowledge (p = .236, .959, and .125) or in Experiment 3 for Story Comprehension (p = .197). The mixed results might be explained in part by the findings associated with the second research question: Are there carryover effects from the Cornerstones approach to the use of the Typical approach in subsequent experiments? The implementation data, especially the observation checklist data, supported a positive answer to the question. Participating teachers enjoyed the features of the Cornerstones units in Experiment 1, and they started to adopt some of these features in subsequent experiments for the stories taught using the Typical approach. Because teachers were required to conduct a Typical unit before they were introduced to the Cornerstones

unit in Experiment 1, the results in Experiment 1 might be a good indication of the difference between the Cornerstones approach and the Typical approach. In regard to Experiment 1, t tests comparing the difference scores of the Typical and Cornerstones interventions revealed that there was a statistically significant difference for Word Identification (t = –3.599, p = .002), but no significant difference for Word Knowledge (t = –1.220, p = .236). At the same time, a t test comparing the posttest scores of the Typical and Cornerstones interventions in Experiment 1 revealed that there was a statistically significant difference for Story Comprehension (t = –4.581, p = .000). In addition to the carryover effects from the Cornerstones approach, there might be other factors that contributed to the indistinguishable difference between the two interventions in Word Knowledge—for example, the rating artifacts of the Word Knowledge test. Historically, it generally has been difficult to assess vocabulary knowledge, much less the vocabulary knowledge of young children who are deaf or hard of hearing (see discussions in Paul, 1998, 2009). It is possible that the rating scale was not sensitive enough to account for nuances provided by the students. Second, assessment fatigue might have influenced the data. During the assessments, the Word Knowledge portion was administered last. Students were requested to say everything they knew about a word, for 20 words, one after the other. Clearly, some students became fatigued, not only because of the number of words, but also because the words were presented out of context. That is, the demands on each student were high; it was up to him or her to determine what to say about each word. In fact, teachers were able to pinpoint instances of children

who used target words such as ultimatum in class correctly yet responded to the word on the posttest with “I don’t know.” Students also became tired on the posttest and did not want to repeat what they had said on the pretest. In addition, it appeared that many students disliked being pulled out of the classroom for testing. These themes emerged in the focus group sessions as well (i.e., from the comments by teachers and examiners). It is also possible that the techniques and activities for developing Word Knowledge in the Cornerstones approach, albeit research based for hearing students (e.g., see Paul, 1998, 2009; Trezek et al., 2010), were not sufficient for students who were deaf or hard of hearing. Given that there has been little or no research comparing the effectiveness of various methods of vocabulary instruction, it is difficult to comment further on this issue for deaf and hard of hearing students. There needs to be additional intervention research on vocabulary instruction. The third question was, What is the feasibility of using the Cornerstones approach for literacy instruction? Based on the data collected from the teacher debriefing and focus group interviews, we conclude that using the Cornerstones approach is highly feasible. The teachers praised the technology aspects of the project and noted the usefulness of the variety of instructional strategies and activities embedded in the materials. One teacher summed up by stating, “I have become Cornerstones’ biggest cheerleader. I mean, so much that [my school] want[s] me to make a presentation.”

Limitations of the Study
The present study had a few limitations. First, because of the limited number of student participants, a within-subject quasi-experimental


design was developed to analyze the outcome data. The carryover effect influenced the outcome data. To address this limitation, future studies should develop a more rigorous experimental design to evaluate the effectiveness of the Cornerstones approach with a larger population of students who are deaf or hard of hearing. Second, the assessment used for Word Knowledge might not be effective at capturing the gains made by students. Future investigations might employ a standardized measure of Word Knowledge more closely aligned with classroom instruction to obtain more accurate information. It might also be the case that the techniques and materials used to develop vocabulary knowledge were not effective for the group of students in our study. This is another issue for further intervention research on methods of vocabulary instruction. Third, assessment fatigue had a negative impact on the data. To alleviate this limitation, future investigations should shorten the assessment time and include assessments that are more user friendly. In the present study, students were pulled out for assessments while the rest of the students were watching movies. This had a negative impact on the performance of some students. Fourth, in the Typical approach, the reported average hours of instruction was 6.18 (SD = 2.01), whereas it was 12.37 (SD = 1.58) under the Cornerstones approach. In the present study, the duration associated with the lesson was determined by the individual teacher, and different teachers took varying amounts of time. In essence, teachers were free to determine the amount of time, materials, and other aspects, which constituted what they would “typically” do for a particular story. Future studies should

control the time factor to account for the influence of this variable.

In sum, the results of the Cornerstones intervention revealed that deaf and hard of hearing students demonstrated differences in beginning reading skills as measured by Word Identification; however, no differences were found for Word Knowledge, and mixed results were found for Story Comprehension. There were some carryover effects from the Cornerstones approach to the use of the Typical approach in subsequent experiments, which might have influenced the results. The carryover effects also supported the feasibility of using the Cornerstones approach for beginning literacy instruction. Additional research is needed on the effectiveness and feasibility of integrating technology and literacy instruction for children who are deaf or hard of hearing.

Note
The research for the present article was supported by funded grants from the Office of Special Education Programs, U.S. Department of Education, to the WGBH National Center for Accessible Media (Grants H327A98023 and H327A010005). The second author served as principal investigator. The opinions expressed do not necessarily reflect the position, policy, or endorsement of the supporting agency.

References
Andrews, J., & Mason, J. (1991). Strategies used among deaf and hearing readers. Exceptional Children, 57, 536–545.
Andrews, J. F., Winograd, P., & DeVille, G. (1994). Deaf children reading fables: Using ASL summaries to improve reading comprehension. American Annals of the Deaf, 139, 378–386.
Banner, A., & Wang, Y. (2009, April). A comparative analysis of the metacognitive reading strategies used by highly skilled and less skilled deaf readers. Paper presented at the annual meeting of the American Educational Research Association, San Diego, CA.

Barr, R., Kamil, M. L., Mosenthal, P., & Pearson, P. D. (Eds.). (1991). Handbook of reading research. New York, NY: Longman.
Cain, K., & Oakhill, J. (Eds.). (2007). Children’s comprehension problems in oral and written language. New York, NY: Guilford Press.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.
Conrad, R. (1979). The deaf schoolchild: Language and cognitive function. London, England: Harper & Row.
Davey, B. (1987). Postpassage questions: Task and reader effects on comprehension and metacomprehension processes. Journal of Reading Behavior, 19, 261–283.
Dyer, A., MacSweeney, M., Szczerbinski, M., Green, L., & Campbell, R. (2003). Predictors of reading delay in deaf adolescents: The relative contribution of Rapid Automatized Naming speed and phonological awareness and decoding. Journal of Deaf Studies and Deaf Education, 8, 215–229.
Ewoldt, C. (1986). What does “reading” mean? Perspectives for Teachers of the Hearing Impaired, 4, 10–13.
Fischler, I. (1985). Word recognition, use of context, and reading skill among deaf college students. Reading Research Quarterly, 20, 203–218.
Israel, S., & Duffy, G. (Eds.). (2009). Handbook of research on reading comprehension. New York, NY: Routledge.
Jackson, D. W., Paul, P. V., & Smith, J. C. (1997). Prior knowledge and reading comprehension ability of deaf adolescents. Journal of Deaf Studies and Deaf Education, 2, 172–184.
Kamil, M. L., Mosenthal, P. B., Pearson, P. D., & Barr, R. (Eds.). (2000). Handbook of reading research. Mahwah, NJ: Erlbaum.
King, C. (1984). National survey of language methods used with hearing-impaired students in the United States. American Annals of the Deaf, 129, 311–316.
King, C., & Quigley, S. (1985). Reading and deafness. San Diego, CA: College Hill Press.
LaSasso, C., Crain, K., & Leybaert, J. (2003). Rhyme generation in deaf students: The effect of exposure to Cued Speech. Journal of Deaf Studies and Deaf Education, 8, 250–270.
LaSasso, C., & Mobley, R. (1997). National survey of reading instruction for deaf or hard-of-hearing students in the U.S. Volta Review, 99, 31–58.
Luckner, J. L., Sebald, A. N., Cooney, J., Young, J., & Goodwin, S. (2005/2006). An examination of the evidence-based literacy research in deaf education. American Annals of the Deaf, 150, 443–456.
MacGinitie, W. (1969). Flexibility in dealing with alternative meanings of words. In J. Rosenstein & W. MacGinitie (Eds.), Verbal behavior of the deaf child: Studies of word meanings and associations (pp. 45–55). New York, NY: Teachers College Press.


Nielsen, D. C., & Luetke-Stahlman, B. (2002). Phonological awareness: One key to the reading proficiency of deaf children. American Annals of the Deaf, 147, 11–19.
Paul, P. (1996). Reading vocabulary and deafness. Journal of Deaf Studies and Deaf Education, 1, 3–15.
Paul, P. (1998). Literacy and deafness: The development of reading, writing, and literate thought. Needham Heights, MA: Allyn & Bacon.
Paul, P. (2009). Language and deafness (4th ed.). Sudbury, MA: Jones & Bartlett.
Paul, P., & Gustafson, G. (1991). Hearing-impaired students’ comprehension of high-frequency multimeaning words. Remedial and Special Education, 12(4), 52–62.
Paul, P., Stallman, A., & O’Rourke, J. (1990). Using three test formats to assess good and poor readers’ word knowledge (Technical Report 509). Champaign, IL: University of Illinois Center for the Study of Reading.
Pearson, P. D., Barr, R., Kamil, M. L., & Mosenthal, P. (Eds.). (1984). Handbook of reading research. New York, NY: Longman.
Pearson, P. D., & Johnson, D. (1978). Teaching reading comprehension. New York, NY: Holt, Rinehart & Winston.

Ruddell, R., & Unrau, N. (Eds.). (2004). Theoretical models and processes of reading (5th ed.). Newark, DE: International Reading Association.
Schirmer, B. (2000). Language and literacy development in children who are deaf (2nd ed.). Boston, MA: Allyn & Bacon.
Schirmer, B. R., & McGough, S. M. (2005). Teaching reading to children who are deaf: Do the conclusions of the National Reading Panel apply? Review of Educational Research, 75(1), 83–117.
Schirmer, B. R., & Winter, C. R. (1993). Use of cognitive schema by children who are deaf for comprehending narrative text. Reading Improvement, 30, 26–34.
Silvaroli, N. (1975). Classroom reading inventory (2nd ed.). Dubuque, IA: William Brown.
Silverman-Dresner, T., & Guilfoyle, G. (1972). Vocabulary norms for deaf children: The Lexington School for the Deaf education series, book VII. Washington, DC: Alexander Graham Bell Association for the Deaf.
Snow, C., Burns, S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.

Snowling, M., & Hulme, C. (Eds.). (2005). The science of reading: A handbook. Malden, MA: Blackwell.
Strassman, B. K. (1992). Deaf adolescents’ metacognitive knowledge about school-related reading. American Annals of the Deaf, 137, 326–330.
Trezek, B. J., & Wang, Y. (2006). Implications of utilizing a phonics-based reading curriculum with children who are deaf or hard of hearing. Journal of Deaf Studies and Deaf Education, 11, 202–213.
Trezek, B., Wang, Y., & Paul, P. (2010). Reading and deafness: Theory, research and practice. Clifton Park, NY: Cengage Learning.
Trezek, B. J., Wang, Y., Woods, D. G., Gampp, T. L., & Paul, P. (2007). Using Visual Phonics to supplement beginning reading instruction for students who are deaf/hard of hearing. Journal of Deaf Studies and Deaf Education, 12, 373–384.
Walter, G. (1978). Lexical abilities of hearing and hearing-impaired children. American Annals of the Deaf, 123, 976–982.
Wang, Y., Trezek, B., Luckner, J., & Paul, P. (2008). The role of phonology and phonologically related skills in reading instruction for students who are deaf or hard of hearing. American Annals of the Deaf, 153, 396–407.
