DESCRIPTION AND PRIMARY VALIDATION OF A GAME UNDERSTANDING TEST IN VOLLEYBALL

Mikko Häyrinen, Minna Blomqvist, Pekka Luhtanen, Kare Norvapalo and Tomi Vänttinen

KIHU – Research Institute for Olympic Sports, Jyväskylä, Finland

INTRODUCTION

Skilled performance in sport involves an integration of general and specific motor skills and knowledge. General motor skills and knowledge are elements that may contribute to successful performance in a variety of games. Sport skills and knowledge are specific when they are unique to one sport or a few sports. In volleyball, sport-specific skills include serving, receiving, setting, attacking, blocking and defending (Figure 1). Where and how to serve, where to receive, where to set, how to attack, which direction of attack to block and where to defend against a particular kind of attack are examples of sport-specific knowledge in volleyball (Griffin et al. 1997).

Figure 1. A model for the progress of a single volleyball point (Häyrinen et al. 2000).

The purpose of this study was to develop and establish the validity and reliability of a volleyball test procedure in order to be able to assess game understanding in volleyball.

MATERIAL AND METHODS

Development and description of the test
- Collecting the video material: 90 minutes of normal women's volleyball.
- Selecting the video samples: 17 different video segments (vignettes) were selected by four experts out of 30 tactical match situations.
- Test segments:
  * two on serving
  * three on receiving
  * three on setting
  * three on offence (Figure 2)
  * six on defence.

Figure 2. An offensive test situation.

- The video segments consisted of:
  * a 3-second still frame
  * 2–20 seconds of lead-up to the match situation
  * 10 seconds of video of the last field of view of the segment
  * 45 seconds of blank tape.
- The participants were asked to select from three possible options, which were represented by arrows drawn in a still frame.
- The participants then had to choose two arguments from a set of 7–9 written arguments as to why they believed their chosen option was correct.

Grading of responses
- Each response was awarded two, one or zero points.
- Each argument was awarded two, one or zero points.
- Each player received three scores:
  * the points from the selected options (SEO) (max. score 34 points)
  * the points from the two argument options (ARO) (max. score 51 points)
  * the total amount of points (TAP) (max. score 85 points).

Setting, participants, and procedure
- 16–19-year-old novice (n=21) and expert (n=22) players.
- The novices performed the test once.
- The experts performed a test-retest.
- Prior to each test, all participants were given the same instructions.
- The test began after four rehearsal situations.
- The video sequences were shown on a screen (1.2 x 1.2 metres) by a projector.

Establishing reliability and validity
- Content validity: demonstrating that the items in the test adequately represent all important areas of the content.
- Construct validity: an independent t-test and effect size.
- Internal consistency: the coefficient alpha technique.
- Reliability in the expert group: the test-retest method.
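The scoring described above can be sketched in code. This is an illustrative Python sketch, not the authors' tool: the 17-item count and the 0–2 option score follow the text, and the per-item argument maximum of 3 points (2 + 1 for the two chosen arguments) is an assumption inferred from the stated maxima (17 × 2 = 34 and 17 × 3 = 51).

```python
# Illustrative scoring sketch for the 17-item test.
# Assumed from the stated maxima: each item yields an option score
# of 0-2 points and an argument score of at most 2 + 1 = 3 points
# (the two chosen arguments), giving maxima of 34, 51 and 85.

N_ITEMS = 17

def score_player(option_pts, argument_pts):
    """option_pts, argument_pts: per-item point lists of length 17."""
    assert len(option_pts) == N_ITEMS and len(argument_pts) == N_ITEMS
    seo = sum(option_pts)      # selected-option score (max. 34)
    aro = sum(argument_pts)    # argument score (max. 51)
    tap = seo + aro            # total amount of points (max. 85)
    return seo, aro, tap

# A perfect protocol reaches the stated maxima:
perfect = score_player([2] * N_ITEMS, [3] * N_ITEMS)
```

Under these assumptions, `perfect` equals the stated maximum scores (34, 51, 85).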

RESULTS AND DISCUSSION

Content validity

The content of the test was selected by four experts from extensive video material comprising one match of women's volleyball. Two of the experts had international- or national-level playing experience and were also international-level coaches; the other two experts were national-level coaches. The selection of the situations was based on both the offensive and the defensive main strategies that are relevant in teaching volleyball (Griffin et al. 1997). It could therefore be argued that the test items selected were essential for the game and broadly covered its different areas.

Construct validity

Figure 3 presents the game understanding test scores for both groups. The effect sizes were 1.10 for SEO, 2.75 for ARO and 2.29 for TAP. According to Cohen (1969), all of these effect sizes can be considered large (ES > .80). These results clearly indicate that the test was able to detect differences between the groups in each game understanding variable.
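Effect sizes of this kind are conventionally computed as Cohen's d, the difference between group means divided by a pooled standard deviation. A minimal sketch follows; the group means and standard deviations below are invented placeholders, not the study's data (the paper reports only the resulting effect sizes).

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using a pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean2 - mean1) / pooled_sd

# Hypothetical SEO-like scores: novices (n=21) vs. experts (n=22).
d = cohens_d(mean1=20.0, sd1=4.0, n1=21, mean2=25.0, sd2=4.5, n2=22)
# By Cohen's convention, d > 0.80 counts as a large effect.
```

With the placeholder values above, d works out to roughly 1.17, which would count as a large effect under the ES > .80 criterion.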

[Bar chart: Novice and Expert group scores for SEO, ARO and TAP; y-axis: scored points (0–70); significance markers ** and ***.]

Figure 3. Comparison of the scores in the game understanding test between the groups (** = p