Spatial Interfaces
Editors: Frank Steinicke and Wolfgang Stuerzlinger
GyroWand: An Approach to IMU-Based Raycasting for Augmented Reality

Juan David Hincapié-Ramos, University of Manitoba
Kasim Özacar, Tohoku University
Pourang P. Irani, University of Manitoba
Yoshifumi Kitamura, Tohoku University
Optical see-through head-mounted displays (OST-HMDs), such as Epson's Moverio (www.epson.com/moverio) and Microsoft's HoloLens (www.microsoft.com/microsoft-hololens), enable augmented reality (AR) applications that display virtual objects overlaid on the real world. At the core of this new generation of devices are low-cost tracking technologies using techniques such as marker-based tracking,1 simultaneous localization and mapping (SLAM),2 and dead reckoning based on inertial measurement units (IMUs),3,4 which allow us to interpret users' motion in the real world in relation to the virtual content for the purposes of navigation and interaction. These tracking technologies enable AR applications in mobile settings at an affordable price.

The advantages of pervasive tracking come at the cost of limiting interaction possibilities, however. Off-the-shelf HMDs still depend on peripherals such as handheld touch pads. Mid-air gestures and other natural user interfaces (NUIs) offer an alternative to peripherals but are limited to interacting with content relatively close to the user (direct manipulation) and are prone to tracking errors and arm fatigue.5 Raycasting, an interaction technique widely explored in traditional virtual reality (VR), is another alternative for interaction in AR.6 Raycasting generally requires absolute tracking of a handheld controller (known as a wand), but the limited
tracking capabilities of novel AR devices make it difficult to track a controller's location.

In this article, we introduce a raycasting technique for AR HMDs. Our approach is to devise a way to provide raycasting based only on the orientation of a handheld controller. In principle, the rotation of a handheld controller cannot be used directly to determine the direction of the ray as a result of intrinsic problems with IMUs, such as magnetic interference and sensor drift. Moreover, a user's movement in space creates a situation in which the virtual content and ray direction are often not aligned with the HMD's field of view (FOV). To address these challenges we introduce GyroWand, a raycasting technique for AR HMDs using IMU rotational data from a handheld controller. GyroWand's fundamental design differs from traditional raycasting in four ways:

■ interprets IMU rotational data using a state machine, which includes anchor, active, out-of-sight, and disambiguation states;
■ compensates for drift and interference by taking the orientation of the handheld controller as the initial rotation (zero) when starting an interaction;
■ initiates raycasting from any spatial coordinate (such as a chin or shoulder); and
■ provides three disambiguation methods: Lock and Drag, Lock and Twist, and AutoTwist.
This article is a condensed version of a paper presented at the 2015 ACM Symposium on Spatial User Interaction.7 Here we focus on our design choices and their implications and refer the reader to the full paper for the experimental results.
IMU for Raycasting

To provide raycasting for AR HMDs, we leverage the sensors located on the controller normally attached to such HMDs. This controller contains a nine-axis IMU that captures the controller's orientation. Our approach is to use this rotational data to control the pointing direction of the ray. The approach can be extended to HMDs without a handheld controller (such as the Microsoft HoloLens) by using external objects to capture rotational data, such as Bluetooth-connected wands, rings, or even smartphones.

The rotational data gathered from IMUs has several known problems, however, including noise, drift and/or magnetic interference, and axis mapping. For raycasting, sensor noise or interference means that the direction of the ray will exhibit jitter or trembling even when the controller is held still. Sensor drift means that the same controller orientation results in different ray directions as drift accumulates. Finally, the mismatch between the tracker coordinate system and the real world leads to an unnatural mapping between the movements of the controller and the ray, which negates one of the strengths of raycasting. Furthermore, IMUs cannot provide data about the handheld controller's actual location. A raycasting solution using rotational data provided by an external IMU should therefore address the following challenges:

■ Compensate for noise-induced jitter or interference.
■ Acknowledge the continuous effect of sensor drift.
■ Recognize the changing relation between the coordinate systems of the handheld controller and the virtual content.
■ Provide an alternative origin for the virtual ray, given that the actual controller position cannot be tracked.
■ Propose disambiguation techniques that leverage the degrees of freedom available to the controller.
GyroWand Design

GyroWand is a raycasting technique for self-contained AR HMDs. GyroWand directs a ray originating at a predefined location in virtual space using rotational data acquired from an IMU in the user's hand (see Figure 1). This section explains how GyroWand addresses the five challenges we just described.

Figure 1. GyroWand enables raycasting in head-mounted displays (HMDs). In this example, the ray origin is the user's chin. The ray direction is controlled using the inertial measurement unit (IMU) on the handheld controller.

Figure 2. Pronation and supination control apex aperture. These help reduce jitter while maintaining the isomorphism between the movements of the controller and the ray. At 1 m away, the selection cone spans 0 cm at a 0-degree aperture (hand twisted –45 degrees), 3.5 cm at 2 degrees (neutral), and 10.5 cm at 6 degrees (twisted +45 degrees).
Dynamic Apex

To address the first challenge, the effect of noise on jitter and selection accuracy, GyroWand uses a dual-pronged approach. First, it filters the IMU data with a moving window over the last five data points. This approach reduces jitter while maintaining the isomorphism between the movements of the controller and the ray. Second, we use a cone shape with a 2-degree aperture apex (as in earlier work8). Moreover, users can decrease the aperture apex to 0 degrees by twisting the controller 45 degrees inward (pronation). Similarly, users can increase the aperture apex to 6 degrees by twisting the controller 45 degrees outward (supination) (see Figure 2).

Alternatives to the manual control of the aperture apex also exist. For example, the system could monitor the noise level and automatically adjust the apex to it, providing a wider apex when the noise level is high and a narrower one when it is low. Another alternative would be to increase the apex aperture according to the phase of the selection task. In the ballistic phase, when the user is moving from one target to another with large movements, the apex can be wider. In the corrective phase, when the user is refining the selection with small movements, the apex can be narrower.
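As a rough illustration of this dual-pronged approach, the following Python sketch shows a five-sample moving window and a twist-to-aperture mapping. The class and method names are illustrative rather than taken from our implementation, and the linear interpolation between the 0-, 2-, and 6-degree endpoints is an assumption; the text only reports the endpoints.

from collections import deque
import numpy as np

class ApexController:
    """Sketch of the dual-pronged jitter handling: a five-sample moving
    window over IMU readings plus a twist-controlled cone aperture."""

    def __init__(self, window=5):
        # Moving window over the last five orientation samples.
        self.samples = deque(maxlen=window)

    def smooth(self, sample):
        """Average the recent readings; a component-wise average is only a
        reasonable approximation when consecutive orientations are close."""
        self.samples.append(np.asarray(sample, dtype=float))
        return np.mean(list(self.samples), axis=0)

    @staticmethod
    def apex_aperture_deg(twist_deg):
        """Map wrist twist to cone aperture: -45 degrees (pronation) gives
        0 degrees, neutral gives 2 degrees, and +45 degrees (supination)
        gives 6 degrees. Linear interpolation between these values is
        assumed."""
        t = float(np.clip(twist_deg, -45.0, 45.0))
        if t <= 0.0:
            return 2.0 + (t / 45.0) * 2.0   # 0 to 2 degrees
        return 2.0 + (t / 45.0) * 4.0       # 2 to 6 degrees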
A State Machine for Raycasting

GyroWand addresses the second (drift) and third (coordinate mapping) challenges using the state machine presented in Figure 3. GyroWand is always in one of four states: anchor, active, out-of-sight, and disambiguation. GyroWand transitions between states automatically via timeouts or when the user manually rotates or touches the controller.

Figure 3. State machine for IMU-based raycasting. The four states are anchor, active, out-of-sight, and disambiguation; transitions are driven by taps, rotation, TwistUp gestures, and timeouts.

The initial anchor state is set during the initialization process. In the anchor state, GyroWand points its ray to a predefined coordinate in space relative to the HMD called the anchor point. When the user moves or rotates his or her head in virtual space, the anchor point remains at the same position relative to the HMD and therefore the ray seems static to the user. Rotational data has no effect on the ray direction. The user interface should clearly show the ray as anchored and therefore disabled for interaction.

Tapping on the controller's touch pad activates the ray, transitioning from the anchor state to the active state. On transitioning, GyroWand captures the controller's orientation and uses it as the initial or baseline rotation—that is, the rotation at which the active ray points at the anchoring point. All further rotational movement of the controller affects the ray direction. GyroWand sends a hover event to all the virtual objects it crosses. A tap event (< 150 ms) sends a selection notification to the hovered object closest to the ray.
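The re-zeroing step can be expressed compactly with quaternions: the controller orientation captured at the tap becomes the baseline, and each new reading is interpreted relative to it. The following Python sketch is only illustrative; the names, the forward vector, and the frame conventions are assumptions, not details from the paper.

import numpy as np

def quat_conjugate(q):
    # q = [w, x, y, z]; the conjugate equals the inverse for unit quaternions.
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def rotate_vector(q, v):
    # Rotate vector v by unit quaternion q: q * (0, v) * q^-1.
    qv = np.concatenate(([0.0], v))
    return quat_multiply(quat_multiply(q, qv), quat_conjugate(q))[1:]

class GyroWandDirection:
    """Sketch of re-zeroing the IMU orientation when the ray is activated."""

    def __init__(self, anchored_direction=np.array([0.0, 0.0, 1.0])):
        self.baseline_inverse = np.array([1.0, 0.0, 0.0, 0.0])
        self.anchored_direction = anchored_direction

    def activate(self, imu_orientation):
        # Capture the controller orientation at the tap and treat it as zero;
        # drift accumulated before this moment no longer matters.
        self.baseline_inverse = quat_conjugate(imu_orientation)

    def ray_direction(self, imu_orientation):
        # Rotation of the controller since activation, applied to the
        # direction the anchored ray was pointing at.
        relative = quat_multiply(imu_orientation, self.baseline_inverse)
        return rotate_vector(relative, self.anchored_direction)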
Users can transition back to the anchor state in two ways: an immobility timeout and a TwistUp gesture. The immobility timeout (for example, 5 seconds) changes the state back to anchor when the total rotation on all axes is smaller than a given threshold. This timeout serves as an implicit interaction to disable the ray when the user places the controller on a surface (such as a table). The user can also issue a TwistUp gesture (akin to pulling a fishing rod).

In the presence of drift, a ray slowly changes direction even as the physical controller remains still, eventually leading to the ray leaving the display area. Typically, a user compensates for drift by rotating the controller in the opposite direction. However, when drift is large, users can reset the initial rotation by moving back into the anchor state (using TwistUp gestures) and back again into the active state (by tapping the touch pad). Moving between the anchor and active states, thus setting the handheld controller's initial or baseline rotation, helps users to manually deal with conflicting coordinate systems. If the user is moving around while GyroWand is active, he or she can reset the baseline rotation by going back into the anchor state and back again to active.

The GyroWand's ray can at times be out of the user's view—that is, outside the small FOV of the HMD. For example, the Epson Moverio BT-200 has a diagonal FOV of 23 degrees (approximately 11.2 degrees vertical). Accumulated drift or user movement in the real world easily causes the ray to exit this small window. User interactions with the handheld controller or a change of hand posture to reduce fatigue can also lead to a controller orientation that takes the ray out of sight. Recovering from the situation when the ray is out of the user's FOV can be challenging because the user
has no indication of how the handheld controller must be rotated to bring the ray back into view. GyroWand enters into the out-of-sight state automatically when it judges the ray to be out of the HMD’s FOV. In the out-of-sight state, touch input is ignored to avoid triggering selection events on virtual objects out of the user’s view. The user can bring the GyroWand back into the active state by rotating the controller until the ray is back into view. The user can also issue a TwistUp gesture to anchor the ray and then go back to active state. Finally, GyroWand transitions into the anchor state automatically after a given time threshold (about 5 seconds). This transition aims at reducing the time users spend “looking” for the ray. The disambiguation state helps users refine the actual target they want to select. GyroWand goes into the disambiguation state only when a tap is started from the active state and the ray crosses several virtual objects. In the disambiguation state, the ray is locked on the position and orientation where the tap started. The default selection target changes from the object closest to the ray to the one closest to the ray origin. Users can use any disambiguation method to iterate over the highlighted objects and specify a new selection target. Disambiguation ends when the user releases the finger from the touch pad, issuing the selection event on the current selection target and returning to the active state.
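The state machine itself is small. The Python sketch below wires up the four states and the transitions described above; the class and method names are ours, the 150 ms tap threshold and 5-second timeout are the values reported in the text, and the 1-degree immobility threshold is an assumption.

from enum import Enum, auto
import time

class State(Enum):
    ANCHOR = auto()
    ACTIVE = auto()
    OUT_OF_SIGHT = auto()
    DISAMBIGUATION = auto()

class GyroWandStateMachine:
    """Sketch of the four-state machine and its transitions."""

    TAP_MAX_S = 0.150        # taps shorter than 150 ms trigger a selection
    IDLE_TIMEOUT_S = 5.0     # immobility and out-of-sight timeouts

    def __init__(self):
        self.state = State.ANCHOR
        self.last_motion = time.time()

    def on_tap_start(self, hovered_objects):
        if self.state == State.ANCHOR:
            self.state = State.ACTIVE            # capture the baseline rotation here
        elif self.state == State.ACTIVE and len(hovered_objects) > 1:
            self.state = State.DISAMBIGUATION    # lock the ray and refine

    def on_tap_end(self, tap_duration_s):
        if self.state == State.DISAMBIGUATION:
            self.state = State.ACTIVE            # lifting the finger selects
        elif self.state == State.ACTIVE and tap_duration_s < self.TAP_MAX_S:
            pass                                 # select the object closest to the ray

    def on_twist_up(self):
        if self.state == State.DISAMBIGUATION:
            self.state = State.ACTIVE            # unlock without selecting
        elif self.state in (State.ACTIVE, State.OUT_OF_SIGHT):
            self.state = State.ANCHOR

    def on_update(self, ray_in_fov, rotation_delta_deg):
        now = time.time()
        if rotation_delta_deg > 1.0:             # assumed immobility threshold
            self.last_motion = now
        if self.state == State.ACTIVE and not ray_in_fov:
            self.state = State.OUT_OF_SIGHT      # touch input is ignored here
        elif self.state == State.OUT_OF_SIGHT and ray_in_fov:
            self.state = State.ACTIVE
        elif self.state in (State.ACTIVE, State.OUT_OF_SIGHT) and \
                now - self.last_motion > self.IDLE_TIMEOUT_S:
            self.state = State.ANCHOR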
On-Body Raycast Origin

Given that GyroWand uses only rotational data, it does not know the spatial location of the controller in the real world. Therefore, we must determine a suitable body location as the ray origin. Our initial idea was to use the eyes' center as the origin. However, the visual representation of such a ray is confusing at best: the ray is shown too close to the users' eyes, resulting in a poor 3D visual effect (often visible only to one eye or hard to fuse stereoscopically), it occupies considerable display real estate, and it causes eye strain. Therefore, we explored different body locations (see Figure 4).

The first ray origin location we considered was the middle side, which is located at (15, –40, 20)—that is, 15 cm to the user's dominant side, 40 cm below the virtual camera, and 20 cm in front of the user. This location most closely resembles a user's hand position when he or she is holding the HMD controller while standing. The second ray origin location was the chest at (0, –30, 10). This location is similar to the way smartphones are held when the user is typing. The third ray origin location was the shoulder at (15, –20, 0).
Figure 4. Possible virtual origins for GyroWand's ray: the HMD origin (0, 0, 0), chin (0, –12, 0), shoulder (15, –20, 0), chest (0, –30, 10), middle side (15, –40, 20), and an external origin from a tracker. For each ray origin, we list the distance from the HMD origin in centimeters.
The final ray origin location was the chin at (0, –12, 0). The chin location reduces the effect of the distance to the target on the ray direction. For example, when content is close to the eyes, the controller would have to be held almost vertical when coming from the chest and almost lateral when coming from the shoulder. These orientations cause a mismatch between the hand movement and the user's capacity to acquire a target. A ray coming from the chin will almost always move forward and within the HMD FOV. Nonetheless, these differences diminish rapidly as the distance to the target increases.

Our experimental results showed that an origin near the user's chin is the best choice regardless of the distance to the targets. The chin location was better than the other origins not only in terms of completion time, but we also observed a low error rate. Moreover, participants did not report any extra fatigue when the ray came from the chin, compared with other origins. An interesting observation is that raycasting from the actual controller location yielded some of the worst performance metrics—the slowest completion time and a higher error rate. This means that users actually struggled to select content within arm's reach with a ray coming from their hands, even when they could accommodate the controller in a more convenient position. This result could have important implications for the use of raycasting in other scenarios such as virtual environments. (A more detailed experimental analysis of this design is available in previous work.7)
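For reference, the candidate origins can be expressed as fixed offsets in the HMD's coordinate frame and re-placed in world space every frame as the head moves. The short Python sketch below illustrates this; the offsets come from Figure 4 (converted to meters), while the function name and the world-from-HMD transform are assumptions for illustration.

import numpy as np

# Candidate ray origins as (right, up, forward) offsets from the HMD, in
# meters (the values from Figure 4, which lists them in centimeters).
ORIGINS = {
    "chin":        np.array([0.00, -0.12, 0.00]),
    "shoulder":    np.array([0.15, -0.20, 0.00]),
    "chest":       np.array([0.00, -0.30, 0.10]),
    "middle_side": np.array([0.15, -0.40, 0.20]),
}

def world_ray_origin(hmd_position, world_from_hmd_rotation, origin_name="chin"):
    """Place the chosen on-body origin relative to the tracked HMD pose.
    world_from_hmd_rotation is a 3x3 rotation matrix; the x offsets assume a
    right-handed dominant side and would be mirrored for left-handed users."""
    offset = ORIGINS[origin_name]
    return np.asarray(hmd_position) + world_from_hmd_rotation @ offset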
Figure 5. Lock and Drag. Placing a finger down starts disambiguation. Dragging the finger up and down refines the selection. Lifting the finger confirms the selection. Objects not within the selection cone are in gray, objects in the selection cone are in light blue, the current selection candidate is dark blue, and the selected object is magenta. Pink indicates the active ray, and green indicates the disambiguation ray.
IMU-Based Disambiguation

Common approaches to raycasting disambiguation include marking menus and movement along the ray's axis.8,9 Although the use of marking menus such as the Flower Ray8 and similar approaches is an interesting direction, we are interested in approaches that do not require user interface changes. Moreover, moving the controller along the ray's axis is not an option because our design does not use external tracking, and IMU-based estimations of spatial movement are still unreliable for the consumer-level IMUs included in off-the-shelf HMD devices. Therefore, we focus our exploration on techniques that use IMU-based rotational movement and touch input.
Lock and Drag

Figure 5 shows our first technique: Lock and Drag (LD). Using the LD approach, the user directs the ray toward a set of objects including the object to select. These "hovered" objects provide graphical feedback. Putting a finger on the touch pad transitions GyroWand into the disambiguation state, locking the ray and providing feedback about the new state. As long as the finger presses on the touch pad, the ray position and orientation remain
unchanged. Initially, the object closest to the ray is the selection candidate. If the user does not remove the finger within a given ∆t (< 150 ms), the object closest to the ray origin becomes the selection candidate. If the user drags the finger in any direction away from the initial point of contact past a set distance threshold (50 pixels), the next object away from the origin becomes the selection candidate. Therefore, to select the first object on the ray path, a simple tap plus ∆t is enough. Dragging the finger one distance threshold selects the second object, two thresholds select the third, and so on. Dragging the finger back to the point of contact moves the selection candidate back to the previous object. Selection is triggered when the user removes the finger from the touch pad (see the magenta circle in Figure 5).

The locking gesture is a source of errors in itself. The finger movement needed to activate locking is often accompanied by a hand rotation and sometimes a full-body movement, a phenomenon known as the Heisenberg effect.10 GyroWand compensates for the Heisenberg effect by resetting the ray's rotation and position to their respective values 50 milliseconds before locking the ray and triggering the selection event. Nonetheless, users might still activate the ray at the wrong position because of their manipulation of the controller or normal operational or perceptual errors. We minimized that possibility by limiting the locking operation to situations in which the ray is actually intercepting an object and the intersection point with such an object is inside the users' FOV. Users can unlock the ray (out of the disambiguation state and into the active state) by issuing a TwistUp gesture.
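The drag-to-candidate mapping after the initial ∆t is straightforward to sketch. In the following Python fragment, the hovered objects are assumed to be sorted from the ray origin outward; the names and the clamping at the end of the list are illustrative choices, while the 50-pixel step is the threshold reported above.

DRAG_STEP_PX = 50      # distance threshold per candidate change (from the text)

def drag_candidate(drag_distance_px, hovered_along_ray):
    """Map the finger-drag distance to a selection candidate.
    hovered_along_ray lists the crossed objects sorted from the ray origin
    outward; index 0 is the default candidate once delta-t has elapsed."""
    steps = int(drag_distance_px // DRAG_STEP_PX)
    index = min(max(steps, 0), len(hovered_along_ray) - 1)
    return hovered_along_ray[index]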
Lock and Twist

Lock and Twist (LT) leverages the extra degree of freedom in wrist movement that is not used by GyroWand (see Figure 6). GyroWand uses flexion to rotate the ray about the x axis and ulnar and radial deviation for rotation about the y axis. LT uses pronation and supination (a twist of the wrist) as an available degree of freedom. Similar to the previous technique, the user points the ray toward a set of objects before moving into the disambiguation state. The user changes the selection candidate by turning the wrist outwards (supination) by a given threshold. Based on earlier work,11 we implemented a linear segmentation of the available rotational space into four segments (60 degrees available for supination, divided by a 15-degree threshold).
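A minimal sketch of this segmentation, again in Python, follows. The 60-degree range and 15-degree step are the values given above; the list name and the clamping behavior are assumptions.

SUPINATION_RANGE_DEG = 60.0   # usable supination range
SEGMENT_DEG = 15.0            # one candidate change per 15 degrees of twist

def twist_candidate(twist_since_lock_deg, hovered_along_ray):
    """Linear segmentation of the supination range into selection candidates.
    twist_since_lock_deg is the supination measured from the locking pose."""
    t = min(max(twist_since_lock_deg, 0.0), SUPINATION_RANGE_DEG)
    index = int(t // SEGMENT_DEG)
    return hovered_along_ray[min(index, len(hovered_along_ray) - 1)]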
Figure 6. Lock and Twist. Placing a finger down starts disambiguation, supination and pronation refine the selection, and lifting the finger confirms the selection.
AutoTwist

Figure 7 presents our final disambiguation technique: Autolock and Twist (AT), or simply AutoTwist. The difference between AT and LT is the way in which GyroWand enters the disambiguation state. In AT, the user points the ray toward the set of objects and issues a quick supination movement. After iterative testing, we settled on a rotational speed of 50 degrees/second. Once the trigger twist is detected, the system corrects the ray's orientation and position to those at the start of the movement.
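Detecting the trigger twist amounts to thresholding the supination speed. The Python sketch below uses two consecutive twist samples; in a full implementation the ray pose would also be rewound to the start of the movement, and the sign convention for supination is an assumption.

TRIGGER_SPEED_DEG_S = 50.0   # supination speed that enters disambiguation

def autotwist_triggered(twist_prev_deg, twist_now_deg, dt_s):
    """Return True when the wrist supinates faster than the trigger speed.
    A positive difference is assumed to correspond to supination."""
    if dt_s <= 0.0:
        return False
    return (twist_now_deg - twist_prev_deg) / dt_s >= TRIGGER_SPEED_DEG_S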
Figure 7. AutoTwist. A quick twist starts disambiguation, supination refines the selection, and a tap confirms the selection.
Experimental Analysis

Our experimental results show LT to be generally the fastest compared with the other techniques, without any negative effects on error rate or physical exertion. This advantage was clear even when AutoTwist showed a faster disambiguation time. We believe LT has an overall advantage because of the perceived difficulty participants encountered in locking the ray with the AutoTwist technique. Based on these results, we recommend using LT when the number of targets to disambiguate is small. The reason is that having to navigate through a large list of objects by twisting the wrist does not scale well and leads to uncomfortable hand positions. (A more detailed experimental analysis of this design is available in previous work.7)

In the future, one possible way to deal with scalability is a disambiguation mechanism that combines the LT and LD approaches. Yet, allowing the user to switch to the more time-consuming finger-dragging method in combination with a twist movement is likely inconvenient.
Our work has revealed three main implications of the GyroWand design. First, IMU sensing alone is a suitably good approach for target selection in AR HMD platforms. Second, raycasting using IMU data can originate at a location away from the controller. An origin close to the user's chin is a good candidate. This is particularly important for close targets, as we tested in our study. Our results support earlier research12 that found origins close to the eye to provide "good performance for both targeting and tracing tasks." Lastly, small hand rotations, particularly wrist pronation and supination, are good design options for disambiguating targets laid out in 3D. In combination, a mechanism to switch into a disambiguation mode, such as the LT approach, provides a suitable balance to mitigate unwanted movement and allow precise selection.
References
1. Y. Nakazato, M. Kanbara, and N. Yokoya, "Wearable Augmented Reality System Using Invisible Visual Markers and an IR Camera," Proc. 9th IEEE Int'l Symp. Wearable Computers (ISWC), 2005, pp. 198–199.
2. J. Ventura et al., "Global Localization from Monocular SLAM on a Mobile Phone," IEEE Trans. Visualization and Computer Graphics, vol. 20, no. 4, 2014, pp. 531–539.
3. A. Mulloni, H. Seichter, and D. Schmalstieg, "Handheld Augmented Reality Indoor Navigation with Activity-Based Instructions," Proc. 13th Int'l Conf. Human Computer Interaction with Mobile Devices and Services (MobileHCI), 2011, pp. 211–220.
4. P. Zhou, M. Li, and G. Shen, "Use It Free: Instantly Knowing Your Phone Attitude," Proc. 20th Ann. Int'l Conf. Mobile Computing and Networking (MobiCom), 2014, pp. 605–616.
5. J.D. Hincapié-Ramos et al., "Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions," Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI), 2014, pp. 1063–1072.
6. F. Argelaguet and C. Andujar, "A Survey of 3D Object Selection Techniques for Virtual Environments," Computers & Graphics, vol. 37, no. 3, 2013, pp. 121–136.
7. J.D. Hincapié-Ramos et al., "GyroWand: IMU-based Raycasting for Augmented Reality Head-Mounted Displays," Proc. 3rd ACM Symp. Spatial User Interaction (SUI), 2015, pp. 89–98.
8. T. Grossman and R. Balakrishnan, "The Design and Evaluation of Selection Techniques for 3D Volumetric Displays," Proc. 19th Ann. ACM Symp. User Interface Software and Technology (UIST), 2006, pp. 3–12.
9. K. Hinckley et al., "A Survey of Design Issues in Spatial Input," Proc. 7th Ann. ACM Symp. User Interface Software and Technology (UIST), 1994, pp. 213–222.
10. D.A. Bowman et al., "Using Pinch Gloves™ for Both Natural and Abstract Interaction Techniques in Virtual Environments," Proc. HCI Int'l, 2001, pp. 629–633.
11. M. Rahman et al., "Tilt Techniques: Investigating the Dexterity of Wrist-Based Input," Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI), 2009, pp. 1943–1952.
12. R. Jota et al., "A Comparison of Ray Pointing Techniques for Very Large Displays," Proc. Graphics Interface (GI), 2010, pp. 269–276.

Juan David Hincapié-Ramos completed this work while he was a postdoctoral researcher at the University of Manitoba and a visiting researcher at Tohoku University. He is now a human-computer interaction researcher at Lenovo. Contact him at
[email protected]. Kasim Özacar is a PhD candidate at Tohoku University’s Research Institute of Electrical Communication. Contact him at
[email protected]. Pourang P. Irani is a professor of human-computer interaction at the University of Manitoba. Contact him at [email protected]. Yoshifumi Kitamura is a professor in the Research Institute of Electrical Communication at Tohoku University and head of the Interactive Content Design Lab. Contact him at
[email protected]. Contact department editors Frank Steinicke at
[email protected] and Wolfgang Stuerzlinger at [email protected].
Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.