Interaction with 3D Objects in Augmented Reality on Android Platform


5.2 Applying the Camera Preview

There are several possibilities where to place the chosen camera preview on the screen, but in the case of the ARIO application the preview must occupy most of the screen area. A preview with an aspect ratio equal to that of the device screen obviously occupies the full screen. Previews that are horizontally or vertically shorter than the device's screen are uniformly shrunk or expanded and moved to the middle of the screen (figure 5.4). The OpenGL rendering window then occupies the same area as the camera preview.

Figure 5.4: The chosen camera preview is uniformly scaled if needed and then placed into the middle of the screen



5.3 Cyclops Problem

One problem that appears with the hardware camera, and which the application does not solve, is the placement of the camera lens on the device. Almost all mobile devices with AOS have the lens situated in the left half of their body from the user's perspective in landscape orientation. It means that the real part of the augmented reality is not captured as if we were looking through a pane of glass (figure 5.5).

Figure 5.5: The object in front of the device is not in the view angle of the camera


6 The Source of Virtual Objects

The OpenGL ES (Embedded Systems) rendering technology is used to display the virtual 3D scene. It is much the same as normal OpenGL but stripped of some functionality to match the capabilities of mobile devices [9]. This text contains no introduction or tutorial on programming for OpenGL, but there are many sources suggested for deeper information about the topic [10].

6.1 Possible Versions

There are two versions of OpenGL ES supported by current Android devices: OpenGL ES 1.1 and OpenGL ES 2.0. The difference between them is like the difference between the corresponding versions of normal OpenGL; mainly there is a different approach to the rendering pipeline using custom shaders written in GLSL (GL Shading Language) [11]. Older devices support only OpenGL ES 1.1, but newer ones, since Android 2.2, may support both. For application development it means that if a 3D accelerated application needs to run on both new and old devices, two different rendering engines must be written, one for each OpenGL ES version, because they are not compatible with each other.

6.2 Backward Compatibility

Because the ARIO application is intended to support as many devices as possible, there was an intention to write two separate rendering engines so that even older devices would be supported. At the beginning of the project development, according to the official Android dashboards, there was still a big share of devices on the market that supported only OpenGL ES 1.1. But nowadays there is a negligible number of devices supporting only the older version (figure 6.1). That is why the focus was redirected solely to version 2.0.

Figure 6.1: Supported OpenGL ES versions by devices on May 1, 2012 and May 1, 2013 [8]


6.3 Actual Renderer

The intention of supporting as many devices as possible requires the ARIO application to be able to run on devices with small operating memory and low-frequency processors. Besides the ARIO application, there are many other applications running simultaneously on each device. That is why a simple rendering technique is applied. Each object in a scene is rendered with its texture. One light coming from the virtual camera is added for a more plastic look. For better performance the diffuse lighting component on objects is computed per vertex.

6.4 Android Matrix Issue

Matrix operations are always needed when programming applications that use OpenGL. The most common are operations with objects in the scene, such as rotation or translation. These operations may be performed multiple times per second, and even on multiple objects. Usually a developer uses some complete library containing all the required methods for matrix operations. Such a library is already contained in the Android SDK (Software Development Kit) as the android.opengl.Matrix class. The library is very useful, but there is one problematic issue with it. AOS runs applications on virtual machines with a garbage collector. The most common performance killers in such an environment are the creation of objects and their destruction by the garbage collector, so every real-time application should avoid performing such operations too frequently. That is where some methods of Android's matrix library fail badly. As an example, take the matrix operation of rotation. When rotating a matrix by an angle, a rotation matrix must be computed and the original matrix is then multiplied by it. At this point the method from the library creates an array of sixteen 32-bit floating point numbers. It is created only for this operation and never used again, so after some time it is collected and destroyed by the garbage collector. In the ARIO application a user manipulates virtual objects, or even groups of objects, very often, and matrix operations such as rotation are called multiple times per action. The consequence is that the garbage collector has a lot of work to do, which hurts the performance of the application. For this particular case a custom matrix rotation method was written.
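As an illustration, a minimal sketch of such a custom rotation helper is shown below. It is not the actual ARIO code: the class and method names are invented, and the matrix math itself is still delegated to android.opengl.Matrix. The point is only that the scratch arrays are allocated once and reused, so no garbage is produced per call.

import android.opengl.Matrix;

public final class NoAllocMatrix {
    // Scratch buffers allocated once and reused for every rotation call.
    private static final float[] sRotation = new float[16];
    private static final float[] sResult = new float[16];

    /** Rotates matrix m in place by 'angle' degrees around the axis (x, y, z). */
    public static synchronized void rotateM(float[] m, float angle,
                                            float x, float y, float z) {
        Matrix.setRotateM(sRotation, 0, angle, x, y, z);   // fill the scratch rotation matrix
        Matrix.multiplyMM(sResult, 0, m, 0, sRotation, 0); // m * rotation, written into scratch
        System.arraycopy(sResult, 0, m, 0, 16);            // copy the result back into m
    }
}

Because the scratch arrays are static, the method is synchronized; in a single-threaded renderer the synchronization could be dropped.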

6.5 Setting up Virtual Camera

How the virtual camera is moved according to the movement of the physical mobile device is described in the chapters dealing with GPS and sensor value reading. The main task of the OpenGL renderer is to create a fitting 2D projection of the virtual objects. In other words, to apply such a projection to the virtual objects that, if they were real, they would look like real objects captured by the device's camera. Camera lenses in general are defined by multiple physical attributes which influence how we see the world through them. As mentioned before, there is a variety of mobile devices with different camera lenses which produce images of different quality. The main attribute of a camera lens in this case is the so-called field of view (FOV). It defines how much of the scene lying in front of the camera it can see. There are two fields of view for each lens, defined as the angle between opposite boundaries of the resulting image: left and right for the horizontal FOV and top and bottom for the vertical FOV. Only the vertical FOV is needed for the virtual camera, because the horizontal FOV is then derived from the aspect ratio of the resulting image and the vertical FOV (figure 6.2). For that purpose the Android API provides a method to get information about the FOV of the device's camera.
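The following sketch shows how the horizontal FOV can be derived from the vertical FOV and the aspect ratio; the API call comes from the old android.hardware.Camera interface of that era, and the preview size is only an illustrative value.

android.hardware.Camera camera = android.hardware.Camera.open();
android.hardware.Camera.Parameters params = camera.getParameters();

double vFovDeg = params.getVerticalViewAngle(); // vertical FOV reported by the camera driver
double aspect  = 640.0 / 384.0;                 // width / height of the chosen preview size

// The horizontal FOV follows from the vertical FOV and the aspect ratio.
double vFovRad = Math.toRadians(vFovDeg);
double hFovDeg = Math.toDegrees(2.0 * Math.atan(Math.tan(vFovRad / 2.0) * aspect));

camera.release();

The vertical FOV is then used to set up the projection matrix of the virtual camera (for example through Matrix.perspectiveM), so rendered objects subtend the same angles as real objects seen through the physical lens.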

Figure 6.2: Influence of FOV on the image

Many manipulations with objects depend on the orientation vectors of the virtual camera, which represent the orientation of the physical device. Those vectors are referred to as UP, LOOK and SIDE (figure 6.2).

6.6 Hierarchies of Virtual Objects

One of the features offered by the ARIO application is building hierarchies between model instances. Such a hierarchy looks like a tree or a forest of trees (figure 6.3). Every instance may have one superior instance, called the master instance, and multiple sub-instances. It allows a user to manipulate multiple objects simultaneously: moving or rotating one instance influences the position and rotation of all sub-instances beneath it in the hierarchy. For example, consider a scene with a house, a table within it, and a vase on some corner of the table. After moving the house 10 meters somewhere else and rotating the table 45 degrees, the vase should still be situated on the same corner of the table as before the manipulation, not floating somewhere in the air. A description of the possibilities to maintain hierarchies is provided in Appendix B.
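A hypothetical sketch of how such a hierarchy can be maintained is shown below: each instance stores its pose relative to its master, and the world pose used for rendering is recomputed by walking the tree. The class and field names are illustrative, not the actual ARIO data structures.

import android.opengl.Matrix;
import java.util.ArrayList;
import java.util.List;

class Instance {
    final float[] localMatrix = new float[16];  // pose relative to the master instance
    final float[] worldMatrix = new float[16];  // pose actually used for rendering
    final List<Instance> subInstances = new ArrayList<>();

    Instance() {
        Matrix.setIdentityM(localMatrix, 0);
        Matrix.setIdentityM(worldMatrix, 0);
    }

    /** Recomputes the world matrices of this instance and of everything beneath it. */
    void updateWorld(float[] masterWorld) {
        Matrix.multiplyMM(worldMatrix, 0, masterWorld, 0, localMatrix, 0);
        for (Instance sub : subInstances) {
            sub.updateWorld(worldMatrix); // the vase follows the table, the table follows the house
        }
    }
}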

Figure 6.3: Example of a hierarchy and its consequences

Some 3D models available in the ARIO application come from online libraries of free 3D models 1 2. Authors and sources are also mentioned in the credits section of the application.

1. http://www.sharecg.com
2. http://opengameart.org


7 Moving the Virtual World

Touch screens are a common component of modern mobile devices with AOS. With a simple touch of our fingers we can select any component of the graphical user interface, slide items, or use multiple fingers to perform multi-touch gestures. When introducing the first iPhone, Steve Jobs said that we will use the best pointing device in the world, the one we were born with 1. This chapter describes manipulation of virtual objects in virtual space by performing translation and rotation movements. The meaning of gestures is tied to the interactivity mode that is enabled at the time of touching the screen. Switching interactivity modes and features like selecting and deleting instances and creating object hierarchies are described in Appendix B.

7.1 Assuming direct control

One of the main features of the ARIO application is the possibility to manipulate virtual objects using the device's touch screen. The application needs to provide translation and rotation of objects and manipulation of the virtual camera. These are simple object manipulation options that form the basic functionality of desktop modeling programs such as 3D Studio Max, except that there is no keyboard and mouse attached to a mobile device. Because the application is intended to be used in open terrain with the user on their feet, the controls of the application must be simple to perform and the options for object manipulation must not be overwhelming. The Android API provides sufficient tools to monitor finger touches and to track the movement of multiple fingers on the screen. Every new touch of a finger gets a unique identification number by which it is recognized in further actions (figure 7.1).

Figure 7.1: Each finger touch has an ID unique amongst current touches

The ARIO application listens to addition, removal and movement actions of finger touches. The system processes only one action at a time (figure 7.2). For multi-touch gestures it means that the actions of the other fingers need to be memorized to identify the executed gesture.
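The following fragment is a minimal sketch, not the actual ARIO code, of how individual fingers can be tracked by their pointer IDs inside an Android View's onTouchEvent callback (the usual android.view.MotionEvent, android.graphics.PointF and java.util map imports are assumed).

private final Map<Integer, PointF> activeTouches = new HashMap<>();

@Override
public boolean onTouchEvent(MotionEvent event) {
    int actionIndex = event.getActionIndex();         // which finger this action refers to
    int pointerId = event.getPointerId(actionIndex);  // ID stays stable while the finger is down

    switch (event.getActionMasked()) {
        case MotionEvent.ACTION_DOWN:                  // first finger touches the screen
        case MotionEvent.ACTION_POINTER_DOWN:          // an additional finger touches it
            activeTouches.put(pointerId,
                    new PointF(event.getX(actionIndex), event.getY(actionIndex)));
            break;
        case MotionEvent.ACTION_MOVE:                  // reported for all active fingers at once
            for (int i = 0; i < event.getPointerCount(); i++) {
                activeTouches.put(event.getPointerId(i),
                        new PointF(event.getX(i), event.getY(i)));
            }
            break;
        case MotionEvent.ACTION_UP:                    // the last finger leaves the screen
        case MotionEvent.ACTION_POINTER_UP:            // one of several fingers leaves
            activeTouches.remove(pointerId);
            break;
    }
    return true;
}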

1. Apple Keynote 2007, iPhone Presentation



Figure 7.2: Simultaneous finger actions are processed one by one

For the sake of simplicity, only gestures with up to two fingers are applied. When more than two fingers are in active contact with the screen, any performed gesture is ignored. The transitions between one-finger and two-finger gestures are handled seamlessly (figure 7.3).

Figure 7.3: Example of a possible combination of gesture transitions

7.1.1 Possible Gesture Controls

1. Simple gesture: movement over the screen with one finger.

2. Circular gesture: circular movement of one of the two fingers around the other. It can be combined into circular movement of both fingers simultaneously.

3. Pitch gesture: movement of one or both fingers away from or towards each other along the line they form. The main difference between the circular and pitch gestures is the angle of a finger's movement with respect to the imaginary segment between the fingers. There are two 40-degree sectors for each action, which means there are 50-degree boundary zones between them (figure 7.4); a classification sketch follows the list.
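Below is a hedged sketch of how the two two-finger gestures can be told apart; the 20/70-degree limits correspond to the 40-degree sectors and 50-degree boundary zones mentioned above, and all names are illustrative, not the actual ARIO code.

enum GestureType { PITCH, CIRCULAR, NONE }

static GestureType classify(PointF moveVector, PointF fingerA, PointF fingerB) {
    double segmentAngle = Math.atan2(fingerB.y - fingerA.y, fingerB.x - fingerA.x);
    double moveAngle = Math.atan2(moveVector.y, moveVector.x);

    // Angle between the finger movement and the segment, folded into 0..90 degrees.
    double diff = Math.abs(Math.toDegrees(moveAngle - segmentAngle)) % 180.0;
    if (diff > 90.0) diff = 180.0 - diff;

    if (diff <= 20.0) return GestureType.PITCH;     // moving along the segment
    if (diff >= 70.0) return GestureType.CIRCULAR;  // moving perpendicular to it
    return GestureType.NONE;                        // inside the 50-degree boundary zone
}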



Figure 7.4: Difference between the recognition of circular and pitch gestures

7.1.2 Appearing Problems

With the behavior described before, the transitions between gestures are fluent. But one problem may occur when using multi-touch gestures: when two fingers are too close to each other, the system recognizes them only as one active finger touch (figure 7.5). This is something a user needs to be aware of.

Figure 7.5: The system sees two close fingers as one

The other problem during the use of the touch controls occurs when a user wants to remove a finger from the device screen. The finger being removed may still be considered moving, because while it is lifted off the screen the touch area gets smaller and the center of the touch is displaced. The system treats this as a finger movement that can influence the virtual scene in an unintended way. To solve this issue the application uses the option of the system to determine the size of the touch area. A single touch action is performed only if the contact area of the performing finger reaches a specific minimum value (figure 7.6).
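A minimal sketch of this check is shown below; the threshold value is purely illustrative and would in practice be tuned per device, since MotionEvent.getSize() returns a normalized contact area between 0 and 1.

private static final float MIN_TOUCH_SIZE = 0.05f; // illustrative, device-dependent threshold

boolean isIntendedTouch(MotionEvent event, int pointerIndex) {
    // Ignore actions of fingers whose reported contact area is too small,
    // which typically happens while a finger is being lifted off the screen.
    return event.getSize(pointerIndex) >= MIN_TOUCH_SIZE;
}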



Figure 7.6: Unintended movement is filtered by minimum contact area

7.2 Gesture into action

The ARIO application provides four interactivity modes which determine the meaning of the touch controls described before. Two of them serve to interact with the selected model instance and the rest to manipulate the virtual camera or the whole scene.

7.2.1 Interactivity Modes

1. Model relative to camera
This mode serves for manipulating objects depending mainly on the virtual camera's orientation and position (figure 7.7).

∙ Simple gesture moves the selected instance along the UP and SIDE vectors of the virtual camera.

∙ Pitch gesture moves the selected instance towards or away from the virtual camera along the line between them.

∙ Rotation gesture rotates the selected instance around its Y axis, i.e. the vector pointing towards the sky.

Figure 7.7: Model relative to camera


2. Model relative to compass
In this mode objects are moved along an imaginary world grid aligned with the cardinal directions read from the device's compass (figure 7.8).

∙ Simple gesture moves the selected instance in the cardinal direction in which the touching finger is moving.

∙ Pitch gesture moves the selected instance upwards or downwards along its Y axis.

∙ Rotation gesture rotates the selected instance around its Y axis.

Figure 7.8: Model relative to compass

3. Camera vertical
This mode is intended to be used when GPS movement tracking is turned on, which moves the virtual camera along the horizontal plane automatically (figure 7.9). Hence movement in other directions is forbidden.

∙ Simple gesture moves the virtual camera upwards or downwards according to the magnitude of the y components of its UP and SIDE vectors.

∙ Pitch gesture moves the virtual camera according to the magnitude of the y component of the camera's LOOK vector.

∙ Rotation gesture has no effect in this mode.

Figure 7.9: Camera vertical


4. Camera free-move
This mode allows the user to manually manipulate the virtual camera without moving the physical device (figure 7.10).

∙ Simple gesture moves the virtual camera along its UP and SIDE vectors.

∙ Pitch gesture moves it along the LOOK vector.

∙ Rotation gesture rotates the whole scene around the camera.

Figure 7.10: Camera free-move

How to switch interactivity modes and how GPS is used to move the virtual camera along the horizontal plane is described in Appendix B.


8 Environment Awareness

The main connection between the virtual scene and reality lies in the behavior of the virtual camera. The virtual camera needs to follow all changes in orientation and position of the physical device. This condition needs to be fulfilled to achieve the illusion that the mobile device is a transparent glass through which we can see virtual objects. Modern mobile devices with AOS possess many different sensors that help today's applications to be aware of the environment in various ways. Some of them are put to use by the ARIO application to modify the virtual camera's orientation. This chapter describes which sensors of currently available devices are used, how the data extracted from them influence the virtual camera, what problems occurred during development and how they were solved. The basic principles of how the sensors work are described in detail in the book Professional Android Sensor Programming, which serves as a great source for developers who would like to start with sensors on the Android platform [5]. This text mentions only the parts important for the reader to understand how it all works.

8.1 Android API Offer

The Android API allows the application to listen to any chosen sensor which is available on a device. The output of a sensor is defined by raw values that are periodically measured. Sensor equipment differs between devices, and the same sensors on different mobile devices differ in attributes like update frequency and accuracy.
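A minimal sketch of how an application can subscribe to two of these sensors is shown below (assumed to run inside an Activity; the chosen update rate is only an example).

SensorManager sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor magnetometer = sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);

SensorEventListener listener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values holds the three raw axis readings of the reporting sensor.
        if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            // copy magnetometer values for later fusion and filtering
        } else if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            // copy accelerometer values for later fusion and filtering
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
};

sensorManager.registerListener(listener, magnetometer, SensorManager.SENSOR_DELAY_GAME);
sensorManager.registerListener(listener, accelerometer, SensorManager.SENSOR_DELAY_GAME);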

8.2 Magnetic field

One of the sensors used by the ARIO application is the magnetometer. The basic principle of how this sensor works is very similar to the commonly known compass, because practically it is a compass. The built-in sensor measures the intensity of the magnetic field present in the environment. Its principle may be based on the Hall effect, magneto-resistive materials or the Lorentz force [5]. The magnetometer returns the values of the magnetic field along the three axes of the device in microtesla units (figure 8.1).

Figure 8.1: Vectors of the magnetometer

There are also the same problems and difficulties as with a compass. The application relies on Earth's global geomagnetic field, but that can be disturbed by local sources of magnetic fields or by metallic objects near the device. The interference may be so strong that the calibration of the inner compass becomes corrupted and the output values are consistently in error. It can be corrupted even by sliding out the hardware keyboard or connecting a USB cable. The sensor can be reset by moving the device in a figure-eight pattern along different axes (figure 8.2). This movement of the device ensures that there are many immediate changes to the magnetic forces acting on all three axes of the magnetic sensor.

Figure 8.2: 8 pattern movement for magnetometer calibration [15]

8.3 Acceleration

The next sensor frequently used by the ARIO application is the accelerometer. This sensor measures the device's acceleration. There are three output values which define the acceleration along three axes, just like with the magnetometer. The units of these values are m/s². The basic principle of how it works can be illustrated by a ball hanging on springs that form an orthogonal axis system. As the device accelerates in some direction, the ball strives to keep its momentum, which causes some springs to be stretched and some compressed by the ball. When a device is still and one of the axes of the accelerometer points towards the middle of the Earth, in most cases the output value belonging to that axis is approximately 9.8 m/s². It means 1 g, the force that is needed to overcome the gravitational acceleration of the Earth (figure 8.3). This sensor is commonly used to determine whether the screen of a device points towards the ground, or in games like ball-in-a-labyrinth. The sensors mentioned so far are the elementary basis for the ARIO application to determine the orientation of the device. They are present in most mobile devices with AOS and are supported since API version 3. They are pure hardware sensors returning measured values. There is also the gyroscope hardware sensor, which measures the rate of rotation around the three axes, but it is not present in many low-cost devices and its functionality is not crucial for the ARIO application.


Figure 8.3: Accelerometer principle [5]

8.4 Virtual sensors

Another group of sensors provided by AOS are synthetic, or software, sensors. They receive data from the physical sensors and process them into the required output. The additional processing may be some filtering or the calculation of deducible data. One such sensor usable by the ARIO application is, for example, the linear accelerometer. The data provided by a virtual sensor can also be computed manually; using such a sensor would mean relying on its built-in implementation for a crucial part of the ARIO application. The linear accelerometer essentially subtracts an estimate of gravity, derived from the previously measured data, from the newest data measured by the physical acceleration sensor. By this subtraction we get the so-called linear acceleration, i.e. acceleration values without the influence of Earth's gravity. But as described further in this text, this method does not fit the ARIO application.

8.5 Orientation of virtual camera

The orientation of the camera describes how it is rotated around its three local axes. Applied to the physical device, it needs to be determined where the physical camera heads horizontally (yaw), vertically (pitch), and how it is rotated around its heading vector (roll). The horizontal heading is related to the cardinal directions, as with a compass; hence further in the text the word azimuth is used instead of yaw (figure 8.4).



Figure 8.4: Three axes of orientation

To determine the correct azimuth, i.e. the exact direction with respect to the cardinal directions, there is one thing the application needs to take into account when used the way the ARIO application is. Mobile devices like phones and tablets are supposed to be used in portrait mode. For such a setting it is natural that a measured azimuth of 0 degrees means that the device is pointing towards north. But because the ARIO application requires the device to be positioned in landscape mode, the azimuth circle is rotated 90 degrees counterclockwise (figure 8.5).

Figure 8.5: Landscape mode changes the meaning of azimuth angles

8.5.1 Micro-Teslas to Angle Units

The output values of the magnetometer need to be converted to result values in degrees or radians to be easily used for adjusting the virtual camera's orientation. The API provides a function to get the rotation in the form of a 4x4 rotation matrix representing the orientation of the physical device. This matrix can be used directly for any rotation operations, or the angles around the three axes (azimuth, pitch and roll) can be extracted from it in radians. Because of more advanced filtering, the original rotation matrix is not used and the virtual camera is adjusted using azimuth, pitch and roll separately. The function that returns the rotation matrix requires as its parameters not only the geomagnetic data but also the acceleration values from the accelerometer. The magnetometer data determine the azimuth and the acceleration data determine the pitch and roll. With this fusion of data the device's orientation angles around all three axes can be determined. The fusion complicates any attempt at filtering the magnetic and gravity values and getting a steady orientation. As described in the next chapter, proper filtering is very important.
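A sketch of this conversion is shown below; accelerometerValues and magnetometerValues stand for the latest (already filtered) sensor readings, and the particular axis remapping is one common way to account for the landscape orientation, not necessarily the exact call used in ARIO.

float[] rotationMatrix = new float[16];
float[] remapped = new float[16];
float[] orientation = new float[3]; // azimuth, pitch, roll in radians

if (SensorManager.getRotationMatrix(rotationMatrix, null,
        accelerometerValues, magnetometerValues)) {
    // Remap the reference frame so that an azimuth of 0 still means north
    // when the device is held in landscape.
    SensorManager.remapCoordinateSystem(rotationMatrix,
            SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, remapped);
    SensorManager.getOrientation(remapped, orientation);

    float azimuthDeg = (float) Math.toDegrees(orientation[0]);
    float pitchDeg   = (float) Math.toDegrees(orientation[1]);
    float rollDeg    = (float) Math.toDegrees(orientation[2]);
}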


9 Signal Filtering

In the field of processing signals from hardware analog sensors, some kind of filtering is always needed because sensor readings are never crystal clear. The acquired analog signal is accompanied by interference from the environment and even from the device itself. For example, a consequence of such interference may be an azimuth value oscillating between -175 and +175 degrees despite the device heading directly north (figure 9.1).

Figure 9.1: Signal interference causes unstable output

The sensors in current mobile devices with AOS are really not accurate, and filtering in some form is needed. In general, almost every filtering method introduces some sort of latency: the more aggressively the output is filtered, the longer it takes for the output value to catch up with the real current value. Aggressive filtering means that only a very small part of the difference between the new and old data is added to the old data, therefore more time is needed for the data to catch up with the current measurement. On the other hand, it provides steady values despite intensive interference. Because the resulting orientation values depend on the magnetic field values and the acceleration values, both need to be filtered in such a way that the latency is similar. Otherwise some unwanted behavior may appear, which is described further in this chapter.

9.1 Unwanted Behavior

As mentioned before, there may be some latency of the output data caused by the speed of sensor updates or by aggressive filtering. A large change of the device's orientation does not update the magnetic and accelerometric data at once; because of the latency, they require some time to reach the actual sensor values. The time may not be the same for both, and one value array may be updated faster than the other. For example, a device is oriented north and towards the sky. After a fast change of its heading to west, pointing horizontally, the sensor values begin to update themselves. If the data from the magnetometer are filtered more aggressively than the acceleration data, after some time the data would show that the device heads horizontally but still somewhere between north and west. Another factor influencing the immediate orientation needs to be added: because the acceleration data are needed, the acceleration of the force moving and rotating the device enters the whole computation. All these factors cause unwanted behavior of the virtual camera when the physical device is moved (figure 9.2). The solution lies in a compromise between filtering aggressiveness and responsiveness and in the chosen filtering model.

Figure 9.2: Unwanted behavior of the virtual camera during pitch rotation

9.2 Specific filtering

To avoid unwanted behavior and at the same time have smooth and responsive updates of sensor values, a fitting filtering method and model need to be applied. The application was primarily developed on the HTC Wildfire S, where the unwanted behavior was suppressed and the data updates from the sensors are quite responsive. But as mentioned before, devices with AOS differ in hardware, which may require a different filtering model for processing the sensors' data, so some signs of unwanted behavior may appear on other devices. The filtering model may heavily influence the final quality of the output data. The chosen model depends on the used filtering method, which defines the actual algorithm of the filtering; the model describes how the method is used. During the development of the ARIO application many methods were applied and tested, but only one for each filtering situation was used. Further in the text, the basic principles of how they work and the advantages and disadvantages of their application for the ARIO application are mentioned.

9.3 Filtering methods

Each filtering method gave different results, and according to them the most fitting one was chosen. Hardware sensors in current mobile devices with AOS have their signal data heavily influenced by interference, and even data filtering does not completely clear the data of noise. As mentioned before, an output value oscillation in the range of 10 degrees may appear when measuring the azimuth. This behavior leads to the use of a more aggressive model for the filtering method to get a cleaner signal, but on the other hand it makes the data updates unresponsive. With respect to the user experience of the ARIO application, a fitting compromise between responsiveness, signal cleanness and the appearance of unwanted behavior needs to be found.

9.3.1 Simple Moving Average (SMA)

Principle: the current value is computed as an average of the last k measured values kept in a queue (figure 9.3).

Figure 9.3: Queue containing the last measured values, where k=5

Problem: the value of the parameter k:

∙ The smaller it is, the closer the result is to the unfiltered value (e.g. k=1).

∙ The larger it is, the bigger the update latency and the more computation is needed (averaging more values and rearranging the array of the last measured values).

∙ Some values may result in rotation jumps at moments inadequate to the user experience.

Because it violates the fluidity of the user experience and did not give particularly good results, this method was not chosen for the application's signal filtering.

Figure 9.4: Influence of parameter k on signal values [5]
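For completeness, a minimal sketch of the SMA in Java is shown below; the fixed-size circular buffer avoids rearranging an array on every sample, and k = 5 matches the example in figure 9.3.

class SimpleMovingAverage {
    private final float[] window;
    private int index = 0;   // position of the oldest value
    private int filled = 0;  // how many slots already contain measurements
    private float sum = 0f;

    SimpleMovingAverage(int k) {
        window = new float[k];
    }

    float update(float measurement) {
        sum -= window[index];        // drop the oldest value from the running sum
        window[index] = measurement; // overwrite it with the newest one
        sum += measurement;
        index = (index + 1) % window.length;
        if (filled < window.length) filled++;
        return sum / filled;
    }
}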

9.3.2 Weighted Smoothing

Principle: setting a level of importance to the newest value and to the value measured before (figure 9.3).

∙ It may be applied in two variants:

result = (alpha * oldValue) + ((1 - alpha) * newValue);
result = oldValue + alpha * (newValue - oldValue);

(Note that alpha plays complementary roles in the two forms: in the first it weights the old value, in the second the new one.)

∙ The only parameter to change here is the weight alpha. The larger alpha is (in the first form), the slower the result value approaches the actual sensor value, but the steadier the output is.

Applying this filtering method, the application showed quite satisfactory results. There is no "jumpy" behavior during immediate large orientation changes, and with a high enough alpha there is also little signal noise (figure 9.5). But because of the aggressiveness of filtering with a large alpha, there was also high update latency and signs of unwanted behavior of the virtual camera.

Figure 9.5: Influence of parameter alpha on signal values [5]

9.3.3 Kalman Filter

A Kalman filter can provide excellent signal processing results, but it is complicated to implement for all but the simplest examples. To use a Kalman filter, prior knowledge about the source of the data is needed. The algorithm is fed noisy measurements, some predictions about how the measurement's true value is behaving, maybe some knowledge about forces that are causing the system to change, and the Kalman filter algorithm can efficiently find an accurate estimate of the true value. Kalman filters are extremely flexible and can be used to smooth high-frequency noise or to isolate a periodic signal [5]. Because of the complexity of the topic, the whole principle is not contained in this text, but the reader is encouraged to read materials which may give a deeper insight into applying the Kalman filter [6]. A Kalman filter generally requires computation of matrices and vectors to achieve the result. For the ARIO application a very simple one-dimensional variant of the Kalman filter was used; it is applied separately to every dimension of the sensor output data. Such an implementation is also mentioned in the book Professional Android Sensor Programming [5], and a very well done example was posted on the blog of the Interactive Matter Lab web page [12], with a clear and simple explanation of how the one-dimensional implementation works and how the filtered values behave when the filtering model is changed.


The Kalman filtering method for the ARIO application in Java:

public float update(final float measurement) {
    // prediction update
    estimated_error = estimated_error + process_noise;
    // measurement update
    weight = estimated_error / (estimated_error + sensor_noise);
    value = value + weight * (measurement - value);
    estimated_error = (1 - weight) * estimated_error;
    return value;
}

measurement: the raw value from the sensor output
estimated_error: dynamic member of the method
process_noise, sensor_noise: members derived from the properties of the sensor's behavior
weight: weight of the gain, similar to the weighted filtering method
value: the filtered sensor value returned as the result

Mainly changing the values of the static members process_noise and sensor_noise influences the quality of the filtered values (figure 9.6). The specification of the model progressed by trial and error, observing the reaction of the virtual camera to the device's orientation change according to the prerequisites mentioned before. But as later testing of the application showed, one static model is not enough to cover all hardware configurations of different devices, whose sensors may behave differently.

9.3.4 Additional Filtering of Azimuth

Values from the magnetometer and accelerometer are filtered separately, value by value. Combining them, the application gets the values of the device's actual orientation. The filtering method and model influence the behavior of the virtual camera's orientation. During development, many variants of filtering models were applied and tested with respect to user experience. As mentioned before, the goal was to find a compromise. Even if some method or model had sufficient results, the azimuth orientation values were always unstable and heavily influenced by interference, which was still noticeable. The solution is to additionally filter only the azimuth values without influencing the other data. The azimuth output is 0 to 180 and 0 to -180 degrees; adding 360 degrees to the negative half makes it a 0 to 360 degrees output.


Figure 9.6: Difference between raw signal and filtered signal by extreme values of process_noise and sensor_noise [12]

Figure 9.7: Adding 360 degrees to negative values makes the data more comfortable for additional processing

The problem is that the orientation may flip over the 0 and 360 degree boundary. A method that maps any degree value back to the 0 to 360 degree scale needs to be implemented. Let's call it amod360:

result = (x < 0 ? x + 360 : x);
result = result - 360 * (floor(result / 360));


Then the actual filtering goes similarly to the methods mentioned before:

delta = amod360(measurement - value);
value = (delta > 180 ? amod360(value + weight * (delta - 360))
                     : amod360(value + weight * (delta)));

As with the weighted filtering method, adjusting the weight member makes the filtering more or less aggressive. After these steps the result value may be converted back to the original degree scale and applied to the azimuth orientation of the virtual camera. The ARIO application uses this method of additional filtering because the azimuth readings on all tested mobile devices were very noisy.


10 Accelerometer as Movement Detector

The text before was primarily focused on the orientation of the virtual camera. This chapter describes a concept of using the accelerometer to reflect the device's movement in the virtual camera. As its name states, it measures acceleration and not the translation movement of the device, so by its nature it is not ideal for movement monitoring. But it could still be useful for measuring short movements of a device held in the hands; such a configuration mostly produces hand shaking, which can be monitored by the accelerometer.

10.1 Linear acceleration

The accelerometer is the only hardware sensor capable of monitoring translation movement in three axes. But the force spent to keep the device from falling is captured too, because of Earth's gravitational force. So the main goal is to receive three zero values when the device is still with respect to the Earth. During the development, two methods were identified to transform the acceleration data into so-called linear acceleration data. The description of the following methods assumes perfectly filtered data, because the filtering does not influence the final results.

10.1.1 Subtraction of Measured Data

This method is recommended by the official Android developer guide to the API and by the book Professional Android Sensor Programming too. There is even a software sensor, described before, for this purpose in some devices. It takes the last saved data and subtracts them from the last measured data (figure 10.1). As a result, in the calm state of the device the linear acceleration data are zeros. Moving the device in the directions of the sensor's axes evokes a change in the data according to the direction of the movement.

Figure 10.1: The last measured data are subtracted from the data measured before

It may seem like a perfect solution for any application using the accelerometer in some way. For most of the basic functionality of the operating system or of applications (games, dismissing alarms by turning the phone) it works fine, but the device must not be rotated during the measuring. The problem with the ARIO application is that the device is freely rotated by the user's hand. According to the former definition of this method, a normal rotation around whichever axis evokes a change in the linear acceleration data despite there being no translation movement. So the problem of this method is that an unwanted linear acceleration is recorded just by rotating the device. This unwanted acceleration may however be filtered out by subtracting the expected unwanted acceleration from the linear acceleration data. For that purpose, the change in angle between an acceleration axis and the horizontal plane is needed. That angle is of course obtainable, and the expected acceleration change at any angle is easily computable if the assumed acceleration towards the middle of the Earth is approximately 9.8 m/s² (figure 10.2).

Figure 10.2: Acceleration vectors are rotated in a sphere with radius 9.8

But as the second method shows, it is not that easy in reality.

10.1.2 Subtraction of the Last Measured Data and Expected Data

The principle of this method is similar to the principle with expected data during rotation in the method before. Let's say there is some component of the ARIO application performing the computation. First, the component knows the current orientation of the device, as described in former chapters. The expected magnitudes of the acceleration vectors in a calm state can be derived from the orientation data, supposing the gravitational acceleration is e.g. 9.8 m/s² (figure 10.3). And let's define some generous error threshold of 1 m/s², so the application would consider linear acceleration only for values greater than 10.8 m/s² or smaller than 8.8 m/s². Then the component receives the pure acceleration data separately. Subtracting them from the computed expected values, we get the wanted linear acceleration.

Figure 10.3: Under perfect conditions the expected values of acceleration in all axes can be computed

This solution would be very fitting in theory, but there are always some difficulties in practice. The problem is what happens if the sensor measures, on some of its acceleration vectors, a default g (i.e. the vector pointing towards the sky) even beyond the set threshold. This actually happens on all mobile devices with AOS. During testing of the accelerometer sensors, values like 11.5 m/s² or even 7.9 m/s² appeared. This behavior is probably a consequence of faultiness of the built-in sensor in mainstream mobile devices. Statically increasing the threshold would not help, because the values could nondeterministically overleap it, and too high a threshold makes the application unable to detect gentle movements. Because of this nondeterministic behavior of the default g for each accelerometer vector, which can be changed by rapid movement or even temperature, this method of translation movement detection was not applied.
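Although this method was ultimately not applied, a sketch of the expected-value computation it relies on may clarify the idea. It assumes the Android convention that the rotation matrix from chapter 8 maps device coordinates to world coordinates, so the expected gravity readings are simply its last row scaled by g; the method name and the way the matrix is obtained are illustrative.

/** Expected accelerometer readings for a still device in the given orientation. */
static void expectedGravity(float[] rotationMatrix3x3, float[] outExpected) {
    final float g = 9.81f;
    // gravity_device = R^T * (0, 0, g), which reduces to the last row of R times g
    outExpected[0] = rotationMatrix3x3[6] * g;
    outExpected[1] = rotationMatrix3x3[7] * g;
    outExpected[2] = rotationMatrix3x3[8] * g;
}

Subtracting these expected values from the raw accelerometer output, and applying the threshold described above, would then yield the desired linear acceleration.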


11 Global Systems for Movement Detection

Even if there is no way to measure the movement of a device directly with its hardware sensors, it can be determined by knowing the device's location on Earth according to some external reference system. Knowing the location change reported by such a system in periodic updates allows the application to compute the movement of the virtual camera in the virtual scene. Android devices provide multiple ways to determine the device's location:

Cell-ID: base transceiver stations (BTS) of the mobile network located in range of the mobile device are used to define the location.

Internet network: uses the mobile network or Wi-Fi to determine the location.

GPS (Global Positioning System): a GPS receiver in the mobile device is used to determine the location via satellites.

With a combination of these methods current mobile devices are able to pinpoint the location relatively fast and accurately. An example of such a fusion of GPS and wireless networks is A-GPS (Assisted GPS) (figure 11.1).

Figure 11.1: Besides GPS, a device can use other location providers [16]

Because the wireless networks are not always available and their usage is not always free of charge, the ARIO application makes use only of GPS. GPS is also the most precise of these methods 1, but it is also the greatest power consumer.

11.1 GPS in General

The principle of how GPS works is well-established knowledge and there are many useful materials dealing with the topic, like the book GPS for Everyone: How the Global Positioning System Can Work for You [7], so this chapter provides only a brief description. Another useful source dealing with GPS on mobile devices is an article by Joseph Henzi [13] that serves as a useful summary of this topic. GPS satellites transmit ephemeris and almanac data every 30 seconds 2. The GPS receiver gets a signal from each GPS satellite. The satellites transmit the exact time the signals are sent. By subtracting the time the signal was transmitted from the time it was received, the GPS receiver can tell how far it is from each satellite. The GPS receiver also knows the exact position in the sky of the satellites at the moment they sent their signals. So given the travel time of the GPS signals from three satellites and their exact positions in the sky, the GPS receiver can determine its position in two dimensions – longitude and latitude (figure 11.2). For the altitude information a fourth satellite is needed.

Figure 11.2: Trilateration

In the whole process of receiving data from satellites there is a great emphasis on time, especially on its synchronization. The device's inner clock needs to be set as accurately as possible. That can be achieved by synchronization with some atomic clock in the world; there are already applications for AOS providing such synchronization, e.g. ClockSync. The synchronization must be done before every usage of the GPS receiver because of the time drift of the inner clock [13]. The precise time synchronization also influences the accuracy of all applications using GPS.

1. http://developerlife.com/tutorials/?p=1375
2. http://gpsinformation.net/main/almanac.txt


Considering the importance of time synchronization, there is also an interesting topic concerning how the theory of relativity influences the time precision 3.

11.2 GPS movement tracking

Knowing the last two locations, a movement vector of the device can be computed, which is then applied to the virtual camera. The geolocation coordinate system consists of 3 values: longitude, latitude and altitude. Longitude and latitude represent horizontal and vertical angles with the apex in the center of the geoid. Altitude represents the height above sea level and, together with the radius of the geoid, the distance from its center. Together they define a point somewhere on Earth (figure 11.3).

Figure 11.3: A point on Earth is defined by longitude, latitude and altitude [17]

For the purpose of movement tracking the application requires a method that returns the movement between two points as a 2D vector. There is no official method provided by the Android API, only a method calculating the distance between two points using the WGS84 ellipsoid 4. The Earth has the shape of a geoid with complicated terrain, but for simplicity the ARIO application computes with a perfect sphere.
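For reference, the API method mentioned above can be called as sketched below; it returns only a scalar WGS84 distance (and optionally bearings), not a 2D movement vector, which is why the custom algorithm that follows is used instead. The variable names are illustrative.

float[] results = new float[3];
android.location.Location.distanceBetween(oldLatitude, oldLongitude,
        newLatitude, newLongitude, results);
float distanceMeters = results[0]; // value used in the accuracy comparison below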

3. http://www.astronomy.ohio-state.edu/~pogge/Ast162/Unit5/gps.html
4. http://www.oosa.unvienna.org/pdf/icg/2012/template/WGS_84.pdf


The algorithm is the following:

rightMinLat1 = PI_HALF - latitude;
x1 = altitude * cos(longitude) * sin(rightMinLat1);
z1 = altitude * sin(longitude) * sin(rightMinLat1);
rightMinLat2 = PI_HALF - mOldLatitude;
x2 = mOldAltitude * cos(mOldLongitude) * sin(rightMinLat2);
z2 = mOldAltitude * sin(mOldLongitude) * sin(rightMinLat2);
moveX = x1 - x2; // North-South movement
moveZ = z2 - z1; // East-West movement

The algorithm requires conversion of latitude and longitude from degrees to radians, and the radius of the sphere, approximately 6 378 137 meters, needs to be added to the altitude. The results moveX and moveZ are in meters and are then applied to the horizontal movement of the virtual camera. The accuracy of the algorithm is sufficient for the needs of the ARIO application: in a comparison with the API method of distance calculation between two points, the Android method showed a distance of 156.17387 meters while the vector computed by the described algorithm was 157.41049 meters long. The altitude change is not applied to the virtual camera's position, because the altitude read from the GPS receiver is very faulty and its measurement precision is in meters, which would cause drastic changes in the altitude of the virtual camera with every GPS update.


12 Improvement by Image Processing

Because of the imperfections of the accelerometer, magnetometer and GPS, there is also a way to supplement their awareness of the environment: advanced image processing of the image captured by the device's camera may be the solution. At the time of writing this text there is already a framework, the PointCloud SDK made by the Swedish company 13th Lab, primarily developed for iOS 1. From its description it uses only the accelerometer and image processing, with the image processing responsible for the overall orientation and movement in space. During the writing of this text there was no opportunity to test the technology in person, but according to videos made by the developers and users it works really smoothly, with impressive results. Problems appear, however, when a device is rotated too fast: the anchor image pattern is lost and the application loses its overall orientation. I would suggest that both approaches could be merged in a way that erases the imperfections of both solutions. It takes some time for the application to process the image captured by the camera and to find patterns and thresholds in it. Changing the orientation of the device, for example by 90 degrees in whichever direction, causes a loss of the captured image pattern, and then the application needs to process the current camera image again. This method of image processing for fine movement tracking is not implemented in the ARIO application; adding it would make the application more useful indoors and for small-distance movements. Using sensors like the magnetometer and GPS may bridge the loss of orientation until the next image from the camera is processed (figure 12.1).

Figure 12.1: Using the compass and GPS, the device would still know its orientation and position despite losing the image anchor

1. http://pointcloud.io


13 User Interface

This chapter deals with the design of the user interface of the ARIO application, so that it does not stand in the way of the user experience. The meaning of the controls and buttons is described in Appendix A. Buttons primarily use a combination of black and white to be clearly visible on any kind of background, which is actually the image captured by the camera (figure 13.1). For special cases, intensive shades of basic colors like green, red, yellow and blue are used instead of the white background. Other elements of the user interface, like text strings, are shown as white characters on a black background for the same reason.

Figure 13.1: Button visible on all kinds of background color

The buttons are circular because, independently of their size, they always let some of the background show through, which is important for the ARIO application, and the signs within the circles remain clearly readable (figure 13.2).

Figure 13.2: The red area represents visible background

The layout of the buttons is also important. Because the center of the user's attention must be the whole image of the virtual objects and the camera preview, the buttons must be positioned at the border of the screen (figure 13.3).

Figure 13.3: The red area represents possible locations for buttons

The next important thing when considering mobile devices is how they will be held by the user. Tablets are large enough to be held comfortably almost any way, but the problem with mobile phones is trickier. Almost all mobile phones have their camera lens located on the left side from the user's view in landscape orientation. This makes holding the device in the right hand more comfortable, so as not to cover the camera lens. Therefore the left hand will supposedly be used for most of the frequent actions (figure 13.4).

Figure 13.4: Example of how the device can be held

Assuming such a setting, the most frequently used buttons should be placed at the left or bottom border of the screen, so that the user's view of the screen is not frequently covered while tapping them. With such a position of frequently used buttons there is also a lower chance of unintentionally hitting the Android default buttons situated on the right side. There are also buttons on the right and top borders of the screen, but their functionality is not intended to be used very often, or it can be reached by the thumb of the right hand (figure 13.5).

Figure 13.5: Possible consequences of different layouts for frequently used buttons


14 Conclusion

The goal of this thesis was to analyze the possibilities of current mobile devices with the Android operating system to run an augmented reality application providing interaction with virtual 3D objects, and to create such an application according to the results of the analysis. Specific solutions for problems were chosen with respect to supporting as many devices on the market as possible. One of the most difficult stages of the application's development was assessing the capabilities of the sensors available in current mobile devices to measure values useful for tracking the device's translation movement and orientation. The most problematic aspect of their use was choosing the optimal technique to filter the sensors' output data as a compromise between responsiveness and noise throughput of the output signal, which affected the behavior of the virtual scene. Virtual objects are displayed using a simple OpenGL rendering engine, and manipulation of them is provided by touch gestures via the touch screen and the user interface. The controls were designed for comfortable usage with a mobile device held in the hands while moving in terrain. The application provides options to take a screenshot, to save and load virtual scenes, and a help screen providing information about the controls of the application.

14.1 Possible Future Development

Augmented reality applications are currently a very popular topic, and such applications running on mainstream mobile devices have great potential for daily use. Each layer of the ARIO application can be a separate topic for improvement, by different approaches to current technology or by taking advantage of newer hardware with better performance. Virtual objects might be displayed with greater emphasis on detail and effects. The interactivity options provided by the user interface can be expanded. The greatest potential lies in improving the sensing of the environment, by using different filtering approaches or by adding other hardware elements to the overall sensor fusion, like advanced image processing of the camera preview. Future applications could even automatically calibrate the hardware sensors.


Bibliography

[1] UC Berkeley School of Information: Augmented Reality: Theory and Practice of Tangible User Interfaces. [cited April 1, 2013]

[2] O. Kutter, A. Aichert, C. Bichlmeier, S. M. Heining, B. Ockert, E. Euler and N. Navab: Real-time Volume Rendering for High Quality Visualization in Augmented Reality. New York: AMI-ARCS. 2008.

[3] J. Syrový: Augmented reality on Windows Phone – restaurants seeking application. Brno: Masarykova univerzita, Fakulta informatiky. 2013.

[4] R. Meier: Professional Android 4 Application Development. 3rd ed. Chichester: John Wiley & Sons, Ltd. 2012. ISBN 978-1118102275.

[5] G. Milette and A. Stroud: Professional Android Sensor Programming. 1st ed. Chichester: John Wiley & Sons, Ltd. 2012. ISBN 978-1118183489.

[6] V. M. Moreno and A. Pigazo: Kalman Filter Recent Advances and Applications. Manhattan: InTech. 2009. ISBN 978-953-307-000-1.

[7] L. C. Larijani: GPS for Everyone: How the Global Positioning System Can Work for You. Atlanta: American Interface Corporation. 1998. ISBN 978-0965966757.

[8] Google Android: Official site for Android developers – Dashboards. [cited March 13, 2013]

[9] The Khronos Group: OpenGL ES 2.0 Specification.

[10] K. Brothaler: OpenGL ES 2 for Android: A Quick-Start Guide. 2013. ISBN 978-1-93778-534-5.

[11] J. Kessenich, D. Baldwin and R. Rost: The OpenGL Shading Language. The Khronos Group. 2012.

[12] Interactive Matter Lab: Filtering Sensor Data with a Kalman Filter. 2009. [cited March 20, 2013]

[13] Joseph Henzi: LPT: Improve Android GPS Accuracy By Having The Accurate Time. 2012. [cited February 14, 2013]

[14] N. Navab: Improving Depth Perception and Perception of Layout for In-Situ Visualization in Medical Augmented Reality. [cited January 24, 2013]

[15] D. Ballinger: Windows Phone 7 – Compass Bearing. 2012. [cited March 12, 2013]

[16] Tomáš Beleš: Čo je A-GPS (Assisted GPS) v novom iPhone 3G, aké sú jeho výhody a TomTom pre iPhone. 2008. [cited March 18, 2013]

[17] Intergovernmental Committee on Surveying and Mapping: Fundamentals of Mapping. [cited February 6, 2013]

[18] How does GPS work?. 2012. [cited February 13, 2013]


A Screenshot Gallery




B Controls description

Figure B.1: User interface of the main screen

(1) Selected interactivity mode
(2) Delete selected instance
(3) Release selected instance from hierarchy
(4) Bind selected instance to master instance
(5) Add new instance
(6) Select instance
(7) Deselect instance
(8) Rename selected instance
(9) Turn ON/OFF GPS movement tracking
(10) Make screenshot
(11) Menu
(12) Number of fingers in contact with the screen
(13) Status of GPS movement tracking
(14) Heading
(15) Information that the device's orientation angles may cause unwanted behavior
(16) The direction in which the selected instance will be moved
(17) Name of the selected instance
(18) The selected instance is highlighted with yellow color


Figure B.2: Expanded selection of interactivity modes from left to right: (1) Model relative to camera (2) Model relative to compass (selected) (3) Camera horizontal (4) Camera free-move

Figure B.3: Expanded selection of deleting selected instance from left to right: (1) Close selection (2) Master instance and sub-hierarchy of the selected instance in hierarchy will be connected (3) Hierarchy will be cut at the selected instance (4) Sub-hierarchy starting at the selected instance will be deleted.

Figure B.4: Expanded selection of releasing selected instance from left to right: (1) Close selection (2) Master instance and sub-hierarchy of the selected instance in hierarchy will be connected (3) Hierarchy will be cut at the selected instance (4) Sub-hierarchy starting at the selected instance will be released from its master instance.


C Electronic Attachment

∙ Installation file
∙ Source codes

Contents of the electronic attachment can be found in the thesis' archive in the Information System of Masaryk University.

