Sensor Synchronization for AR Applications

Tuomas Kantonen∗
VTT Technical Research Centre of Finland

ABSTRACT

In this paper, we give a brief introduction to the sensor synchronization problem, highlight how the choice of hardware affects which synchronization methods can be applied, and present our ongoing research on sensor synchronization. Our work is based on estimating timestamps on the host processor, using a method suitable for mobile phones and other low-cost consumer-grade devices since it does not require special hardware support. We also describe an experiment to measure sensor synchronization performance using a simple calibration rig. Our initial results show that the estimated timestamps provide stable synchronization and a clear improvement in synchronization accuracy compared to the direct method of using sample arrival times as timestamps.

Keywords: Sensor synchronization; Temporal matching

Figure 1: Synchronizing two sensors with unknown clock offset (red arrow). The sensors have different delays (green arrows) from sample capture (solid black arrows) to sample arrival at host (dotted black arrows). Blue arrows indicate the required correction to achieve synchronization.

Index Terms: I.4.8 [Image processing and computer vision]: Scene Analysis—Sensor fusion

1 INTRODUCTION

The term sensor fusion refers to the art of estimating the state of an unknown dynamic process using measurements from multiple sensors. Benefits of using multiple sensors, instead of just one, include increased precision of the state estimate and robustness against changes in the process that a single sensor would not be able to handle. However, using more than one sensor introduces a new problem, as the sensors need to be synchronized.

The sensor synchronization problem should be carefully considered in every application where sensor fusion is applied, as even minor temporal errors can waste much of the “accuracy potential” of the sensors. In technical terms, the signal-to-noise ratio of a poorly synchronized system is less than optimal. No trick, however clever, can completely restore the signal quality in later stages of data processing. Moreover, noise introduced by temporal errors is hard to model, as the noise is highly dependent on the precise dynamics of the underlying process.

The behaviour and capabilities of the sensors affect how the synchronization can be done. As an example, Figures 1 and 2 illustrate some of the challenges related to sensor synchronization. Solid black arrows mark the actual sample timestamps while dotted black arrows mark the time the host processor receives the samples. In Figure 1, both sensors have built-in clocks providing sample timestamps. In this case, the main challenge is in estimating the relative clock offset (red arrow) from the measurements alone, as timestamps do not provide any information about the absolute time difference. The host clock can be used as an absolute reference since the delay from sample capture to sample reception (green arrows) remains constant. Figure 2 illustrates realistic behaviour of two consumer-grade USB sensors, neither containing reliable clock sources for timestamping. The only information available is the time the host receives the sample, but the transmission

∗e-mail: [email protected]

IEEE International Symposium on Mixed and Augmented Reality 2010 Science and Technology Proceedings, 13–16 October, Seoul, Korea. 978-1-4244-9345-6/10/$26.00 ©2010 IEEE


Figure 2: Synchronizing two USB sensors, neither providing sample timestamps. Transmit delays of the first sensor are corrupted by systematic variations. Arrows as in Figure 1.

delay from capture to reception of the first sensor varies systematically due to a mismatch between the sensor sampling rate and the USB bus data frame rate. In this example the problem lies in recovering the actual sample timestamps from arrival times that are corrupted by non-Gaussian noise.

2 SAMPLE TIMESTAMPING

The first step in successful sensor synchronization is determining the time when a sample was measured. The simplest way of “timestamping” samples is to force all measurements to happen simultaneously and to use a counter, incremented for each measurement, as the sample timestamp. The benefit of this method is accuracy and simplicity, as no processing is required by the host. However, the method can only be used when the hardware provides a means to trigger measurements using an external signal.

Some sensors contain a built-in clock and provide sample timestamps based on that clock. Timestamps from different sensors are then based on different physical clocks. Unless the clocks are somehow synchronized, they will eventually drift relative to each other, slowly introducing temporal error that needs to be corrected. Since clock synchronization methods require two-way communication, they cannot be used unless the sensor has the required logic built in (see e.g. the work of Patt-Shamir and Rajsbaum [4] for different aspects of clock synchronization). Instead, the drift can be eliminated by applying constant on-line resynchronization.
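As a simple illustration of resynchronizing a sensor clock against the host clock, the sketch below (not code from the paper; the sensor parameters and noise levels are invented) fits a linear model host_time ≈ a · sensor_time + b to noisy arrival times, capturing both the clock offset (b) and the clock skew that causes drift (a):

```python
import numpy as np

def fit_clock_mapping(sensor_ts, host_arrival):
    """Least-squares linear map: host_time ~ a * sensor_time + b.

    The slope a captures clock skew (drift rate) and the intercept b
    the clock offset, using noisy host arrival times as references.
    """
    a, b = np.polyfit(sensor_ts, host_arrival, 1)
    return a, b

# Simulated 100 Hz sensor whose clock runs 50 ppm fast with a 0.123 s
# offset; arrival times carry ~1 ms transmission jitter (all invented).
rng = np.random.default_rng(0)
sensor_ts = np.arange(0.0, 10.0, 0.01)
host_arrival = 1.00005 * sensor_ts + 0.123 + rng.normal(0.0, 1e-3, sensor_ts.size)

a, b = fit_clock_mapping(sensor_ts, host_arrival)
corrected = a * sensor_ts + b  # sensor timestamps mapped onto the host clock
```

Refitting the model over a sliding window of recent samples would keep the mapping current as the clocks continue to drift.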


Table 1: Prediction errors using different timestamp estimation algorithms. Reported errors are RMSE between the IMU-predicted rotation and the rotation detected from the camera image. Standard error deviations are in parentheses. LAD and KF are the two algorithms presented in Section 2; the “Raw” method uses the measured arrival times as sample timestamps.

Figure 3: A simple rig for calibrating clock offset between a camera and an IMU, from the point of view of the camera. The IMU is attached to a round wooden turntable with five ALVAR markers.

Our work is focused on the case when neither synchronous hardware nor reliable sensor clocks are available. We use a similar approach as Nilsson and Händel [3], where a Kalman filter (KF) is used to iteratively estimate the current sample timestamp based on the previous timestamp, the estimated sampling frequency, and the arrival time of the next sample. In addition, we implemented an off-line algorithm based on minimizing the least absolute difference (LAD) between estimated timestamps and the sample arrival times. Since all estimated timestamps are based solely on arrival times, they are all based on the same host clock. This eliminates any possibility of clock drift between sensors. Therefore, instead of constant resynchronization, only a single, one-time synchronization is required.

3 SAMPLE SYNCHRONIZATION

Once samples are timestamped, sensors can be synchronized by computing the offsets between different sensor clocks. If synchronous hardware or synchronized sensor clocks are used, there should be no offset and the sensors are already synchronized. If timestamps are based on non-synchronized sensor clocks, an on-line sample synchronization must be performed. For timestamps estimated on the host processor, off-line synchronization can be used instead, preferably using a special calibration rig for best synchronization accuracy.

One way to achieve on-line sample synchronization was demonstrated by Huber et al. [1], where cross-correlation between the samples was maximized to find the offset between two sensors. They noted that the sensor data needs to have enough variation to avoid wrong offset values. As any on-line system is likely to be controlled by a user, there is no guarantee that the data will contain the required variation. Off-line sample synchronization, on the other hand, can be done using a controlled calibration rig.
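A minimal sketch of host-side Kalman filter timestamp estimation in the spirit of Nilsson and Händel [3] is given below. The state model and noise levels are illustrative assumptions, not values from either paper: the state is (sample timestamp, sampling period) and the noisy measurement is the sample's arrival time at the host.

```python
import numpy as np

class TimestampKF:
    """Sketch of a Kalman filter that smooths sample arrival times
    into estimated timestamps, tracking the sampling period."""

    def __init__(self, t0, period, q=1e-9, r=4e-6):
        self.x = np.array([t0, period])      # state: [timestamp, period]
        self.P = np.eye(2) * 1e-2            # state covariance
        self.F = np.array([[1.0, 1.0],       # t_k = t_{k-1} + period
                           [0.0, 1.0]])
        self.Q = np.eye(2) * q               # process noise (assumed)
        self.H = np.array([[1.0, 0.0]])      # only the timestamp is observed
        self.R = r                           # arrival-time jitter variance (assumed)

    def update(self, arrival_time):
        # Predict the next timestamp from the previous one and the period.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct the prediction with the measured arrival time.
        y = arrival_time - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = (self.P @ self.H.T) / S
        self.x = self.x + (K * y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                     # smoothed timestamp estimate

# Invented demo: a 100 Hz stream with a 5 ms constant delay and 2 ms jitter.
rng = np.random.default_rng(1)
true_ts = np.arange(1, 501) * 0.01
arrivals = true_ts + 0.005 + rng.normal(0.0, 2e-3, true_ts.size)
kf = TimestampKF(t0=0.0, period=0.01)
est = np.array([kf.update(t) for t in arrivals])
```

After convergence, the estimated inter-sample intervals are far more regular than the raw arrival intervals; the constant part of the transmission delay remains and is removed by the one-time synchronization described above.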
Depending on what kind of sensors need to be calibrated, various setups have been successfully used to measure sensor data latencies, like the pendulum setup in the work of Liang et al. [2]. In our work, the aim is to create an affordable calibration rig for synchronizing a camera with an inertial measurement unit (IMU), simple enough for an end-user to use. The rig, illustrated in Figure 3, consists of a €5 turntable and a single sheet of paper.

4 SYNCHRONIZATION EXPERIMENT

To compare the synchronization performance of the different timestamp estimation algorithms, the calibration rig in Figure 3 was rotated using angular accelerations approximately corresponding to the rotation capabilities of a human head. Camera images and IMU


Prediction method    USB camera (mrad)    Firewire camera (mrad)
IMU + LAD            1.2 (0.29)           1.1 (0.25)
IMU + KF             1.3 (0.36)           1.2 (0.25)
IMU + Raw            22 (2.7)             3.1 (0.23)

samples were timestamped using the two timestamp estimation algorithms. Angular velocities measured by the IMU were integrated to get a delta rotation between two consecutive camera frames, and the rotation of the turntable was measured from camera images by detecting the markers attached to the turntable. The sensors were then synchronized by finding the clock offset that minimized the prediction error, i.e. the difference between the IMU-predicted frame-to-frame rotation and the rotation detected from the camera images.

Several measurements were made to test the synchronization accuracy of the different timestamp estimation algorithms, the synchronization stability over a period of five days, and how a varying host CPU utilization rate affects the synchronization accuracy. Two different cameras, a Philips SPC900NC (USB) and a Unibrain Fire-i BW (IEEE 1394), were synchronized against an Xsens MTx IMU to test the effect of the varying data latency of the USB bus.

The results of the synchronization accuracy measurements are given in Table 1. The calibrated clock offsets did not show any significant change during the five days of the experiment. The CPU processing load had a small effect on the synchronization of the USB camera: a high CPU load increased the spread of calibrated clock offsets, caused by increased noise in the measured sample arrival times.

5 DISCUSSION

Our initial experiment clearly shows that by using estimated timestamps, synchronization performance can be greatly improved compared to using the measured arrival times as such. The improvement for USB camera synchronization was notably large, as measured arrival times of USB devices tend to be very noisy.

Measuring the absolute accuracy of the estimated timestamps is difficult since the sensors do not provide ground-truth timestamps to compare against. Therefore we plan to implement an external timestamping apparatus that provides accurate camera and IMU timestamps, directly synchronized with the host clock.
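The offset search used in the experiment of Section 4 can be sketched in simplified one-axis form as follows. This is a toy illustration with invented signals: the real system compares full 3-D rotations recovered from ALVAR marker detections, not a scalar angle.

```python
import numpy as np

def calibrate_offset(cam_t, cam_angle, imu_t, imu_rate, candidates):
    """For each candidate clock offset, integrate the IMU angular rate,
    evaluate it on the shifted camera timeline, and compare frame-to-frame
    rotations against those seen by the camera; return the offset with
    the smallest RMSE (one rotation axis only)."""
    # Integrate IMU angular rate to a rotation angle (trapezoid rule);
    # uniform IMU sampling is assumed for simplicity.
    dt = imu_t[1] - imu_t[0]
    imu_angle = np.concatenate(
        ([0.0], np.cumsum((imu_rate[1:] + imu_rate[:-1]) * 0.5 * dt)))
    cam_delta = np.diff(cam_angle)            # rotation seen between frames
    errs = []
    for d in candidates:
        shifted = np.interp(cam_t + d, imu_t, imu_angle)
        errs.append(np.sqrt(np.mean((np.diff(shifted) - cam_delta) ** 2)))
    return candidates[int(np.argmin(errs))]

# Invented demo: camera timestamps lag the IMU clock by 30 ms.
imu_t = np.arange(0.0, 10.2, 0.01)            # 100 Hz IMU
imu_rate = np.cos(imu_t)                      # angular rate of angle sin(t)
cam_t = np.arange(0.1, 10.0, 0.04)            # 25 Hz camera
cam_angle = np.sin(cam_t + 0.03)              # true capture 30 ms after label
candidates = np.arange(-0.05, 0.0501, 0.005)
best = calibrate_offset(cam_t, cam_angle, imu_t, imu_rate, candidates)
```

The grid search over candidate offsets mirrors the experiment's procedure of picking the clock offset that minimizes the prediction error.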
Also, we plan to use the timestamp estimation method to implement a more robust IMU-aided visual camera tracking system.

REFERENCES

[1] M. Huber, M. Schlegel, and G. Klinker. Temporal calibration in multi-sensor tracking setups. In ISMAR ’09: Proceedings of the 2009 8th IEEE International Symposium on Mixed and Augmented Reality, pages 195–196, Washington, DC, USA, 2009. IEEE Computer Society.
[2] J. Liang, C. Shaw, and M. Green. On temporal-spatial realism in the virtual reality environment. In UIST ’91: Proceedings of the 4th Annual ACM Symposium on User Interface Software and Technology, pages 19–25, New York, NY, USA, 1991. ACM.
[3] J.-O. Nilsson and P. Händel. Time synchronization and temporal ordering of asynchronous sensor measurements of a multi-sensor navigation system. In Proceedings of PLANS 2010, May 2010.
[4] B. Patt-Shamir and S. Rajsbaum. A theory of clock synchronization (extended abstract). In STOC ’94: Proceedings of the Twenty-Sixth Annual ACM Symposium on Theory of Computing, pages 810–819, New York, NY, USA, 1994. ACM.
