Design and Implementation of Haptic Tele-mentoring over the Internet

Jilin Zhou, Xiaojun Shen, Abdulmotaleb El Saddik, and Nicolas D. Georganas
Distributed and Collaborative Virtual Environments Research Laboratory (DISCOVER)
School of Information Technology and Engineering, University of Ottawa, K1N 6N5, Canada
E-mail: {jzhou/shen/abed/georganas}@discover.uottawa.ca
Abstract
Haptic tele-mentoring refers to an educational technique in which a mentor teaches a mentee, in a hand-by-hand manner over a communication network, through the coupling of two haptic devices. Essentially, the realization of tele-mentoring relies on the efficient transmission of haptic information, such that either end of the network can sense and/or impart forces. A few obstacles in developing tele-mentoring applications over the Internet are network delay, jitter, and packet loss. These impairments potentially affect the stability of the tele-mentoring system and degrade the feeling of guiding. This paper analyzes the design and implementation constraints of such a system and presents simulation and experimental results. To compensate for the network latency, a novel approach based on the behavior of the human arm trajectory is proposed to lower the overshoot and so improve the overall system stability. The experimental results show the effectiveness of the anti-overshoot algorithm.
1. Introduction

Recent advances in Human Computer Interface (HCI) technologies have gradually brought the sense of "touch" into interactive multimedia applications. The study of the sense of "touch" in the context of computer science is termed computer haptics [1]. A haptic device is an HCI that is able to provide the feeling of "touch" to its users. These devices were originally designed for bilateral tele-operation applications, in which force feedback facilitates the operator's control over the remote manipulator. However, they have been successfully integrated into many Virtual Reality (VR) applications such as manufacturing modeling, surgical training, entertainment, military simulations, etc. Among all these applications, haptic-enabled medical training simulations attract the most attention because of their tremendous social and practical value [2] [3]. At our lab, a distributed cataract surgery training simulator (presented in [4]) has been designed to facilitate medical residents' practice of surgical steps. Trainees located in geographically dispersed sites can join the simulation and watch the trainer performing the surgery in real time, or they can practice the procedures alone. Moreover, the trainees can be guided by the trainer's hand movement through the coupling of two haptic devices, as shown in Figure 1. This real-time hand-by-hand teaching technique is referred to as haptic tele-mentoring. Compared with traditional tele-operation, tele-mentoring has a more flexible control architecture, which can be adjusted on the fly, according to different learning scenarios and modalities, to improve its effectiveness.
Figure 1: Conceptual view of tele-mentoring

Essentially, tele-mentoring relies on the bilateral transmission of haptic information such that either end of the communication can sense and/or impart forces. One of the obstacles to enabling real-time tele-haptics is network impairment: delay, jitter, and packet loss. These impairments potentially affect the system performance and destroy the sense of "immersion". Early research in this field emphasized guaranteed stability at the cost of transparency [5]. In this paper, we analyze the design and implementation constraints of haptic tele-mentoring in detail. The implementation follows a generic system architecture proposed for tele-haptics in [6]. We then discuss an anti-overshoot prediction approach that compensates for the network latency based on human arm trajectory characteristics. The rest of this paper is organized as follows. Section 2 presents the related work. Section 3 analyzes the main issues of tele-mentoring. Section 4 describes network compensation for tele-mentoring and presents the anti-overshoot prediction approach. Finally, Section 5 summarizes the paper and discusses future research work.
2. Related Work

In [7], the authors discussed the effects of different system architectures on the performance of shared haptic VE applications. The solution for keeping graphic consistency among the users, typically implemented with UDP, consists of communication techniques that quickly synchronize both ends in the presence of packet loss, or reduce the jitter by buffering [7]. Another type of solution uses visual cues, called decorators, that inform the user about the presence of network lag, allowing him or her to adjust accordingly [8]. These solutions focus on improving graphic consistency among the users at the expense of real-time force-feedback performance. Time delay effects on the performance of tele-operation have been studied extensively in the last two decades. The main criteria used to evaluate tele-operation performance are transparency and stability. In [5], it is theoretically proven that a tradeoff has to be made between these two under the effects of time delay in the closed-loop control. Among all the algorithms available in the literature, the most recognized one was proposed in [16]; it guarantees the stability of the whole system under a constant time delay at the cost of a position deviation between the master and slave sides. Their method considered the communication channel as an independent dynamic system and studied the stability of the whole system based on the superposition property of passivity theory [16]. To reduce the position deviation, some prediction algorithms have been proposed [5] [17]. For Internet-based tele-operation, Brady and Tarn proposed a supervisory control architecture in which a time-forward observer and an appropriate delay model were developed [9]. Some researchers exploit different energy-conserving filters and variable time gains to guarantee the passivity of the communication channel [10] [11]. In [12], the sampling time and synchronization effects for tele-mentoring are discussed.
3. Design and Implementation

3.1 Background

For haptic-enabled VR applications, the goal is to reproduce a real-life "touch" experience as faithfully as possible, within the limits of the current hardware and computing resources. Due to the complexity of the human body, it is extremely difficult to design a haptic device that faithfully matches the real human tactile system. From the perspective of implementation, a haptic application should send force commands to the haptic device 1000 or more times per second to obtain an acceptably continuous feeling of "touch". Minsky et al. derived an expression for guaranteed stable haptic interaction using impedance control theory and then modified it with experimental results, considering the sampling and damping effects [13]. They noted the tradeoff between the sampling rate and the achievable simulation stiffness. Another critical point is the regularity of the sampling interval. Experiments have shown that a sufficiently high sampling rate with irregular intervals performs worse than a lower sampling rate with regular intervals.
Figure 2: Tele-mentoring architecture

One application scenario of the simulation is the tele-mentoring mode, in which the trainer's device is coupled with the trainee's, so that the trainee can follow the trainer's movement in real time. Due to the latency and jitter over the network, the entire tele-mentoring system could become unstable. Figure 2 shows a high-level architecture of tele-mentoring. Tele-mentoring is a special form of bilateral tele-haptics in which one person takes on the role of a mentor while the other acts as a mentee. The mentor's role is to aid the mentee in accomplishing the task in order to acquire essential skills. In this paper, "student" and "mentee" are used interchangeably. The legends in Figure 2 are described as follows:
• Subscripts $m$ and $s$ refer to Master (Mentor) and Slave (Mentee) respectively;
• MC and SC represent the controllers at the two sites;
• $x_m$ and $x_s$ denote the original device positions;
• $x_{dm}$ and $x_{ds}$ denote the delayed positions;
• $x_{cm}$ and $x_{cs}$ are the compensated positions;
• $F_{me}$ and $F_{se}$ are the VE output forces;
• $F_{mh}$ and $F_{sh}$ denote the forces exerted by the mentor and the mentee respectively;
• $F_{mc}$ and $F_{sc}$ denote the controller output forces.

A key design criterion of tele-mentoring, beyond the architecture, is the flexibility with which the mentor can take control of the mentee at any time. This control takes the form of indirectly grabbing the mentee's hand, through the loose or tight coupling of the two devices, and dragging it through a series of motions. For example, the mentee may navigate the VE and complete the procedures freely, as in a standalone simulation, without any guidance from the mentor. This design serves to reinforce the apprenticeship model currently in use, while allowing the tutor to play a more active role in guiding the student. In our simulation, two SensAble OMNI devices (SensAble Technologies Inc.) are used. Since they are only equipped with joint sensors, it is natural to adopt the position-error based tele-operation method. The forces sent to the motors of the devices at both sites are given as follows:
$$F_m = F_{mc} + F_{me} = K_{mp}(x_{cs} - x_m) + K_{md}(\dot{x}_{cs} - \dot{x}_m) + F_{me}$$
$$F_s = F_{sc} + F_{se} = K_{sp}(x_{cm} - x_s) + K_{sd}(\dot{x}_{cm} - \dot{x}_s) + F_{se}$$

The first two terms of each equation form a classical PD controller. $K_{mp}$ and $K_{sp}$ denote the adjustable proportional gains at the two sites, while $K_{md}$ and $K_{sd}$ denote the derivative gains. For example, if the mentee is more confident in the skills, he/she can lower $K_{sp}$ to feel more from the simulation itself. Similarly, the mentor is able to adjust $K_{sp}$ based on his/her judgment of the mentee's performance. The mentor can also set $K_{mp}$ and $K_{md}$ to zero in order to focus on performing the simulation. The reactive forces $F_{me}$ and $F_{se}$ from the VE model can be turned off, which results in pure trajectory guiding.
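As an illustration of how the coupling force above might be evaluated inside the haptic loop, the following is a minimal C++ sketch under our own assumptions (3-DOF Cartesian vectors, element-wise gains, and hypothetical names such as slaveForce); the paper itself does not list an implementation.

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Element-wise PD coupling force at the mentee (slave) site:
// Fs = Ksp*(xcm - xs) + Ksd*(d(xcm)/dt - d(xs)/dt) + Fse
Vec3 slaveForce(const Vec3& xcm, const Vec3& xs,
                const Vec3& vcm, const Vec3& vs,
                const Vec3& Fse, const Vec3& Ksp, const Vec3& Ksd)
{
    Vec3 Fs{};
    for (int i = 0; i < 3; ++i) {
        Fs[i] = Ksp[i] * (xcm[i] - xs[i])   // proportional term on the position error
              + Ksd[i] * (vcm[i] - vs[i])   // derivative term on the velocity error
              + Fse[i];                     // reactive force from the local VE
    }
    return Fs;
}
```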
3.2 Effects of the Network Impairments

In order to quantify the relationship of the delayed position information after transmission, $x_{dm} = f(x_m, t)$ and $x_{ds} = f(x_s, t)$, with respect to the original positions, two cases are considered.

Case I: The tele-mentoring controller runs in an event-based fashion. At both sides, the simulation advances one logical step every time a data packet arrives. The actual delay can be written as
$$\{\, t_d = nT_s + aT_s \mid n \in \mathbb{Z}^+,\ 0 \le a < 1 \,\}.$$
Due to the nature of the discrete sampling, the relationship can be described as
$$x_{dm}(k) = x_m(t_k - L_k T_s), \qquad x_{ds}(k) = x_s(t_k - M_k T_s),$$
where $k$ denotes the logical step of the simulation, and $L_k \in \mathbb{Z}^+$ and $M_k \in \mathbb{Z}^+$ represent the delays at the $k$-th step in the two directions respectively. Figure 3a shows that the original sampled position signal is either compressed or stretched. In addition, packet loss and reordering are also possible.

Case II: The simulation actively reads the incoming buffer every $T_s$ seconds. Then the relationship can be described as
$$x_{dm}(t_k) = x_m(t_k - L_k T_s)\, I\{V_s(t_k) > 0\}, \qquad x_{ds}(t_k) = x_s(t_k - M_k T_s)\, I\{V_m(t_k) > 0\},$$
where $I$ is an indicator function defined by
$$I(S) = \begin{cases} 1, & \text{if the statement } S \text{ is true;} \\ 0, & \text{otherwise,} \end{cases}$$
and $V(t_k)$ denotes the number of packets arriving during the time interval $[t_{k-1}, t_k)$. Figure 3b shows the same original signal received in Case II; it is easy to see that $V(t_3) = 3$, while $V(t_0) = V(t_1) = V(t_2) = 0$, etc.

Figure 3: (a) Case I; (b) Case II

From Figure 3, we can see that the received signals are quite different from the original ones, so system stability could be a concern from the standpoint of closed-loop control theory. As mentioned above, since the device is sensitive to sampling jitter, we adopt the second method, which reads the input buffer at a fixed rate.
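A minimal sketch of the Case II read-out is given below, assuming the network thread fills a buffer and the controller thread drains it once per period $T_s$. Holding the previous value when no packet arrives is our reading of the indicator term (the paper does not spell out the zero-packet case), and the names used here are hypothetical.

```cpp
#include <array>
#include <deque>

using Vec3 = std::array<double, 3>;

// Case II sketch: the controller reads its input buffer once every Ts seconds.
// If V(tk) > 0 packets arrived in [t_{k-1}, tk), the newest sample is adopted;
// otherwise the previously received sample is held.
struct FixedRateReceiver {
    Vec3 delayedPosition{};   // x_d(tk) delivered to the controller

    // Called once per control period with everything that arrived since the
    // previous call (the buffer is filled asynchronously by the network thread).
    void onTick(std::deque<Vec3>& incoming)
    {
        if (!incoming.empty()) {                 // V(tk) > 0
            delayedPosition = incoming.back();   // keep only the most recent packet
            incoming.clear();
        }
        // else: V(tk) == 0, hold delayedPosition from the previous tick
    }
};
```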
3.3 Velocity Estimation

In a standalone haptic application, the velocity of the device gives momentum information to the simulation; for example, we feel more pain when we hit a wall at high speed than at low speed. Since the haptic device is designed with very low mechanical friction, software damping, which depends on the velocity, is sometimes needed for a stable haptic interaction. For tele-mentoring, we need the velocity information to minimize the overshoot and maintain the stability of the tracking. Ideally, transparent position-error based tele-operation with force feedback needs all of the position, velocity, and acceleration information [5]. However, most currently available haptic devices are not equipped with tachometers, so the velocity information has to be derived from the position measurements.

To understand the issue better, consider a joint sensor with a resolution of $\theta$. Let $T_s$ be the sampling interval and $N$ be the number of increments measured during the interval $T_s$. The velocity estimated with the first-order difference equation will be $N\theta / T_s$. Thus, the quantization step for the velocity estimate is $\theta / T_s$: the higher the sampling rate, the coarser the velocity resolution. The acceleration estimate is even worse, with a quantization step of $\theta / T_s^2$. Apparently, this is sure to fail, since the high-frequency noise components are correspondingly amplified. In [14], different velocity estimation methods are compared and the authors propose an adaptive-window least-squares estimation algorithm, in which the filter window length is based on the device trajectory itself. If the speed is low, the filter length needs to be longer to suppress the high-frequency noise, while for a high-speed trajectory the filter length is shorter to capture the transient behavior. This adaptive window length is especially important when the sampling rate is of the order of 1000 Hz or higher [14]. The velocity is computed as
$$\hat{v}(n) = \sum_{i=0}^{M-1} w(i)\, u(n-i),$$
where $w(i)$ are the filter coefficients and $M$ denotes the filter length. The filter coefficients are computed with the standard least-squares method and are given by
$$w(i) = \frac{12\left(\frac{M-1}{2} - i\right)}{T_s\, M (M^2 - 1)}, \qquad 0 \le i \le M-1.$$

For simplicity, we divide the human arm trajectory into three classes in terms of the magnitude of the velocity: low, medium, and high. Table 1 lists the selection of the length $M$ based on experimental results with a 1 kHz sampling rate.

Table 1: Trajectory classification
Class   | Speed (mm/s)      | M
Low     | $v \le 300$       | 20
Medium  | $300 < v \le 600$ | 15
High    | $600 \le v$       | 10

Figure 4 shows the results of three different velocity estimation methods: first-order difference, Kalman filtering, and adaptive-window least squares. Only the x direction trajectories are shown, and the sampling rate is 1 kHz. Apparently, the first-order difference estimate is not acceptable because of its high-frequency noise. Compared with the Kalman filtering method, the adaptive-window filtering method is superior in smoothness, especially during changes of direction.

Figure 4: Velocity estimation
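The following C++ sketch combines the least-squares filter above with the window lengths of Table 1. It is a simplified, single-axis illustration under our own naming (VelocityEstimator, windowLength); the actual adaptive-window scheme of [14] chooses the window more carefully.

```cpp
#include <cmath>
#include <cstddef>
#include <deque>

// Windowed least-squares velocity estimate
//   v(n) = sum_i w(i) * u(n - i),  w(i) = 12*((M-1)/2 - i) / (Ts*M*(M^2 - 1)),
// with the window length M picked from Table 1 using the previous speed estimate.
class VelocityEstimator {
public:
    explicit VelocityEstimator(double Ts) : Ts_(Ts) {}

    double update(double u)                    // u: newest position sample (mm)
    {
        history_.push_front(u);
        if (history_.size() > 20) history_.pop_back();    // keep at most M_max samples

        const std::size_t M = windowLength(std::fabs(lastV_));
        if (history_.size() < M) return lastV_;           // not enough samples yet

        double v = 0.0;
        for (std::size_t i = 0; i < M; ++i) {
            const double w = 12.0 * ((M - 1) / 2.0 - i)
                           / (Ts_ * M * (double(M) * M - 1.0));
            v += w * history_[i];                          // history_[i] == u(n - i)
        }
        lastV_ = v;
        return v;
    }

private:
    static std::size_t windowLength(double speed)          // Table 1, speed in mm/s
    {
        if (speed <= 300.0) return 20;    // low speed: long window suppresses noise
        if (speed <= 600.0) return 15;    // medium speed
        return 10;                        // high speed: short window tracks transients
    }

    double Ts_;
    double lastV_ = 0.0;
    std::deque<double> history_;          // history_[0] is the most recent sample
};
```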
3.4 Time Management

For hardware-in-the-loop simulations, time management can only be achieved based on the wall-clock time. At the same time, to guarantee a high and even haptic rendering rate for the interactions between hardware devices and/or human operators, a scheduling mechanism with high timing resolution is required. Specifically, the system timestamps obtained from the Windows OS are limited to a resolution of 10 or 15 milliseconds, depending on the underlying hardware [21]. To achieve a better resolution, we can query the performance counters and combine them with the system time. For even sampling, the Windows multimedia library fortunately provides a timer with 1 ms resolution; these multimedia timer services allow scheduling timer events at a higher resolution than other timer services. Another way to improve the scheduling is to use a real-time operating system. Because the system clocks of different computers drift relative to each other, periodic clock synchronization is a must to ensure that the timestamp values of the distributed haptic stations are properly synchronized. The resulting effect can be regarded as application-level jitter and it will cause instability. The problem of time synchronization in distributed systems has been studied extensively in the literature. NTP, the Network Time Protocol, currently used worldwide for clock synchronization, achieves synchronization within tens of milliseconds in the best case [15].
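For illustration, a sub-millisecond timestamp can be obtained on Windows roughly as follows, by anchoring the performance counter to the system clock in the spirit of [21]. This is a sketch, not the HRTC code; the Win32 calls shown (QueryPerformanceCounter, QueryPerformanceFrequency, GetSystemTimeAsFileTime) are standard, but the anchoring scheme is our own simplification.

```cpp
#include <windows.h>

// High-resolution timestamp source: anchor the sub-millisecond performance
// counter to the coarse system clock once, then extrapolate from the anchor.
class HighResClock {
public:
    HighResClock()
    {
        LARGE_INTEGER f;
        QueryPerformanceFrequency(&f);
        freq_ = static_cast<double>(f.QuadPart);

        FILETIME ft;
        GetSystemTimeAsFileTime(&ft);                     // ~10-15 ms granularity
        anchorSys100ns_ = (static_cast<unsigned long long>(ft.dwHighDateTime) << 32)
                        | ft.dwLowDateTime;

        LARGE_INTEGER c;
        QueryPerformanceCounter(&c);
        anchorQpc_ = c.QuadPart;
    }

    // Current time in 100 ns units since the FILETIME epoch, at QPC resolution.
    unsigned long long now100ns() const
    {
        LARGE_INTEGER c;
        QueryPerformanceCounter(&c);
        const double elapsedSec = (c.QuadPart - anchorQpc_) / freq_;
        return anchorSys100ns_ + static_cast<unsigned long long>(elapsedSec * 1e7);
    }

private:
    double freq_ = 1.0;
    unsigned long long anchorSys100ns_ = 0;
    LONGLONG anchorQpc_ = 0;
};
```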
3.5 Implementation

Our implementation of tele-mentoring follows a generic system architecture proposed by Shen et al. [6]. The architecture clearly separates the haptic rendering from the VE application (Figure 5). In the original design, the Haptic Real Time Controller (HRTC) is implemented over a real-time operating system to improve the time scheduling; our implementation runs on a Windows NT system, and we will move it to a real-time operating system soon. Essentially, the HRTC handles all the haptic rendering issues, including tele-mentoring. The main functionalities of the HRTC include:
• Local haptic device rendering
• Communication with the remote HRTC
• Communication with the local VE station
• Network delay and jitter measurements
• Network delay compensation
• Synchronization with the remote HRTC
• Tele-mentoring control

Figure 5: Generic tele-haptic architecture

Two SensAble Omni haptic devices are used in the simulation. This device features 6 Degrees of Freedom (DOF) inputs and 3 DOF outputs, which means that only translational forces can be generated. To minimize the data traffic load, only the raw joint angles measured by the sensors are transmitted over the network, every 10 ms. The choice of a 10 ms rather than a 1 ms transmission interval is a tradeoff between sampling theory and packet overhead: since the highest frequency of the human hand trajectory is less than 30 Hz, this 100 Hz transmission rate is, in the ideal situation, high enough to reconstruct the original signal. Forward kinematics are used for the conversion from joint space to Cartesian space. In addition to the tele-mentoring functionality, the HRTC also handles the synchronization of local and remote haptic data. This synchronization is especially important for collaborative haptic simulations in which multiple users may manipulate the same virtual object simultaneously; without it, graphic consistency among the users is hard to achieve.

3.6 Experimental Results

The proportional and derivative gains for the tele-mentoring controller are determined according to the hardware device and experimental results. They are given as
$$K_{p\_max} = [0.15\ \ 0.15\ \ 0.15]' \quad \text{and} \quad K_{d\_min} = [0.0001\ \ 0.0001\ \ 0.0001]'.$$
To guarantee the stability of the closed-loop control, both gains should be adapted to the measured network latency. Without network compensation, the instantaneous position error used in the controller grows in proportion to the network latency, so we have to decrease $K_p$ to limit the output force; otherwise the device will start oscillating at a certain point. The drawback is that the mentor can feel the sluggishness caused by the delay. The consequences of this sluggish, sticky response are twofold. On the one hand, it gives the mentor explicit information about the current network latency, so he/she can adjust the movement accordingly. On the other hand, it impedes the freedom of the mentor's movement and tires the mentor's arm after a very short time. Figure 6 shows the experimental results of tele-mentoring under different delays. In the experiment, to avoid the effect of time synchronization, the two haptic devices are connected to the same computer and the delay is added artificially. Only the x direction trajectory is plotted, and Table 2 lists the PD controller gains. The experiments assume the mentee acts passively.
Figure 6: Time delay effect on tele-mentoring

Table 2: PD controller gains
Delay (ms) | Kp (N/mm) | Kd (N·s/mm)
0          | 0.15      | 0.0001
50         | 0.05      | 0.0003
100        | 0.02      | 0.0005

Figure 7 shows the experimental results with 10 ms of jitter added to a 50 ms mean delay. The same PD controller gains were used for the experiments with and without jitter. It is obvious that the output force under constant delay is much smoother than that under variable delay. The high-frequency components in the output under variable delay cause an annoying whine because of the limited bandwidth of the motor. If the same controller gains are used with 20 ms of jitter, the system starts to vibrate and loses stability.
Figure 7: Jitter effects on tele-mentoring
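Table 2 gives only three operating points, so an implementation has to decide how to pick gains for intermediate measured delays. The sketch below interpolates linearly between the tabulated values; the interpolation (and the function name gainsForDelay) is our assumption, not something stated in the paper.

```cpp
#include <algorithm>

// Gains measured experimentally for the x axis (Table 2); intermediate delays
// are linearly interpolated here as an illustrative choice.
struct PdGains { double Kp; double Kd; };   // N/mm, N*s/mm

PdGains gainsForDelay(double delayMs)
{
    static const struct { double d, Kp, Kd; } table[] = {
        {   0.0, 0.15, 0.0001 },
        {  50.0, 0.05, 0.0003 },
        { 100.0, 0.02, 0.0005 },
    };

    delayMs = std::clamp(delayMs, table[0].d, table[2].d);
    for (int i = 0; i < 2; ++i) {
        if (delayMs <= table[i + 1].d) {
            const double t = (delayMs - table[i].d) / (table[i + 1].d - table[i].d);
            return { table[i].Kp + t * (table[i + 1].Kp - table[i].Kp),
                     table[i].Kd + t * (table[i + 1].Kd - table[i].Kd) };
        }
    }
    return { table[2].Kp, table[2].Kd };   // beyond the last measured delay
}
```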
4. Time Delay Compensation

Network delay compensation is implemented in our application for two reasons. The first is to maintain the consistency of the graphics at the two sites under the constraint of instantaneous force feedback. The second is to improve the tele-mentoring performance. As mentioned above, one consequence of network latency on position-error based tele-mentoring is the sluggish response the mentor senses when stability is guaranteed; this sluggish feedback also affects the mentor's movement within the virtual environment. The goal of network compensation is to predict the remote device's trajectory. Many prediction algorithms have been proposed in the past, e.g., the well-known "dead reckoning". In this paper, we adopt the Kalman filtering prediction algorithm proposed in [17] and enhance it by taking the characteristics of the human arm trajectory into account.

When the user holds the end-effector, the trajectory can be represented either in joint space or in Cartesian coordinates; the mapping between them is the device's forward and inverse kinematics. To choose which space better fits the purposes of prediction, we need to consider the basics of human trajectory planning. Human arm trajectory formation refers to the planning and control of the kinematic aspects of arm movements [18], where the trajectory includes both the configuration of the arm in space and the speed of the movements. Some researchers have argued that trajectories are planned in the joint variables of the arm, while others have argued that motor control is achieved by planning hand trajectories in Cartesian space, with joint rotations then tailored to produce these desired hand movements [19]. The second view has gained more support from studies of planar, unconstrained human movements [18]. A simple experiment of asking subjects to move between two targets shows that subjects generally tend to generate roughly straight trajectories with a single-peak, bell-shaped speed profile, no matter where the two targets are located in the reachable space of the human arm. These invariant features of movement are a strong indication that planning takes place in Cartesian space rather than in joint rotations; therefore, Cartesian space is chosen for the prediction. Another consideration is that the forward and inverse kinematics of haptic devices are generally nonlinear, and if the prediction of one joint exceeds its limit, the result will be difficult to interpret.

To predict the positional information of a remote haptic device, an appropriate dynamical motion model is required to describe the state of the device. In the past three decades, various mathematical models for maneuvering target tracking have been studied [20]. Generally, they fall into two classes. The first class takes the control input as an unknown deterministic process and estimates this process from on-line measurement data. This is more natural, since a maneuver is expected to accomplish certain tasks and the control input is rarely independent of time; for example, in our application scenario, when the trainer shows the trainee how to make a straight cut, the control force is strongly correlated with time. However, the implementation of this model is challenging, since it is very difficult to determine the statistical characteristics of the input, such as the average level and the variance. The second class treats the control input as a random process. For example, the trajectory of the haptic device has approximately constant velocity except during the maneuvering times, in which the acceleration is random. In [17], the dynamics of the haptic device are modeled with this random acceleration model, and Kalman filtering is then used for the prediction. Their simulation results showed the effectiveness of the algorithm. However, when the speed of the haptic device is high, the prediction exhibits a very large overshoot, especially at the turning points. Figure 8 shows this phenomenon. In this experimental trajectory, the sampling rate is 100 Hz and the prediction horizon is 100 ms. The predicted signal with the Kalman filter is compared with the original and the pure-delay trajectories. Even though the standard deviation of the predicted signal is acceptable, the maximum deviation will definitely cause a stability problem. Generally, when the trajectory is slowly time-varying, prediction over 100 ms yields acceptable results; as the speed increases, it becomes more challenging, especially at the turning points. To alleviate the overshoot effects at the turning points, we consider a simple fact of human arm movement behavior.
Figure 8: Comparison of prediction results

The simple fact is that we almost always slow down when we make a turn while driving, running, or writing, and this includes human hand movement. Conversely, if a human hand trajectory starts at high speed and slows down quickly, we may assume that the chance of a turn is high. Based on this observation, we divide the prediction into two states: a normal state and an adaptive state. In the normal state, the velocity is low and Kalman filtering is used for the prediction:
$$\hat{u}(n+k) = A\, u(n),$$
where $u(n) = [\,p(n)\ \ v(n)\ \ a(n)\,]^T$ describes the device position, velocity, and acceleration, and $A$ denotes the state transition matrix of the Kalman filter over $k = T_d / T_s$ steps. The state transition happens when the velocity is high and starts to slow down. This condition can be simply formulated as $\hat{a}(n)\,\hat{v}(n) \le 0$. In the adaptive state, the prediction equation is modified to
$$\hat{u}(n+1) = u(n) + \alpha\, T_s\, \hat{v}(n),$$
where $0.5 \le \alpha < 1$ is a gain on the velocity that smooths the transition and $\hat{v}(n)$ is computed using the least-squares method mentioned above. The final $\hat{u}(n+k)$ is computed iteratively. Let $s$ denote the state; the state transition can be described as
$$s = \begin{cases} \text{normal}, & \text{if } \hat{v} \le 600\ \text{mm/s} \\ \text{adaptive}, & \text{if } \hat{v} > 600\ \text{mm/s and } \hat{a}(n)\,\hat{v}(n) < 0. \end{cases}$$
Figure 9 shows the simulation results of the prediction under different time delays.

Figure 9: Prediction under different delays
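A compact sketch of the two-state predictor for a single Cartesian axis is given below. The normal-state Kalman predictor of [17] is replaced here by a plain constant-acceleration extrapolation to keep the example short, and the names (DeviceState, predictPosition) are ours; only the 600 mm/s threshold, the deceleration condition, and the α-damped adaptive step come from the text.

```cpp
#include <cmath>

// Two-state anti-overshoot prediction for one Cartesian axis.  In the normal
// state the state [p, v, a] is propagated over k = Td/Ts steps with a
// constant-acceleration model; in the adaptive state (high speed that has
// started to decrease) the position advances by only alpha*Ts*v per step.
struct DeviceState { double p, v, a; };   // position (mm), velocity (mm/s), accel (mm/s^2)

double predictPosition(DeviceState s, int k, double Ts, double alpha /* 0.5 <= alpha < 1 */)
{
    const bool adaptive = (std::fabs(s.v) > 600.0) && (s.a * s.v < 0.0);

    for (int n = 0; n < k; ++n) {
        if (adaptive) {
            s.p += alpha * Ts * s.v;                 // damped advance near a likely turn
        } else {
            s.p += Ts * s.v + 0.5 * Ts * Ts * s.a;   // constant-acceleration step
            s.v += Ts * s.a;
        }
    }
    return s.p;
}
```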
5. Conclusion and Future Work
In this paper, we analyzed the design and implementation issues of tele-mentoring. Network delay and jitter affect the stability of the haptic feedback and the transparency of the tele-mentoring system. To compensate for these negative effects, we considered the behavior of the human arm trajectory and proposed a novel approach to lower the overshoot caused by forward prediction. The method, which detects likely turns with a simple velocity estimate, overcomes the overshoot problem and improves the performance of tele-mentoring. With this method, the peer application does not need to buffer the incoming data, as most real-time network applications do; it delivers the predicted data to the upper layer instantly, based on an estimate of the current network delay. We also argued that the prediction should be done in Cartesian space rather than joint space because of the human arm's planning and motor mechanisms. However, the robustness of the approach to network jitter still needs to be tested further.

Compared with its counterpart media, audio and video, haptic research is still in its early stages. There are already many different standard representation formats for audio and video, which can be applied and converted into and from each other in different contexts; however, there is no such representation for haptic data as yet. In our lab, we have PHANToM series haptic devices, MPB Freedom6S (MPB Technologies Inc.) hand controllers, one HapticMaster (Moog FCS Inc.), and one set of CyberForce and CyberGlove (Immersion Corporation) tactile feedback systems. These devices differ in the number of input and output DOF, resolution, workspace, etc. For example, the Freedom 6S features 6 input and 6 output DOF, which generates about 96 KB per second at a sampling rate of 1 kHz; recording one minute of haptic data therefore takes about 6 MB. For tele-haptic applications, this transmission rate is trivial for today's network bandwidth. However, if a much more advanced tactile sense, such as temperature or pressure, is simulated, devices would need a huge number of sensor arrays, and haptic data representation will definitely become a problem. Can traditional compression algorithms be applied directly and still give good results? How do we do real-time haptic data streaming? How do we authenticate the data? One difficulty in defining the representation is its fundamental difference from audio or video: when we listen to music or watch movies, the information flows one way, while haptic interaction involves a bi-directional physical energy exchange. We are then faced with questions such as: how do we characterize this energy-exchange information in a data representation for real-time haptic applications? Our future work will focus on haptic data representation. With a standard haptic data representation, we can define a unified haptic communication protocol for tele-haptic applications.
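As a quick check of these figures, assuming each of the 12 channels (6 input plus 6 output DOF) is stored as a 64-bit double at 1 kHz:
$$12 \times 8\ \text{B} \times 1000\ \text{Hz} = 96\ \text{KB/s}, \qquad 96\ \text{KB/s} \times 60\ \text{s} \approx 5.8\ \text{MB per minute}.$$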
6. References

[1] C. Basdogan and M. Srinivasan, "Haptic Rendering in Virtual Environments", Handbook of Virtual Environments, K.M. Stanney and M. Zyda, Eds., Lawrence Erlbaum Assoc., pp. 117-134, 2002.
[2] J. Zhou, X. Shen, and N.D. Georganas, "Haptic Tele-Surgery Simulation", Proc. IEEE HAVE 2004, Ottawa, ON, Canada, October 2004.
[3] A. Liu, F. Tendick, K. Cleary, and C. Kaufmann, "A Survey of Surgical Simulation: Applications, Technology, and Education", Presence, Vol. 12, No. 6, pp. 599-614, 2003.
[4] N.R. El-Far, S. Nourian, J. Zhou, A. Hamam, X. Shen, and N.D. Georganas, "A Cataract Tele-Surgery Training Application in a Hapto-Visual Collaborative Environment Running over the CANARIE Photonic Network", Proc. IEEE HAVE 2005, Ottawa, ON, Canada, pp. 29-32, October 2005.
[5] D.A. Lawrence, "Stability and Transparency in Bilateral Teleoperation", IEEE Trans. on Robotics and Automation, Vol. 9, No. 5, pp. 624-637, 1993.
[6] X. Shen, J. Zhou, A. El Saddik, and N.D. Georganas, "Architecture and Evaluation of Tele-Haptic Environments", Proc. 8th IEEE DS-RT, Budapest, Hungary, October 2004.
[7] P. Buttolo, R. Oboe, and B. Hannaford, "Architecture for Shared Haptic Virtual Environments", Computers and Graphics, Vol. 21, pp. 421-429, July-Aug. 1997.
[8] S. Shirmohammadi and N.H. Woo, "Evaluating Decorators for Haptic Collaboration over Internet", Proc. IEEE HAVE, pp. 105-110, Ottawa, Canada, October 2004.
[9] K. Brady and T.J. Tarn, "Internet-based Teleoperation", Proc. IEEE ICRA, pp. 644-649, 2001.
[10] Y. Yokokohji, T. Tsujioka, and T. Yoshikawa, "Bilateral Control with Time-Varying Delay including Communication Blackout", Proc. IEEE Haptic Interfaces for Virtual Environment and Teleoperator Systems, Orlando, FL, 2002.
[11] J. Kikuchi, K. Takeo, and K. Kosuge, "Teleoperation System via Computer Network for Dynamic Environment", Proc. IEEE ICRA, pp. 3534-3539, Leuven, Belgium, 1998.
[12] D. Wang, L. Ni, M. Rossi, and K. Tuer, "Implementation Issues for Bilateral Tele-mentoring Applications", Proc. IEEE HAVE, pp. 75-79, Ottawa, Canada, 2003.
[13] M. Minsky, M. Ouh-Young, O. Steele, F.P. Brooks, and M. Behensky, "Feeling and Seeing: Issues in Force Display", Computer Graphics, Vol. 24, No. 2, pp. 235-243, 1990.
[14] F. Janabi-Sharifi, V. Hayward, and C.J. Chen, "Discrete-Time Adaptive Windowing for Velocity Estimation", IEEE Trans. on Control Systems Technology, Vol. 8, No. 6, November 2000.
[15] D. Mills, "Internet Time Synchronization: The Network Time Protocol", IEEE Trans. on Communications, Vol. 39, No. 10, pp. 1482-1493, 1991.
[16] G. Niemeyer and J-J.E. Slotine, "Stable Adaptive Teleoperation", IEEE Journal of Oceanic Engineering, Vol. 16, Issue 1, Jan. 1991.
[17] C. Caradima, "Time Delay Compensation in Teleoperation over the Internet", Master's thesis, University of Waterloo, 1999.
[18] W. Abend, E. Bizzi, and P. Morasso, "Human Arm Trajectory Formation", Brain, Vol. 105, pp. 331-348, 1985.
[19] N. Bernstein, "The Coordination and Regulation of Movements", Pergamon Press, Oxford, 1976.
[20] X.R. Li and V.P. Jilkov, "Survey of Maneuvering Target Tracking - Part I: Dynamic Models", IEEE Trans. on Aerospace and Electronic Systems, Vol. 39, Issue 4, pp. 1333-1364, October 2003.
[21] J. Nelson, "Implement a Continuously Updating, High-Resolution Time Provider for Windows", retrieved from: http://msdn.microsoft.com/msdnmag/issues/04/03/HighResolutionTimer/