2013 IEEE International Conference on Systems, Man, and Cybernetics
Robot Assistance Selection for Large Object Manipulation with a Human
Julie Dumora∗, Franck Geffard∗, Catherine Bidard∗, Nikos A. Aspragathos† and Philippe Fraisse‡
∗ CEA, LIST, Interactive Robotics Laboratory, Gif-sur-Yvette, France, Email: [email protected]
† Mechanical Engineering and Aeronautics Department, University of Patras, Greece, Email: [email protected]
‡ LIRMM, University of Montpellier 2/CNRS, France, Email: [email protected]
Abstract—In this paper, we propose a method that allows a human to perform complex manipulation tasks jointly with a robotic partner. To that end, the robot has a library of assistances that it can provide to help the human partner during a priori unknown collaborative tasks. According to the haptic cues naturally transmitted by the human partner, the robot selects on-line the suitable assistance for the currently intended collaborative motion. Based on the naive Bayes classifier and the Matthews Correlation Coefficient, the parameters of the decision-making are automatically tuned. An experiment on a real arm manipulator is provided to validate the proposed approach.
Index Terms—large object comanipulation, human intent detection, assistive control
I. INTRODUCTION

This work focuses on human-robot haptic joint collaboration for achieving large object manipulation tasks. Each partner has complementary capacities: cognitive for the human vs. physical for the robot. Thus, a huge challenge is the integration of the complementary capabilities of both partners to create a system that outperforms the capacities of each of them. The robot should assist the human partner, who is able to decide what to do and to instantaneously plan complex scenarios. To that end, the robot has to detect which motion assistance is needed by the human operator at each instant.

According to the state of the art, there is a close connection between the degree of task knowledge of the robotic partner and its role in the task. Under the assumption that the task is known and quite repetitive, the most common approach is to design a proactive robotic partner. This means that the robot anticipates human actions thanks to its task knowledge and actively participates in the effort sharing. The most common framework is programming by demonstration, where the aim is to learn the task model in order to reproduce it [1][2]. Under the restriction that the robot has no prior knowledge about the task and environment, the strategy of a passive assistant robot is adopted. The most common method is to impose non-holonomic constraints on the robot, so that the robot allows only certain displacements [3][4][5]. Each of these strategies has been compared with a follower strategy using a force-controlled robot (without any assistance). Experimental studies [6][7] highlighted the ability of proactive robots to reduce the human operator's effort. However, a psychology study in [6] showed that human operators feel
more comfortable and safe with a passive robot. This result can be interpreted by the ability of the human to predict the passive robot's behaviour, whereas this is more difficult with a robot that continuously learns during tasks, or that has to estimate parameters for its trajectory generation. Furthermore, some methods endow the robot with the ability to adapt to human variability and disagreement by becoming more passive [2]. However, they do not interpret this disagreement in order to provide a suitable assistance. Unlike proactive robots, passive assistant robots benefit from the independence of the control parameters from the task: no knowledge about the task is needed to perform a motion with an assistance. They also take advantage of being easily predictable by the human partner. Moreover, these robots verify the passivity constraints [8], ensuring safe interaction with the environment and the human partner. However, the non-holonomic methods related to passive assistant robots compel the operator to combine a series of motions in order to perform one movement of the object along a prohibited degree of freedom. A study presented in [4] shows that keeping movement redundancies is more effective to promptly fulfil a complex task.

From these statements, we identify the following features as relevant for designing an efficient robotic partner:
• safety
• predictability
• reduction of human effort
• movement redundancy
• interpretation of human disagreement and intention (unexpected variability) in order to provide suitable assistance
• task-independence

In our previous work, we thus proposed a control scheme that allows a human to perform complex manipulation tasks jointly with a robotic partner that fulfils the previous requirements [9][10]. We assume that the robot has no prior information about the task and the environment. We endow the robot with a library of assistances for performing standard collaborative motions. According to the haptic cues naturally transmitted by the human partner, the robot selects on-line the suitable assistance for the currently intended collaborative motion. Assuming that many tasks can be decomposed into a sequence of collaborative motions, this method allows a wide range of tasks to be performed with assistance while the robot
has no prior knowledge about the task. A user study highlighted the advantages of this method compared with a follower robot [10]. We implemented a basic intention detection algorithm to switch from the current assistance to the new suitable one. This algorithm is based on the relationships between haptic measures and human intention. These relationships were captured in a human-robot haptic interaction model proposed and experimentally validated in a previous work [9]. Thresholds were empirically tuned to determine the significant haptic measures useful for the intention detection. This paper aims at automatically tuning the intention detection parameters based on the naive Bayes algorithm and the Matthews Correlation Coefficient. Using this method, the user can choose the confidence level that the robot should reach in its decision making before selecting a new assistance. In the next section, we discuss the chosen framework. In Section III, we describe the intention detection algorithm. In Section IV, we present the experiment and results. Finally, we draw some conclusions and outline future work in Section V.
II. PROBLEM STATEMENT AND THEORETICAL BACKGROUND

The goal is to endow the robot with the ability to provide suitable assistances during tasks achieved with a human partner.

A. Assistive control

In this work, the assistance for each collaborative motion is carried out with a virtual motion guide, as in Fig. 1. Each guide is implemented according to the virtual mechanism concept [11]. A motion guide allows free motions in chosen directions while constraining the others. When the current assistance corresponds to the human intent, the collaborative motion can be performed with low effort since the motion guide is well-suited. When the human intends another collaborative motion, new directions of motion are solicited. The provided guide is no longer well-suited since it constrains directions in which the human wants to move. Therefore, wrench and twist patterns are used to detect the human partner's intention of motion and thus to select the suitable guide. In order to provide a suitable assistance, the goal is to determine the current intention of the operator among the m overall possible intentions C = {C_1, ..., C_m} given the n current haptic measures F = {F_1 = f^1, ..., F_n = f^n}, where f^i is the measure of the i-th component F_i of the haptic data.

In this paper, the haptic data refers to the wrench components as well as the displacement at the operator's grasping point. They are expressed at the operator's grasping point O in the frame depicted in Fig. 1.

Fig. 1. Example of virtual guide for a pure lateral translation.

B. Classification choice

Methods for autonomous online segmentation assume that the sequence of motions can be demonstrated. This assumption prevents the use of these methods in our approach, since the current virtual mechanism prevents the operator from performing the next intended motion. Reinforcement learning deals with a trade-off between exploration and exploitation. Applications of this strategy to autonomous robots have shown successful results. However, the exploration phase can be costly in a physical human-robot interaction: the operator might have to perform an unbearable amount of demonstrations in order to tune the different detection parameters, whose number increases with the number of features and intended motions. Therefore, in our context, a classification strategy appears more suitable. Moreover, the features that are useful for inferring the operator's motion intent are measured in directions constrained by the virtual mechanism. In this context, dynamic analyses, generally performed with Gaussian Mixture Models or Hidden Markov Models, would probably not provide further information than static analyses. Furthermore, these algorithms require large datasets, data preprocessing phases, and tuning or predefining parameters according to the types of variables and tasks. Therefore, the trade-off between the expertise required for tuning the parameters of these complex methods and the gain of information that they may provide in our context drives us to carry out a static analysis. The chosen classifier is discussed in the next section.
C. Naive Bayes Classifier

The Naive Bayes classifier is a supervised learning algorithm based on the well-known Bayes' theorem:

\[
P\Bigl(C = C_k \,\Big|\, \bigcap_{i=1}^{n} F_i = f^i\Bigr) = \frac{P(C = C_k)\; P\Bigl(\bigcap_{i=1}^{n} F_i = f^i \,\Big|\, C = C_k\Bigr)}{P\Bigl(\bigcap_{i=1}^{n} F_i = f^i\Bigr)} \tag{1}
\]

with:
• C_k the k-th class;
• P(C = C_k) the prior probability, i.e. the initial degree of belief in C = C_k;
• P(∩_{i=1}^{n} F_i = f^i | C = C_k) the likelihood, i.e. the degree of belief in ∩_{i=1}^{n} F_i = f^i under the assumption of C = C_k;
• P(∩_{i=1}^{n} F_i = f^i) the evidence, independent of C and acting as a normalizing constant;
• P(C = C_k | ∩_{i=1}^{n} F_i = f^i) the posterior probability, i.e. the degree of belief in C = C_k having accounted for f^i.
It consists of two steps:
• Training step: According to a dataset, the method estimates the posterior probability of each data point belonging to each class by estimating the parameters of the likelihood and of the prior probability;
• Prediction step: For any unseen sample, the chosen class C* is usually the one with the largest posterior probability, known as the maximum a posteriori (MAP) decision rule.

However, the estimation of the likelihood parameters is often intractable: the number of parameters to estimate will often be unrealistically large because of the large number of features and/or the large number of values that a feature can take. In order to considerably reduce the number of parameters that have to be estimated, this classifier makes the strong assumption of class-conditional independence, also called the naive hypothesis. This assumption supposes that features are independent from each other within each class. As a consequence, the likelihood can be approximated by the product of the individual conditional probabilities of each feature given the class:

\[
P\Bigl(\bigcap_{i=1}^{n} F_i \,\Big|\, C_k\Bigr) \approx \prod_{i=1}^{n} P(F_i \mid C_k) \tag{2}
\]
This way, the computation becomes tractable since it only requires considering each feature in each class separately. This assumption, unrealistic in real-world applications, gave the classifier its 'naive' name. However, this classifier can often outperform more sophisticated classification methods and works well in practice for real-world applications, in particular when the dimension of the feature space is high [12]. It allows the parameters required for accurate classification to be better estimated while requiring a smaller amount of training data than many other classifiers. This was highlighted, for example, in [13], where a comparison between the naive Bayes classifier and logistic regression was presented. Furthermore, explanations of these 'surprising' performances (as often described in the literature), even when the naive hypothesis is not valid, are discussed, for example, in [14], [15], [16]. Finally, this classifier requires only the memorization of a compact database of parameters.
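As a concrete illustration of Eqs. (1) and (2), the sketch below computes the class posteriors from per-feature likelihoods. It is not the authors' implementation: the function names are hypothetical, and the Gaussian form of the likelihood anticipates the choice made later in Section III-A.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Univariate normal density N(x; mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def naive_bayes_posteriors(f, params, priors):
    """Posterior P(C_k | f^1, ..., f^n) for each class under the naive hypothesis.

    f      : sequence of the n current haptic measures (f^1, ..., f^n)
    params : dict class -> list of (mu_ik, sigma_ik), one pair per feature
    priors : dict class -> prior probability P(C_k)
    """
    unnormalized = {}
    for c, feature_params in params.items():
        likelihood = 1.0
        for f_i, (mu, sigma) in zip(f, feature_params):
            likelihood *= gaussian_pdf(f_i, mu, sigma)   # product of Eq. (2)
        unnormalized[c] = priors[c] * likelihood          # numerator of Eq. (1)
    evidence = sum(unnormalized.values())                 # denominator of Eq. (1)
    return {c: v / evidence for c, v in unnormalized.items()}
```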
D. Performance Metric

Some performance indices are widely used for binary classification assessment, such as the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve. Extensions of these measures to multi-class classification assessment, as well as comparisons, have been proposed and are reported, for example, in [17]. However, that study was carried out for three-class classification, and an extension to more classes requires complex and numerous calculations. Moreover, these metrics are not easy to interpret intuitively. Another performance index, the Matthews Correlation Coefficient (MCC) [18], is regarded as one of the most relevant metrics for describing the confusion matrix of true and false positives and negatives by a single number [19][20]. It is known to overcome the weakness of the most natural metric, the accuracy, since it is a balanced measure which can be used even if the classes are of very different sizes. An extension of this metric was proposed in [21] for multi-class classification assessment. The coefficient is a discrete version of the Pearson correlation coefficient, a common measure of correlation in statistics also known as the Pearson Product Moment Correlation (PPMC). The MCC returns a value between −1 and +1 and can be intuitively interpreted: −1 represents a total disagreement between prediction and observation, 0 a random prediction and +1 a perfect prediction. It can be written as:

\[
\mathrm{MCC} = \frac{\displaystyle\sum_{k,l,m=1}^{N} \bigl(C_{kk}C_{ml} - C_{lk}C_{km}\bigr)}{\sqrt{\displaystyle\sum_{k=1}^{N}\Bigl(\sum_{l=1}^{N} C_{lk}\Bigr)\Bigl(\sum_{\substack{f,g=1\\ f\neq k}}^{N} C_{gf}\Bigr)}\;\sqrt{\displaystyle\sum_{k=1}^{N}\Bigl(\sum_{l=1}^{N} C_{kl}\Bigr)\Bigl(\sum_{\substack{f,g=1\\ f\neq k}}^{N} C_{fg}\Bigr)}} \tag{3}
\]

with N the number of classes and C_{ij} the number of elements of true class i that have been assigned to class j by the classifier, i.e. the ij-th element of the confusion matrix.
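For illustration, the sketch below evaluates the multi-class MCC from a confusion matrix. It uses the compact form of the coefficient (total correct count and class marginals), which is algebraically equivalent to the triple sum of Eq. (3); the helper name is ours, not the paper's.

```python
import numpy as np

def multiclass_mcc(C):
    """Multi-class Matthews Correlation Coefficient computed from a
    confusion matrix C, with C[i, j] = number of samples of true class i
    assigned to class j (equivalent to Eq. 3).
    """
    C = np.asarray(C, dtype=float)
    t = C.sum(axis=1)            # true-class marginals
    p = C.sum(axis=0)            # predicted-class marginals
    s = C.sum()                  # total number of samples
    correct = np.trace(C)        # correctly classified samples
    num = correct * s - t @ p
    den = np.sqrt(s**2 - p @ p) * np.sqrt(s**2 - t @ t)
    return num / den if den else 0.0
```

Applied to the confusion matrix reported later in Table II, this helper returns approximately 0.92, in line with the value given in Section IV-C.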
III. SELECTION OF THE SUITABLE ASSISTANCE

In the following, we describe the algorithm for selecting on-line the suitable assistance. For clarity's sake, C = C_k is noted C_k, and F_i = f^i is noted f^i.

A. Learning step

First, a learning step is performed once in order to tune the parameters of the assistance selector. The learning step consists in estimating the parameters of the m prior probabilities P(C_k) and the m·n likelihoods P(f^i | C_k). Without any knowledge about the task, and thus no transition model, the m prior probabilities P(C_k) are assumed equipossible. As a uniform distribution, the m priors are equal and constant: P(C_k) = 1/m. Recalling that the final goal is to compute the posterior probability according to Eq. (1) and Eq. (2), the prior and the evidence, which are independent of the class, can be included in a normalizing factor η:

\[
P\Bigl(C_k \,\Big|\, \bigcap_{i=1}^{n} f^i\Bigr) = \frac{1}{\eta}\prod_{i=1}^{n} P(f^i \mid C_k) \quad\text{with}\quad \eta = m \cdot P\Bigl(\bigcap_{i=1}^{n} f^i\Bigr) \tag{4}
\]
By that means, the learning problem is reduced to the estimation of the parameters of the n·m likelihoods. Each feature F_i given the class C_k is assumed to follow a normal distribution with mean μ_ik and standard deviation σ_ik, these being the two parameters to be estimated. The likelihood is thus defined by:

\[
P(f^i \mid C_k) = \mathcal{N}\bigl(f^i;\, \mu_{ik},\, \sigma_{ik}^2\bigr) \tag{5}
\]

We need to estimate a set of parameters S_{F_i|C_k} = {μ_ik, σ²_ik} for each feature given each class. These parameter sets S = {S_{F_i|C_k}}, i = 1, ..., n, k = 1, ..., m, can be estimated from a training dataset
of D demonstrations. We use the maximum likelihood estimator:

\[
\hat{\mu}_{ik} = \frac{1}{W\cdot D}\sum_{d=1}^{D}\sum_{w=1}^{W} f^i_{w,d}, \qquad \hat{\sigma}_{ik}^2 = \frac{1}{W\cdot D}\sum_{d=1}^{D}\sum_{w=1}^{W} \bigl(f^i_{w,d} - \hat{\mu}_{ik}\bigr)^2 \tag{6}
\]

The current virtual mechanism prevents the demonstration of the switch from one motion to the next. However, the user can demonstrate his motion intent by forcing against the virtual guide for as long as he is able to apply effort. Therefore, unlike in straightforward supervised classification, no signal length is suitably defined by the demonstrations. We need to determine the optimal number of training signal data points W to be used for the parameter estimation in order to obtain the best intention detection performance. To that end, one set of parameters S = S_W is computed from the data points within a window of size W, with W incrementally increased from 1 to the whole signal size (Fig. 2). Then, a cross-validation step is carried out in order to select the best set of parameters. For each set S_W, the MCC mean μ_MCC over a chosen confidence level interval Iγ = [γ_min; γ_max] is computed:

\[
\mu_{\mathrm{MCC}} = \frac{1}{N_\gamma}\sum_{i=1}^{N_\gamma} \mathrm{MCC}_i \tag{7}
\]

with N_γ the number of MCC values computed in the interval Iγ. The chosen set of parameters S* is the one with the highest MCC mean:

\[
S^{*} = \arg\max_{S_W}\;(\mu_{\mathrm{MCC}}) \tag{8}
\]

A penalty may be applied to the window size increase in order to obtain a trade-off between the applied effort (and switching time) and the detection performance.

Fig. 2. Illustration of windows for D demonstrations of a training dataset.
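The learning step can be summarized by the following sketch. The data layout, helper names and the callable used for the cross-validation evaluation are assumptions for illustration, not the authors' code.

```python
import numpy as np

def estimate_window_params(train, W):
    """Maximum-likelihood estimates (Eq. 6) of (mu_ik, sigma_ik) from the
    first W samples of every demonstration.

    train : dict class -> array of shape (D, L, n), i.e. D demonstrations of
            L samples of the n haptic features for that intention.
    """
    params = {}
    for c, signals in train.items():
        window = signals[:, :W, :]                 # keep the first W samples
        mu = window.mean(axis=(0, 1))              # one mean per feature
        sigma = window.std(axis=(0, 1)) + 1e-9     # avoid zero variance
        params[c] = list(zip(mu, sigma))
    return params

def select_window(train, evaluate_mcc, gammas, max_W):
    """Select the window size whose parameter set maximizes the MCC mean
    over the confidence levels in `gammas` (Eqs. 7 and 8).

    evaluate_mcc(params, gamma) is assumed to run the decision rule of
    Section III-B on a validation set and return the resulting MCC.
    """
    best = (None, -np.inf, None)
    for W in range(1, max_W + 1):
        params = estimate_window_params(train, W)
        mu_mcc = np.mean([evaluate_mcc(params, g) for g in gammas])  # Eq. (7)
        if mu_mcc > best[1]:                                         # Eq. (8)
            best = (W, mu_mcc, params)
    return best[0], best[2]
```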
B. Prediction step

Once the learning step has been carried out, the assistance can be adapted on-line according to the current haptic measures.

Decision rule: The robot makes the decision to provide a new assistance when it predicts a new operator intention with a confidence degree of at least γ. This confidence level can be set by the operator or tuned as a parameter during the learning step. In other words, the chosen class C* is the one with the largest posterior probability, provided that this posterior exceeds the confidence level γ:

\[
C^{*} = \arg\max_{C_k}\; P\Bigl(C_k \,\Big|\, \bigcap_{i=1}^{n} f^i\Bigr) \;\geq\; \gamma \tag{9}
\]

As long as the confidence level is not exceeded, the robot remains in its current state, since it is not confident enough about the operator's intention. Accounting for a confidence level in the decision rule, instead of applying the simple MAP rule, makes the selection more robust.

Ambiguities: Most of the time, the wrench measures allow the intention of motion to be detected very quickly. As we highlighted in [9], when there is a wrench ambiguity between two intentions, the displacement allows them to be distinguished, although more time is needed. In order to speed up the prediction step and make it more robust, we first apply the decision rule on the wrench measures. When an ambiguity is detected, the decision rule is applied on the displacement measured between the current position and the position at which the ambiguity detection started.
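A minimal sketch of this two-stage decision rule is given below. Posterior functions such as the one outlined in Section II-C are assumed to be available for the wrench features and for the displacement feature; treating an undecided wrench posterior as an ambiguity is a simplification of the ambiguity test described in [9], and all names are hypothetical.

```python
def select_assistance(posterior_wrench, posterior_disp, gamma, current):
    """Confidence-thresholded decision rule (Eq. 9) with a displacement
    fallback when the wrench measures are ambiguous.

    posterior_wrench : dict intention -> posterior from the wrench features
    posterior_disp   : dict intention -> posterior from the displacement
                       accumulated since the ambiguity appeared, or None
    gamma            : required confidence level
    current          : currently provided assistance, kept if no intention
                       reaches the confidence level
    """
    best = max(posterior_wrench, key=posterior_wrench.get)
    if posterior_wrench[best] >= gamma:
        return best                          # wrench measures are decisive
    if posterior_disp is not None:           # ambiguity: use the displacement
        best = max(posterior_disp, key=posterior_disp.get)
        if posterior_disp[best] >= gamma:
            return best
    return current                           # stay with the current assistance
```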
IV. EXPERIMENT

A. Goal

We confine the experiment to the horizontal plane. The library of assistances is composed of four assistances for performing basic motions that an operator might achieve in a collaborative planar task. These motions are described below:
• TY (Fig. 3.a): pure lateral translation;
• TX (Fig. 3.b): front/back translation;
• RO (Fig. 3.c): rotation of the object around the operator's gripping point;
• RA (Fig. 3.d): rotation of the object around the robot's gripping point.
Each of these motions involves a different kind of assistance [9]. Starting with a current assistance for performing the RA motion, the goal is to detect the next intended motion C* among C = {TY, TX, RO, RA} (Fig. 3), so that the robot can provide the suitable assistance. The relevant features highlighted by the relationships apprehended in [9] are used: the force along the x-axis FX, the torque along the z-axis MZ and the displacement of the operator's handle along the y-axis ΔXY. They are expressed at the operator's grasping point O in the basis depicted in Fig. 3.

Fig. 3. Goal of the experiment: Which is the intended motion?
B. Experimental set-up

Participants. This experiment has been carried out with 30 subjects: 8 females and 22 males; 26 right-handed and 4 left-handed. The average age is 28 years, with a range from 22 to 54. None of them is considered a robotics expert. Moreover, none of them had prior knowledge about the purpose of our study or witnessed the participation of any other subject.

Experimental apparatus. A 6-axis force/torque sensor is mounted at the wrist of a 6-DOF articulated arm manipulator (Fig. 4). One end of a 0.56 m long bar is attached to the robot end-effector and a user's handle is fixed on the other end. A second force/torque sensor is mounted between the bar and the handle. The collaborative motions (Fig. 3) are drawn with different colors on a white paper sheet (Fig. 4). The current assistance for performing the RA motion is carried out with a virtual mechanism, using the force/torque sensor at the robot's wrist. The servo loop runs at 500 Hz and is implemented in the TAO2000 framework [22].

Experimental instructions. Each subject was instructed to perform each collaborative motion 5 times. Although the virtual mechanism prevents them from following the TY, TX and RO trajectories, they have to intend to follow them. Every test lasted 5 s.

Data acquisition. The force and torque at the user's handle as well as the position of each robot joint were recorded at 500 Hz. The measurements of the robot joint positions were used to compute the user's handle position, in order to obtain the displacement of the operator's hand. Data from 15 randomly selected participants constitute the training dataset. Data from 8 participants randomly selected among the remaining 15 participants are used for the cross-validation dataset. A test is carried out with the data of the remaining 7 participants.

Fig. 4. Experimental apparatus and drawn motions.

C. Results

Learning step. The different sets of parameters S_W have been computed. As an example, Fig. 5 presents the estimated parameter μ̂_{FX|TX} for each set S_W. We can see that the larger the window size, the higher the mean of the force FX. Using a small window for the learning step therefore allows the intention to be predicted before the operator applies too much effort. Then, we carried out a cross-validation in order to choose the best set of parameters. Fig. 6 presents the performance metric according to the different confidence levels γ for each set of parameters S_W. For most of the parameter sets, the higher the confidence level, the higher the performance index. Moreover, most of the time, the larger the window size W, the higher the performance index. Nevertheless, poor performances are obtained when a 100% confidence level is required. A larger window size might be necessary to obtain good performances with a 100% confidence level, which is not acceptable since the user would have to apply very high efforts. Assuming that the operator will always choose a high confidence level, Fig. 7 presents the MCC mean for the confidence level interval Iγ = [0.75; 0.99]. The red bar refers to the chosen set of parameters S* with the highest performance index. As an example, Table I presents the chosen parameter sets S*_{FX|Ck} for the feature FX given each of the four intentions of motion C = {TY, TX, RO, RA}. As expected from the human-robot haptic interaction model presented in [9], a significant FX mean value is obtained for the TX intention while non-significant FX means are obtained for the other intentions.

Fig. 5. Mean of the FX feature given the intention TX for each set of parameters S_W, i.e. the evolution of μ̂_{FX|TX} according to the length of the demonstration signals used for the estimation.

Fig. 6. Classification performance according to the degree of certainty. Each curve represents the classification performance obtained with a set of parameters S_W.

Fig. 7. Classification performance obtained with each set of parameters S_W for the interval of certainty Iγ = [0.75; 0.99].

TABLE I
EXAMPLE OF SETS OF PARAMETERS S*_{FX|Ck} FOR THE FEATURE FX GIVEN EACH OF THE FOUR INTENTIONS OF MOTION

Parameters \ Intention    TX      TY      RO      RA
μ̂FX (N)                  6.63    1.42    1.2     1.22
σ̂FX (N)                  3.95    1.77    1.63    0.99
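As an illustration of how these estimates feed the classifier, the short sketch below scores a single made-up FX reading against the Table I parameters with uniform priors; the measurement value and the use of only the FX feature are ours, purely for illustration.

```python
import numpy as np

# Table I estimates for the feature FX: intention -> (mean, std) in newtons.
table_I = {"TX": (6.63, 3.95), "TY": (1.42, 1.77), "RO": (1.20, 1.63), "RA": (1.22, 0.99)}

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

fx = 5.0  # hypothetical force reading along the x-axis (N)
unnorm = {c: 0.25 * gaussian_pdf(fx, mu, s) for c, (mu, s) in table_I.items()}  # uniform priors
Z = sum(unnorm.values())
posteriors = {c: v / Z for c, v in unnorm.items()}
print(posteriors)  # TX obtains the largest posterior for this reading
```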
Prediction step. Tables II and III present the classification results for two levels of certainty γ = {0.75; 0.9}.
TABLE II
CONFUSION MATRIX OBTAINED WITH γ = 0.75

                             Predicted intention
                          TX     TY     RO     RA
Actual intention   TX     35      0      0      0
                   TY      1     34      0      0
                   RO      1      0     33      1
                   RA      1      4      0     30
TABLE III
CONFUSION MATRIX OBTAINED WITH γ = 0.9

                             Predicted intention
                          TX     TY     RO     RA
Actual intention   TX     35      0      1      0
                   TY      0     35      0      2
                   RO      0      0     33      0
                   RA      0      0      1     33
It can be noted that the higher the confidence level, the fewer switching errors occur. For a confidence level γ = 0.75, the performance index is MCC = 0.92, with 8 wrong detections over the 140 trials, i.e. a 92.8% success rate. For a confidence level γ = 0.9, the performance index is MCC = 0.96, with 4 wrong detections over the 140 trials, i.e. a 97% success rate. Fig. 8 shows the posterior probabilities computed during an on-line recognition. First, the robot provides the assistance for performing the RA motion. Then, it detects the operator's intention of a TX motion and thus selects the corresponding assistance. The confidence level γ = 0.98 was arbitrarily chosen by the operator.

Fig. 8. Example of TX detection from a current assistance for performing RA. The decision rule on the wrench measures allows the next intention to be detected.

V. CONCLUSIONS

This paper highlighted the ability of the robot to select on-line the suitable assistance for the currently intended collaborative motion, according to the haptic cues naturally transmitted by the human partner. Thus, the operator can benefit from a sequence of suitable assistances generated on-line, which allows a wide range of tasks to be performed with assistance while the robot has no prior knowledge about the task. The method for tuning the intention detection parameters can be applied without demonstrating the switch between motions. As no signal length is suitably defined by the demonstrations, the proposed method selects the optimal number of demonstration data points to be used for estimating the intention detection parameters, in order to obtain the best detection performance.

A limitation of this method is the batch learning mode, which requires a whole arbitrary dataset to be provided. An interesting perspective would be to extend the method so that it does not require time-consuming extra demonstrations. Furthermore, instead of encapsulating the variability of the human partner through demonstrations by different subjects, an interesting improvement would be the ability of the robot to continuously adapt the detection to the human partner's variability.

REFERENCES
[1] S. Calinon, P. Evrard, E. Gribovskaya, A. Billard, and A. Kheddar, “Learning collaborative manipulation tasks by demonstration using a haptic interface,” in ICAR, Jun. 2009, pp. 1 –6. [2] J. Medina, T. Lorenz, D. Lee, and S. Hirche, “Disagreement-aware physical assistance through risk-sensitive optimal feedback control,” in IEEE/RSJ IROS, 2012, pp. 3639–3645. [3] T. Takubo, H. Arai, and K. Tanie, “Human-robot cooperative handling using virtual nonholonomic constraint in 3-D space,” in IEEE ICRA, vol. 3, 2001, pp. 2680–2685. [4] S. Yigit, C. Burghart, and H. Worn, “Co-operative carrying using pumplike constraints,” in IEEE/RSJ IROS, vol. 4, 2006, pp. 3877–3882. [5] T. Wojtara, M. Uchihara, H. Murayama, S. Shimoda, S. Sakai, H. Fujimoto, and H. Kimura, “Human-robot collaboration in precise positioning of a three-dimensional object,” Automatica, vol. 45, no. 2, Feb. 2009. [6] A. Mortl, M. Lawitzky, A. Kucukyilmaz, M. Sezgin, C. Basdogan, and S. Hirche, “The role of roles: Physical cooperation between humans and robots,” The International Journal of Robotics Research, Aug. 2012. [7] N. Jarrasse, J. Paik, V. Pasqui, and G. Morel, “How can human motion prediction increase transparency?” in IEEE ICRA., May 2008. [8] J. Colgate, “The control of dynamically interacting systems,” Ph.D. dissertation, Massachusetts Institute of Technology, 1988. [9] J. Dumora, F. Geffard, C. Bidard, T. Brouillet, and P. Fraisse, “Experimental study on haptic communication of a human in a shared humanrobot collaborative task,” in IEEE/RSJ IROS, 2012. [10] J. Dumora, F. Geffard, C. Bidard, and P. Fraisse, “Towards a robotic partner for collaborative manipulation,” in HRI Workshop on Collaborative Manipulation, Mar. 2013. [11] L. Joly, C. Andriot, and V. Hayward, “Mechanical analogies in hybrid position/force control,” in IEEE ICRA, vol. 1, Apr. 1997, pp. 835–840. [12] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning. Springer, Jul. 2003. [13] A. Ng and M. Jordan, “On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes,” in Neural Information Processing Systems, 2002, pp. 841–848. [14] D. Hand and K. Yu, “Idiot’s bayes-not so stupid after all?” International Statistical Review, vol. 69, no. 3, pp. 385–398, 2001. [15] H. Zhang, “The optimality of naive bayes,” in FLAIRS, 2004. [16] L. Kuncheva, “On the optimality of naive bayes with dependent binary features,” Pattern Recognition Letters, vol. 27, no. 7, May 2006. [17] M. Sampat, A. Patel, Y. Wang, S. Gupta, C. Kan, A. Bovik, and M. Markey, “Indexes for three-class classification performance assessment–an empirical comparison,” IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 3, pp. 300–312, May 2009. [18] B. Matthews, “Comparison of the predicted and observed secondary structure of t4 phage lysozyme.” Biochimica et biophysica acta, vol. 405, no. 2, pp. 442–451, Oct. 1975. [19] G. Jurman, S. Riccadonna, and C. Furlanello, “A comparison of MCC and CEN error measures in multi-class prediction,” PLoS ONE, vol. 7, no. 8, Aug. 2012. [20] P. Baldi, S. Brunak, Y. Chauvin, C. Andersen, and H. Nielsen, “Assessing the accuracy of prediction algorithms for classification: an overview,” Bioinformatics (Oxford, England), vol. 16, no. 5, May 2000. [21] J. Gorodkin, “Comparing two k-category assignments by a k-category correlation coefficient,” Computational Biology and Chemistry, vol. 28, no. 56, pp. 367–374, Dec. 2004. [22] F. Geffard, P. Garrec, G. Piolain, M. Brudieu, J. Thro, A. 
Coudray, and E. Lelann, “TAO2000 v2 computer assisted force feedback telemanipulators used as maintenance and production tools at the AREVA NC La hague fuel recycling plant,” Journal of Field Robotics, vol. 29, no. 1, pp. 161–174, Jan. 2012.