Tactile Object Recognition with Semi-supervised Learning

Shan Luo, Xiaozhou Liu, Kaspar Althoefer, and Hongbin Liu

Department of Informatics, King's College London, London, UK
{shan.luo,xiaozhou.liu,k.althoefer,hongbin.liu}@kcl.ac.uk
Abstract. This paper introduces a novel approach to recognizing objects from tactile images by utilizing semi-supervised learning. In tactile object recognition, the available data are normally insufficient to build robust training models. Thus the Ensemble Manifold Regularization model, which combines concepts of multi-view learning and semi-supervised learning, is adapted to tactile sensing to achieve better recognition accuracy. The outputs of the classic bag-of-words model with different dictionary sizes are treated as different views, from which an optimized representation is produced by multiple-graph learning optimization. In the experiments, 12 objects were used to compare the classification performance of our proposed approach with that of the classic BoW model; the results show that our proposed method outperforms the classic BoW framework and that objects with similar features can be better classified.

Keywords: Tactile sensors · Object recognition · Robot tactile systems
1 Introduction
In the last few decades, many efforts have been made to endow robots with tactile sensing capabilities akin to those of humans. On the hardware side, multiple tactile sensors have been developed using various sensing principles [1]–[5], e.g., capacitive, piezoelectric, optical, resistive and ultrasonic. In the meantime, researchers have worked on decoding the messages conveyed by tactile sensors to retrieve information about the touched object, such as contact locations [6], [7], slippage [8], object pose [9] and surface textures [10]–[12]. As defined in [13], object characteristics can mainly be divided into two categories, i.e., material and geometric properties. The material properties can be recovered all at once (off-line) or over time (on-line) by exploiting the force frequency variance measured with force or tactile sensing and employing different machine learning techniques [14]–[16]. Geometric properties can be further divided into size and shape. The former is easy to characterize using multiple metrics, e.g., length, width, height, diameter, perimeter, area and volume. However, the object shape is particularly difficult to characterize due to its complex nature. It can be studied at two different contact scales, i.e., local and global. The local shape sheds light on partial information about the object, namely, shapes that can fit within a fingertip tactile sensor. The global shape refers to the overall shape of the
object and generally extends beyond the fingertip scale. Though local tactile features have been studied in several recent works [16]–[20], only a few approaches are available for recovering the global object shape. In our previous work [19], a novel Tactile-SIFT descriptor was extracted from local tactile images and a bag-of-features (BoF) framework, widely used in tactile object recognition [17], [21]–[23], was applied to classify global shapes. In the classic BoF framework the optimal dictionary size is decided by trial and error and remains fixed regardless of the variance of the collected data. However, intuitively, the dictionary size should tend to grow as the amount of data grows, much as a child accumulates vocabulary while growing up and exploring the ambient world. Inspired by this, in this paper we propose a novel method that processes the obtained tactile readings from various views (different BoW streams) and applies a semi-supervised learning framework. In our proposed approach a specific dictionary size does not need to be fixed in advance, and a superior recognition performance was achieved in our experimental evaluation. Fig. 1 depicts our experimental system, in which a Weiss Robotics tactile array sensor WTS 0614-34 with 6×14 sensing elements is attached to a SensAble PHANTOM Omni device; an example tactile reading is also illustrated in Fig. 1. The remainder of this paper is organized as follows. Related work in tactile object recognition is briefly reviewed in Section 2. The proposed recognition system is introduced in detail in Section 3. The test rig and data pre-processing approaches are described in Section 4, and the results and discussion are presented in Section 5. Finally, the paper is concluded and possible future research directions are proposed in Section 6.
Fig. 1. Left: the experimental set-up; a tactile sensor is attached to the joystick of the Phantom haptic device. Right: an example tactile reading.
2 Related Work
Due to the low resolution of early tactile sensors, researchers generally employed one-point force sensors to retrieve object shapes from contact points, and computer graphics techniques were thus widely utilized. Allen et al. collected contact points during the interaction between objects and fingertips and fit them to superquadric surfaces
[24]. In this work the normal directions at contact points were utilized to infer the geometric parameters of object surfaces. Charlebois et al. employed a similar scheme in [25] but used tensor B-spline surfaces as the geometric models instead. In [26], a polyhedral model was acquired to recover unknown object shapes by making use of the contact locations and hand pose. Though arbitrary contact shapes can be retrieved with this kind of approach, key features cannot be revealed and it can be time consuming when investigating large object surfaces. As the spatial resolution and spatiotemporal response of tactile sensors have increased [1], researchers have in recent years tended to utilize tactile array sensors to extract shape information. In [27], each tactile reading was treated as a matrix to which principal component analysis was applied; three principal axes were thereby acquired and fed into a Naïve Bayes classifier to recognize different geometric shapes. In [18], raw tactile readings were transformed into 512-element vectors and classified by a neural network. There is a trend to apply methods originating from computer vision to tactile object recognition, treating tactile arrays as sparse images. In this type of method, descriptors are first extracted from tactile images to represent them. Schneider et al. [22] took tactile images as features directly and introduced the bag-of-features framework into object recognition with tactile sensing. Various feature descriptors from computer vision have been studied and their performance in tactile sensing compared [17]. In our previous work [19], [20], a new Tactile-SIFT descriptor suitable for processing tactile images was proposed by reformatting the Scale Invariant Feature Transform (SIFT) descriptor [28], and good performance was achieved. In this paper, we follow [19], [20] and use Tactile-SIFT descriptors. There are only a few approaches available to recover the global identity of an object from collected local tactile images. In [29], a mosaic method was proposed to synthesize tactile patches to reveal the object-level shape, using both histogram and particle filters; the test objects were a set of raised letters. However, due to its mosaic nature, a large number of contacts is needed. Another widely employed approach, called the bag-of-features framework, is to apply unsupervised learning to create a codebook/dictionary of tactile features for the objects [17], [21]–[23]. In this method the optimal dictionary size has to be chosen in the training process by trial and error and remains constant while classifying test objects. However, the dictionary size is generally expected to vary as the amount of data changes. Therefore, in this paper a novel approach is proposed that processes the obtained tactile readings from various views (different BoW streams) and applies a semi-supervised learning framework to tactile object recognition.
3 System Design

3.1 System Overview
As only local shape information can be perceived from a single fingertip-object contact, the robot needs to contact the object multiple times to build a global model. To begin with, the classic bag-of-words framework is briefly introduced here. Descriptors are first extracted from the collected low-resolution intensity
tactile images, and then k-means clustering is used to generate a dictionary from the training dataset. Histograms of word occurrences for the object classes can then be created, and the robot can use those distributions to identify an object by touching it several times and comparing its occurrence histogram with the histograms in the database; a sketch of this BoW part of the pipeline is given below. Different from the classic BoW framework, in our proposed system the collected data are divided into several groups and a bag-of-words model is applied to each group. In this way, we treat the collected data as multiple views of the object. As different feature spaces are formed through different views, they have particular statistical properties. We propose to employ ensemble manifold regularization (EMR) [30] to explore the intrinsic manifold for multi-view learning, as illustrated in Fig. 2. The EMR framework assumes that there is a low-dimensional manifold which supports the geometry of the intrinsic data probability distribution. Using the Laplacian Eigenmap, the Laplacian of the adjacency graph is generated in an unsupervised manner from each data set. Based on a group of initial guesses of the graph Laplacian, the manifold is approximated by combining these initial guesses with alternating optimization, and the optimal intrinsic manifold is thus obtained; the acquired optimal intrinsic manifold is then fed into the graph-based semi-supervised classifier. More details of the EMR framework, especially the mathematical derivation, can be found in [30].
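As a concrete illustration, the following is a minimal sketch of the dictionary and histogram steps of the classic BoW pipeline using scikit-learn; the helper names and the dictionary size of 50 are our own illustrative choices, not the paper's implementation.

```python
# Minimal sketch of the classic BoW steps described above (assumed helper
# names; the dictionary size of 50 is illustrative, not the paper's choice).
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(train_descriptors, n_words=50):
    """Cluster all training descriptors into a dictionary of 'tactile words'."""
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    kmeans.fit(np.vstack(train_descriptors))  # stack descriptors from all images
    return kmeans

def bow_histogram(object_descriptors, kmeans):
    """Represent one object (descriptors from several touches) as a normalized
    histogram of word occurrences."""
    words = kmeans.predict(object_descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()  # normalize so objects with more touches compare fairly
```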
Fig. 2. The framework of EMR. The training data are first divided into several groups and a graph Laplacian matrix is obtained from each data set; by applying alternating optimization, the optimal intrinsic manifold is acquired and used to train the graph-based semi-supervised classifier.
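To make the alternating optimization step more tangible, the sketch below shows a simplified version of the EMR idea from [30]: candidate graph Laplacians are combined with non-negative weights that sum to one, alternating between a semi-supervised label-propagation step and a graph re-weighting step. The ridge term on the weights and the clipping-based projection are our simplifications of the full formulation in [30].

```python
# Simplified sketch of the EMR combination step (not the exact algorithm of
# [30]): alternate between solving the semi-supervised labels F for the current
# weighted Laplacian and re-weighting the candidate graphs. The ridge term
# gamma*||w||^2 discourages collapsing onto a single graph; clipping negative
# weights and renormalizing is a simplification of a proper simplex projection.
import numpy as np

def emr_combine(laplacians, Y, mu=0.1, gamma=1.0, n_iter=10):
    m = len(laplacians)
    n = laplacians[0].shape[0]
    weights = np.full(m, 1.0 / m)              # start from a uniform combination
    for _ in range(n_iter):
        L = sum(w * Lj for w, Lj in zip(weights, laplacians))
        # Graph SSL step in closed form: (L + mu*I) F = mu*Y.
        F = mu * np.linalg.solve(L + mu * np.eye(n), Y)
        # Smoothness of F on each graph; smoother (smaller) -> larger weight.
        s = np.array([np.trace(F.T @ Lj @ F) for Lj in laplacians])
        w = -s / (2 * gamma) + (1 + s.sum() / (2 * gamma)) / m
        w = np.maximum(w, 0)
        weights = w / w.sum()
    return F, weights
```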
3.2 Feature Extraction
Inspired by the Scale Invariant Feature Transform (SIFT) descriptor [28] from computer vision, our descriptors are formed using gradient directions. In visual image processing, an object can appear at different scales in different images, so the classic SIFT algorithm builds a scale pyramid to make descriptors scale invariant. Tactile sensing, however, maps the real dimensions of pressed objects, so tactile images do not need to be scaled and the scale pyramid is removed. Besides, each tactile image contains limited information and far fewer distinctive features than a visual image, so keypoint localization is also eliminated [19]. To make the features more robust, each tactile image is divided into three overlapping regions of the same size, and one 128-dimensional SIFT descriptor is extracted for each region, taking the region centers as "key points", as shown in Fig. 3; a simplified sketch follows the figure.
Fig. 3. One tactile image is segmented into three overlapping sub-patches of the same size and one descriptor is extracted from each sub-patch.
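A simplified sketch of this descriptor extraction is given below: each tactile image is split into three overlapping sub-patches, and each sub-patch yields a 4×4 grid of 8-bin gradient-orientation histograms (128 values). The Gaussian weighting and trilinear interpolation of the full SIFT formulation [28] are omitted, and the window layout is an assumption based on Fig. 3.

```python
# Simplified Tactile-SIFT-style extraction (layout assumed from Fig. 3; the
# Gaussian weighting and interpolation of full SIFT [28] are omitted).
import numpy as np

def subpatch_descriptor(patch, grid=4, bins=8):
    """4x4 grid of 8-bin gradient-orientation histograms -> 128 values."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)      # orientation in [0, 2*pi)
    rows = np.array_split(np.arange(patch.shape[0]), grid)
    cols = np.array_split(np.arange(patch.shape[1]), grid)
    desc = []
    for r in rows:
        for c in cols:
            hist, _ = np.histogram(ang[np.ix_(r, c)], bins=bins,
                                   range=(0, 2 * np.pi),
                                   weights=mag[np.ix_(r, c)])
            desc.append(hist)
    desc = np.concatenate(desc)
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc

def tactile_sift(image, n_patches=3):
    """One descriptor per overlapping sub-patch; no scale pyramid and no
    keypoint detection, as motivated above."""
    step = image.shape[0] // (n_patches + 1)         # overlapping vertical windows
    return [subpatch_descriptor(image[i * step : i * step + 2 * step])
            for i in range(n_patches)]
```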
3.3 Graph-Based Semi-supervised Classifier
In visual object recognition, images can be easily accessed and collected from the Web, so supervised classification algorithms such as K-nearest Neighbors (KNN) and Support Vector Machines (SVMs) are widely used; these assume that an accurate and complete set of class definitions is given and that a decision rule for each class can be learned from the training data. In contrast, tactile training data are hard to acquire and few tactile object models are available. Thus a graph-based semi-supervised classifier is employed in our case, which exploits the available labeled training models and also makes use of the unlabeled test models. As discussed in [31], graphs can be applied to different machine learning tasks, e.g., classification, ranking, and embedding. Graph-based semi-supervised learning can be formulated as a regularization framework:

$$\min_{F}\; \Omega(F) + \mu\, L_{emp}(F), \qquad (1)$$

where $F$ is the classification function, $\Omega(F)$ is a regularizer on the graph, $L_{emp}(F)$ is an empirical loss, and $\mu$ is a tradeoff parameter that balances the empirical loss against the regularizer. For a $c$-class classification problem, the regularizer can be defined as:

$$\Omega(F) = \sum_{i=1}^{c} f_i^{T} L f_i, \qquad (2)$$

where $f_i$ is the relevance score vector for the $i$th class, i.e., the $i$th column of $F$, and $L$ is the graph Laplacian matrix. The classification loss is

$$L_{emp}(F) = \sum_{i=1}^{c} \lVert f_i - y_i \rVert^{2}, \qquad (3)$$

where $y_i$ is the label vector for the $i$th class. Therefore, the following objective function is defined to obtain $F$:

$$F^{*} = \arg\min_{F} \sum_{i=1}^{c} \left( f_i^{T} L f_i + \mu \lVert f_i - y_i \rVert^{2} \right). \qquad (4)$$

The details of building the graph Laplacian matrix are introduced as follows. Given a dataset $X = \{x_1, x_2, \ldots, x_n\}$, its graph Laplacian matrix can be constructed as:

$$L = D - W, \qquad (5)$$

where $W_{jk} = \exp\left( -\lVert x_j - x_k \rVert^{2} / (2\sigma^{2}) \right)$ if $x_j$ is among the $k$-nearest neighbors of $x_k$ or vice versa, and $W_{jk} = 0$ otherwise. Besides, $D$ is a diagonal matrix calculated by $D_{jj} = \sum_{k} W_{jk}$. In this paper, we adopt a normalized graph Laplacian matrix obtained by normalizing $L$:

$$\tilde{L} = D^{-1/2} L\, D^{-1/2} = I - D^{-1/2} W D^{-1/2}, \qquad (6)$$

where $I$ is an identity matrix. Hence, we can rewrite Eq. (4) as:

$$F^{*} = \arg\min_{F} \sum_{i=1}^{c} \left( f_i^{T} \tilde{L} f_i + \mu \lVert f_i - y_i \rVert^{2} \right). \qquad (7)$$

A numerical sketch of this construction and of solving Eq. (7) is given below.
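The following is a minimal sketch of Eqs. (5)-(7) in Python with NumPy and SciPy. Setting the gradient of Eq. (7) to zero yields the linear system $(\tilde{L} + \mu I)F = \mu Y$, which is solved directly here; the parameter values (k, sigma, mu) are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch of Eqs. (5)-(7) (parameter values are illustrative).
import numpy as np
from scipy.spatial.distance import cdist

def normalized_laplacian(X, k=5, sigma=1.0):
    """Heat-kernel affinities on a symmetric k-NN graph, then Eq. (6)."""
    d2 = cdist(X, X, "sqeuclidean")
    W = np.exp(-d2 / (2 * sigma ** 2))
    idx = np.argsort(d2, axis=1)[:, 1 : k + 1]       # k nearest, skipping self
    mask = np.zeros_like(W, dtype=bool)
    mask[np.repeat(np.arange(len(X)), k), idx.ravel()] = True
    W = W * (mask | mask.T)                          # "among the k-NN or vice versa"
    D_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
    return np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

def graph_ssl(X, Y, mu=0.1, k=5, sigma=1.0):
    """Y is n x c: one-hot rows for labeled samples, all-zero rows otherwise."""
    L = normalized_laplacian(X, k, sigma)
    F = mu * np.linalg.solve(L + mu * np.eye(len(X)), Y)   # minimizer of Eq. (7)
    return F.argmax(axis=1)                                # predicted class labels
```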
4 Data Extraction and Pre-processing
In our experimental set-up, the tactile images are generated by a Weiss tactile sensor attached to a Phantom Omni manipulator arm. The tactile sensor has 84 sensing cells arranged in 14 rows and 6 columns, with an overall size of 51 mm × 24 mm. Elastic rubber foam covers the sensor to conduct the applied force. As the sensor is small compared to the objects chosen for this test, the objects need to be touched multiple times at different positions to collect all features. As the robot arm interacts with an object, the pressure distribution over the object surface is transformed into a tactile image by the tactile sensor. In total, 12 objects are used to validate our algorithm, i.e., plier, fixture, ball, wheel, tweezers, point array, wide plier, thin plier, scissors, plug, character E and long
scissor, as illustrated in Fig. 4. They are all taken from daily life or the lab environment, except the character E, which is a 3D-printed object with three-dimensional features. Some of the objects share similar features, which increases the difficulty of object recognition, e.g., the different types of pliers. Several sample tactile readings of the character E, ball and plier are shown in Fig. 5, in which some prominent features of these objects can be observed. For every object, the exploration procedure was repeated 5 times, and during each procedure the planar surface of the sensor was kept normal to the object surface so as to cover the entire object surface. The first four repetitions were taken as the training set and the last one as the test set. As a result, a total of 1100 to 2483 tactile images, depending on the object size, was collected for each object. To verify that only a few touches are needed to recognize objects, m patches from the last procedure were sampled randomly for each test set and each object; this sampling was repeated to form 25 test sets. The raw tactile readings are processed as follows (a sketch is given below). If the maximum value of a reading is lower than a pre-defined threshold, or the sum of all the elements in the reading is smaller than a decision value, the reading is considered to have been collected unintentionally and is removed. To minimize the effect of the non-linear characteristics of the tactile sensor, the readings are then normalized by the maximum element value of each reading to achieve consistency in the dynamic range of the collected tactile measurements; hence, the normalized values fall into the range [0, 1].
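A minimal sketch of this pre-processing follows; the threshold values are assumptions, since the paper does not report them.

```python
# Sketch of the pre-processing above; threshold values are assumptions since
# the paper does not report them.
import numpy as np

def preprocess(readings, peak_thresh=10.0, sum_thresh=50.0):
    kept = []
    for r in readings:                    # each r: 14x6 array of raw cell values
        if r.max() < peak_thresh or r.sum() < sum_thresh:
            continue                      # likely an unintentional contact: drop it
        kept.append(r / r.max())          # per-reading normalization -> [0, 1]
    return kept
```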
Fig. 4. The object pool. The objects are selected from daily life or the lab environment. The name and the assigned number are given below each object image.
Fig. 5. Sample tactile readings collected from (a) character E, (b) ball and (c) plier.
5 Experimental Results and Analysis
To demonstrate the superiority of our proposed approach, a comparative study was carried out between the classic BoW framework and our modified method with EMR semi-supervised learning. Here 10 tactile readings are used for each object in each test set. The dictionary size is a free parameter of the bag-of-words model, and different dictionary sizes yield different performance. Ensemble Manifold Regularization (EMR) was therefore introduced to optimize various BoW streams with different dictionary sizes, and the semi-supervised classification algorithm in this new model was employed to classify the objects (a combined sketch of this pipeline is given after Table 1). The overall recognition rates are listed in Table 1. Two conclusions can be drawn from the results: 1) in general, as the dictionary size increases, the objects are classified more reliably; 2) the accuracy obtained with the EMR model is higher than that of any individual BoW stream, so the objects are better recognized with the EMR framework and semi-supervised learning, which shows that the different BoW streams are integrated to reach a better conclusion.

Table 1. Overall recognition rates with various dictionary sizes or with the EMR framework
Dict. size                   10     20     30     40     50     60     EMR
Overall recognition rate/%   61.3   68.7   65.3   68.3   76.7   79.3   83.3
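Putting the earlier sketches together, the comparison in Table 1 could be reproduced along the following lines: one BoW stream per dictionary size, one graph Laplacian per stream, and EMR to fuse them. This reuses the illustrative helpers defined above and is a sketch of the experimental pipeline, not the authors' exact code.

```python
# Sketch of the full comparison pipeline, reusing the illustrative helpers
# defined earlier (build_dictionary, bow_histogram, normalized_laplacian,
# emr_combine); dictionary sizes follow Table 1.
import numpy as np

def emr_recognition(train_descs, test_descs, train_labels, n_classes=12):
    laplacians = []
    for n_words in (10, 20, 30, 40, 50, 60):       # one BoW view per dictionary size
        kmeans = build_dictionary(train_descs, n_words)
        hists = np.array([bow_histogram(d, kmeans)
                          for d in train_descs + test_descs])
        laplacians.append(normalized_laplacian(hists))
    # Labeled rows of Y are one-hot; unlabeled (test) rows stay all-zero.
    Y = np.zeros((len(train_descs) + len(test_descs), n_classes))
    Y[np.arange(len(train_labels)), train_labels] = 1
    F, _ = emr_combine(laplacians, Y)
    return F[len(train_descs):].argmax(axis=1)     # predicted labels for test objects
```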
The classification performance for each object was also studied and is shown in Fig. 6. For recognition with the BoW framework, the best recognition rate among the different dictionary sizes was selected for comparison with the performance of the EMR framework and semi-supervised learning. It can be noticed that, by applying our proposed approach, the classification performance for each object is improved, especially for objects with similar features. In more detail, taking the pliers of different types (object numbers 6, 7, 8) as an example, they were weakly classified by the single BoW model, but when the multiple BoW streams were combined in the EMR framework a better performance was achieved. The same conclusion applies to the scissors of different types (object numbers 11, 12).
Fig. 6. Comparison of recognition rates for each object with a single BoW stream or with the EMR framework and semi-supervised learning.
The confusion matrix in Table 2 shows that an overall accuracy of 83.3% was obtained. It can be noticed that most objects are recognized successfully; however, some are still assigned wrong labels, especially those with similar local shapes, e.g., the different pliers (6, 7, 8) or scissors (11, 12). These objects are expected to be further distinguished by utilizing the distribution of their local features in future work.

Table 2. Confusion matrix of object recognition with the EMR framework (rows: true object; columns: predicted object).
      1     2     3     4     5     6     7     8     9     10    11    12
1     1     0     0     0     0     0     0     0     0     0     0     0
2     0     1     0     0     0     0     0     0     0     0     0     0
3     0     0     1     0     0     0     0     0     0     0     0     0
4     0     0     0     1     0     0     0     0     0     0     0     0
5     0     0     0     0     1     0     0     0     0     0     0     0
6     0     0     0     0     0     .70   .22   .08   0     0     0     0
7     0     0     0     0     0     .12   .88   0     0     0     0     0
8     0     0     0     0     0     .06   .28   .60   0     0     0     .06
9     0     0     0     0     0     0     0     0     1     0     0     0
10    0     0     0     0     .40   0     0     0     0     .60   0     0
11    0     0     0     0     0     0     0     0     0     0     .55   .45
12    0     0     0     0     0     0     0     0     0     0     .35   .65
6 Conclusion and Discussion
This paper has presented a new approach to recognizing objects with tactile pressure arrays. As the widely used bag-of-words model needs its dictionary size to be selected by trial and error in the training phase, we propose to utilize semi-supervised learning to integrate the information explored by different BoW models. Thus Ensemble Manifold Regularization is applied to combine multiple bag-of-words models with different dictionary sizes, and a semi-supervised learning framework is employed. In the experiments 12 objects were used to compare the classification performance of our proposed approach and the classic BoW model. The experimental results show that our proposed method outperforms the classic BoW framework and that objects with similar features can be better classified. There are a few directions in which to extend the current research. In this paper only individual tactile readings are used, but in general the "relative distance" between different tactile features is expected to provide more information, so a better recognition performance could be obtained. Further work can also focus on a better way of selecting readings that satisfies both of the following goals: (a) the different features should be covered as thoroughly as possible; (b) while maintaining a high accuracy rate, the number of images should be reduced as much as possible.
References

1. Dahiya, R.S., Metta, G., Valle, M., Sandini, G.: Tactile Sensing––From Humans to Humanoids. IEEE Trans. Robot. 26(1), 1–20 (2010)
2. Dahiya, R.S., Mittendorfer, P., Valle, M., Cheng, G., Lumelsky, V.J.: Directions Toward Effective Utilization of Tactile Skin: A Review. IEEE Sens. J. 13(11), 4121–4138 (2013)
3. Xie, H., Liu, H., Luo, S., Seneviratne, L.D., Althoefer, K.: Fiber optics tactile array probe for tissue palpation during minimally invasive surgery. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2539–2544 (2013)
4. Liu, H., Puangmali, P., Zbyszewski, D., Elhage, O., Dasgupta, P., Dai, J.S., Seneviratne, L., Althoefer, K.: An Indentation Depth-force Sensing Wheeled Probe for Abnormality Identification during Minimally Invasive Surgery. Proc. Inst. Mech. Eng. H 224(6), 751–763 (2010)
5. Althoefer, K., Zbyszewski, D., Liu, H., Puangmali, P., Seneviratne, L., Challacombe, B., Dasgupta, P., Murphy, D.: Air-cushion force-sensitive probe for soft tissue investigation during minimally invasive surgery. J. Endourol. 23(9), 1421–1424 (2009)
6. Liu, H., Nguyen, K.C., Perdereau, V., Bimbo, J., Back, J., Godden, M., Seneviratne, L.D., Althoefer, K.: Finger contact sensing and the application in dexterous hand manipulation. Auton. Robots 39(1), 25–41 (2015)
7. Luo, S., Mou, W., Althoefer, K., Liu, H.: Localizing the object contact through matching tactile features with visual map. In: IEEE International Conference on Robotics and Automation (ICRA) (2015, to appear)
8. Song, X., Liu, H., Althoefer, K., Nanayakkara, T., Seneviratne, L.D.: Efficient Break-Away Friction Ratio and Slip Prediction Based on Haptic Surface Exploration. IEEE Trans. Robot. 30(1), 203–219 (2014)
9. Bimbo, J., Seneviratne, L.D., Althoefer, K.: Combining touch and vision for the estimation of an object's pose during manipulation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4021–4026 (2013)
10. Meier, M., Schöpfer, M., Haschke, R., Ritter, H.: A Probabilistic Approach to Tactile Shape Reconstruction. IEEE Trans. Robot. 27(3), 630–635 (2011)
11. Martins, R., Ferreira, F., Dias, J.: Touch attention Bayesian models for robotic active haptic exploration of heterogeneous surfaces. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1208–1215 (2014)
12. Chathuranga, D.S., Ho, V.A., Hirai, S.: Investigation of a biomimetic fingertip's ability to discriminate fabrics based on surface textures. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), pp. 1667–1674 (2013)
13. Lederman, S.J., Klatzky, R.L.: Haptic perception: A tutorial. Attention, Perception, & Psychophysics 71(7), 1439–1459 (2009)
14. Kaboli, M., Mittendorfer, P., Hugel, V., Cheng, G.: Humanoids learn object properties from robust tactile feature descriptors via multi-modal artificial skin. In: IEEE-RAS International Conference on Humanoid Robots (Humanoids) (2014, to appear)
15. Decherchi, S., Gastaldo, P., Dahiya, R.S., Valle, M., Zunino, R.: Tactile-Data Classification of Contact Materials Using Computational Intelligence. IEEE Trans. Robot. 27(3), 635–639 (2011)
16. Liu, H., Song, X., Bimbo, J., Seneviratne, L., Althoefer, K.: Surface material recognition through haptic exploration using an intelligent contact sensing finger. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 52–57 (2012)
17. Pezzementi, Z., Plaku, E., Reyda, C., Hager, G.D.: Tactile-Object Recognition From Appearance Information. IEEE Trans. Robot. 27(3), 473–487 (2011)
18. Liu, H., Greco, J., Song, X., Bimbo, J., Althoefer, K.: Tactile image based contact shape recognition using neural network. In: IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), pp. 138–143 (2013)
19. Luo, S., Mou, W., Li, M., Althoefer, K., Liu, H.: Rotation and translation invariant object recognition with a tactile sensor. In: IEEE Sensors Conference, pp. 1030–1033 (2014)
20. Luo, S., Mou, W., Althoefer, K., Liu, H.: Novel Tactile-SIFT Descriptor for Object Shape Recognition. IEEE Sens. J. (2015, in press)
21. Ji, Z., Amirabdollahian, F., Polani, D., Dautenhahn, K.: Histogram based classification of tactile patterns on periodically distributed skin sensors for a humanoid robot. In: IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 433–440 (2011)
22. Schneider, A., Sturm, J., Stachniss, C., Reisert, M., Burkhardt, H., Burgard, W.: Object identification with tactile sensors using bag-of-features. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 243–248 (2009)
23. Madry, M., Bo, L., Kragic, D., Fox, D.: ST-HMP: unsupervised spatio-temporal feature learning for tactile data. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2262–2269 (2014)
24. Allen, P.K., Michelman, P.: Acquisition and interpretation of 3-D sensor data from touch. IEEE Trans. Robot. Autom. 6(4), 397–404 (1990)
25. Charlebois, M., Gupta, K., Payandeh, S.: Shape description of general, curved surfaces using tactile sensing and surface normal information. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2819–2824 (1997)
26. Casselli, S., Magnanini, C., Zanichelli, F.: On the robustness of haptic object recognition based on polyhedral shape representations. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 2, pp. 200–206 (1995)
27. Liu, H., Song, X., Nanayakkara, T., Seneviratne, L.D., Althoefer, K.: A computationally fast algorithm for local contact shape and pose classification using a tactile array sensor. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 1410–1415 (2012)
28. Lowe, D.G.: Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
29. Pezzementi, Z., Reyda, C., Hager, G.D.: Object mapping, recognition, and localization from tactile geometry. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 5942–5948 (2011)
30. Geng, B., Tao, D.: Ensemble Manifold Regularization. IEEE Trans. Pattern Anal. Mach. Intell. 34(6), 1227–1233 (2012)
31. Zhu, X., Ghahramani, Z., Lafferty, J.D.: Semi-supervised learning using Gaussian fields and harmonic functions. In: International Conference on Machine Learning (ICML), pp. 912–919 (2003)