In Proceedings of the 9th ESA Workshop on Advanced Space Technologies for Robotics and Automation 'ASTRA 2006' ESTEC, Noordwijk, The Netherlands, November 28-30, 2006
Can Statistics Help Walking Robots in Assessing Terrain Roughness? Platform Description and Preliminary Considerations

Luca Ascari∗†, Marco Ziegenmeyer‡, Paolo Corradi†, Bernd Gaßmann‡, Marius Zöllner‡, Rüdiger Dillmann‡ and Paolo Dario†

∗ IMT, Doctoral School in Biorobotics, Lucca, Italy
† CRIM Lab, Scuola Superiore Sant'Anna, Pisa, Italy
‡ FZI Forschungszentrum Informatik, Karlsruhe, Germany
E-mail: [email protected]
Abstract — Recognition of and adaptation to the terrain environment are key issues for autonomous and semi-autonomous robots devoted to terrestrial and planetary exploration. These capabilities are of utmost importance both for the optimization of path planning and for proper control of the robot motion. Gait control and adaptation are particularly important for robots based on legged locomotion. This work explores the possibility of predicting and managing terrain characteristics through the combined texture analysis of statistical vision content and of tactile information generated by a distributed sensing system under the robot feet. This approach would allow a continuous and reciprocal information feedback between visual and tactile-sensing data about the terrain and, consequently, a more efficient and reliable adaptation of the robot motion control during mission operations. In order to reduce the complexity of the computationally intensive texture analysis, a multidimensional terrain classifier based on Support Vector Machines (SVMs) is under development. Preliminary hardware characterizations and experimental evaluations have been performed on the hexapod robot LAURON IV, a universal platform for locomotion on rough terrain.
I. INTRODUCTION

Autonomous and semi-autonomous mobile robots devoted to surface exploration, both in terrestrial and space applications, should be able to move autonomously in a highly unstructured and unpredictable environment, typically rough, uneven and rugged. The interaction of the robot with its surroundings is among the most important factors for safe operation and optimal performance, affecting not only path planning but also motion control. Although the relation between terrain features and robot path planning has been studied for several years, the topic remains an open research theme, as proved by very recent works [1]–[3]. In addition, the capability to link terrain features and motion control becomes particularly important for robots with legged locomotion: it gives the possibility to adapt the gait to the surface morphology while optimizing critical factors such as speed, locomotion safety and power consumption. While fine motion control for wheeled rovers has already been demonstrated through visual odometry software systems (even on slippery slopes, as in [4]), this paper focuses on gait adaptation to terrain features for a legged robot. In particular, it reports the preliminary study and first-stage fabrication of a combined vision- and tactile-based gait controller for a legged robotic platform. The final locomotion control will continuously operate on the basis of two onboard systems:
• a classifier that classifies the terrain in front of the robot through an analysis of the statistical content of the images captured by the camera; this allows the robot to adapt its gait to the forthcoming terrain conditions;
• a distributed tactile-sensing system on the robot feet, able to validate or correct the prediction made by the vision system, thus allowing continuous real-time adaptation of the robot motion control. This feature not only provides more detailed information about the characteristics of the walked surface; above all, it lets the robot constantly compute a more efficient and reliable adaptation of the motion control, on the basis of a reciprocal information feedback between visual and tactile-sensing data about the terrain.
The problem of classifying the terrain intrinsically involves two conceptually different steps: the selection of a set of significant descriptive features, and the training of an appropriate classifier based on these features. Concerning the first step, vision is used for terrain feature estimation in several works. In [5], the traversability of a terrain is defined in terms of terrain parameters such as slope, roughness, discontinuity and hardness, and a set of algorithms based on edge extraction, region growing and morphological operations is presented: obstacles such as rocks are isolated depending on their size relative to the rock-climbing capabilities of the robot, and their density and spacing are used to assess the traversability of the terrain. A simpler approach to vision-based terrain classification is presented in [2], where the vertical displacement of a feature in the image plane is used to measure the gait bounce of the robot and, consequently, the roughness of the terrain. [3] presents the use of color stereo cameras and LADAR for obstacle
detection and terrain classification in natural scenes; a vision-based method, requiring additional cameras, for estimating sinkage in wheeled robots and measuring slip conditions is described in [1]. An alternative methodology, usable on wheeled robots, for on-line terrain parameter estimation based on measurements of torque and of angular and linear wheel velocities is presented in [6]. Moreover, the measurement of the vibrations of the wheel springs is used in [7] as a further evaluation parameter. Each of the methods developed so far presents positive and negative aspects. Information such as vibrations, torque, and linear and angular velocities is intrinsically suited to wheeled robots, for instance, while LADARs, among other drawbacks, are bulky and heavy. Cameras, generally available on (planetary) exploration robots, seem to be the most obvious source of information for terrain classification, although vision-based systems can be affected by constraints such as high computational requirements for real-time operation [3] and potential misclassifications due to shadows [8]. In this paper we present a flexible sensor array with optical coding of information: pressure on the sensors is coded into intensity-modulated optical signals, thus generating a tactile image. The algorithms used for both vision-based and tactile-based assessment of terrain features perform a texture analysis of both the camera image and the tactile image. Textures have been a challenging research theme for decades, since their first definition in the 70s by Hawkins [9], and humans were found to be highly sensitive to texturized content [10]. Despite the huge amount of work and interest in textures, the often large computational power required by their analysis has prevented researchers from exploiting these approaches in real-time robotic systems. Nevertheless, we decided to use texture-descriptive features because of their high informative content. The statistical approach is considered particularly suitable for natural textures: for example, previous works (see [11] for a review) show that for micro-textures (i.e., very finely detailed textures) a classification based on a statistical approach gives better results than other methods. In order to lower the computational requirements, the feature space consists of a set of approximated first-order moments, extracted from both the grabbed images and the tactile data. The robustness of the classifier and its performance when dealing with approximated feature values are therefore of crucial importance. For the classification of the terrain a variety of approaches is used in the literature. For example, in [5], [8], [12] artificial neural networks are used for the classification of terrain slope and hardness, and a set of fuzzy-logic rules is used for the assessment of traversability. [2] compares several different methods for terrain classification, including discriminant analysis (LDA, QDA and LogDA), Support Vector Machines and Hidden Markov Models. In this work we use Support Vector Machines (SVMs) [13], [14] for classifying the terrain. For training the SVM we use a set of manually segmented and preclassified images in which the terrain was classified as fine cluttered, compact, or rocky. From these images the textures are extracted and a set of parameters describing the approximated first-order statistics is computed. This set of statistical parameters is afterwards used as the input vector for the SVM.
The presented methodology was implemented and is being tested on the LAURON IV hexapod robot, a stick-insect-like walking machine with six equal legs, built as a universal platform for locomotion on rough terrain [15]. In the framework of this study, the feet are inserted in special endings whose bottom surface contains arrays of distributed analog pressure sensors, behaving like a tactile skin under the feet. The paper describes the approach, in both the vision and tactile fields, to statistical feature selection and extraction, as well as the application of the SVM-based classifier to tactile input. The fabrication of the flexible tactile skin is presented, as well as some preliminary results from its static characterization. The main result of this preliminary activity is to highlight the critical issues and to direct our future work in the field of combined vision-tactile assessment of terrain features for legged robots.

II. METHODS

A. The robotic platform

LAURON IV is a stick-insect-like hexapod walking machine designed for walking on rough terrain. It consists of a main body, a head and six equal legs with three degrees of freedom each (see figure 1). With all components mounted, its total weight is about 24.5 kg, and it can carry an additional payload of 10 kg. The length and width between the footsteps are about 120 × 100 cm and the typical distance between body and ground is 40 cm. It can reach a maximum velocity of 0.3 m/s and the batteries last for about 45 min. The joint angles are measured by optical encoders, and the load on the motors can be determined from the measured currents. LAURON IV is equipped with a pan-tilt head with a stereo camera system¹. Furthermore, a small (50 × 50 × 70 mm) laser line scanner² (scan angle: 240°, range: 400 cm; resolution: 0.36°, 1 mm) is positioned in the middle of the head for scanning the near environment (see

¹ Dragonfly, http://www.ptgrey.com/
² HOKUYO URG-X003, http://www.hokuyo-aut.jp/
Fig. 1. The six-legged walking robot LAURON IV (left) and its head sensor system (right).
figure 1 right). The main body carries a gyro-enhanced orientation sensor³ delivering information about the absolute spatial orientation of the robot. For determining the global position of LAURON IV, a GPS receiver⁴ is mounted on the robot. Moreover, the legs are equipped with foot force sensors which allow the measurement of all three axial force components, enabling the determination of the contact forces in both direction and magnitude.

B. Basic behavior control of LAURON IV

The control of LAURON IV [15], [16] is realized by a behavior-based architecture implemented with the modular controller software architecture MCA2⁵. Different levels of behavior are necessary in order to walk in unstructured environments. There are fast low-level behaviors, or reflexes, such as the elevator reflex for lifting a leg over an obstacle if a collision occurs, or pressing the foot on the ground while a leg supports the robot. Furthermore, there are behaviors for generating the general leg movement during the support and swing phases, behaviors for coordinating the legs, and behaviors for controlling the body posture. In the future, more sophisticated behaviors at a higher level of control may influence the walking strategy for climbing a rock, walking over a gap, or generating a plan for walking towards a given destination. Because the behavior control is directly affected by the local terrain conditions, it is necessary to acquire information by exploiting miscellaneous sensors.

C. Parameters of the gait to be controlled

For the adaptation of the locomotion of a walking robot to the terrain, several main control parameters can be adjusted (cf. [17]). These control parameters are not restricted to the particular walking robot LAURON IV; on the contrary, they are available in some form on most walking robots. One can distinguish the following three main control parameters: velocity, step height and main body height.

Velocity: The velocity of the robot depends on the stroke, the cycle time and the duty factor (i.e., the gait). The lower the chosen speed, the more time the robot control gains to examine the environment and to cope with disturbances. But considering the amount of time needed to reach the desired destination, the velocity should only be decreased if really necessary.

Step height: By increasing the step height, higher obstacles like stones can be surmounted without collisions. But the higher the legs are lifted, the more turbulent the walking becomes and the more unstable it becomes from a dynamic point of view. Moreover, increasing the step height increases the time needed for lifting and lowering the legs, and therefore the maximum possible stepping frequency is reduced.

Main body height: The static stability of the walking robot can be increased by lowering the main body. But if the main body height is chosen too low on irregular terrain, one runs the risk that the main body may collide with obstacles.

As indicated above, the parameters influence each other: if one parameter is altered (e.g., the step height is increased), this also affects other parameters (e.g., the cycle time). In general, considerable expert knowledge is required to find a suitable parameter set for a given terrain. Although the exact relation between these parameters and the general walking behavior of the robot is quite complex, a simple rule set for adapting the robot could read as follows: on severe or unknown terrain the robot should reduce its velocity and increase its step height.
Conversely, on flat and easy terrain the robot should increase its speed and decrease its step height, as in the sketch below.
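A crisp, minimal sketch of this rule set in Python; all bounds and the normalized roughness input are hypothetical choices for illustration, not LAURON IV parameters (the actual fuzzy rule set is described in [17]):

```python
def adapt_gait(roughness, v_max=0.3, step_min=0.04, step_max=0.12):
    """Crisp approximation of the rule set above: rough or unknown
    terrain -> lower velocity, higher step; flat terrain -> higher
    velocity, lower step. `roughness` is a normalized estimate in
    [0, 1]; all parameter bounds are hypothetical."""
    velocity = v_max * (1.0 - roughness)                         # slow down on rough ground
    step_height = step_min + (step_max - step_min) * roughness   # lift legs higher
    return velocity, step_height
```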
³ Microstrain 3DM-G, http://www.microstrain.com/
⁴ u-blox RCB-LJ, http://www.u-blox.com/
⁵ http://mca2.sourceforge.net/
Fuzzy reasoning offers a possibility to handle this kind of knowledge: the external model knowledge can be written down straightforwardly using fuzzy rules. The rules are easily extensible by simply adding new ones, and they are simple to understand and to adapt. Moreover, the fuzzy approach even permits conflicting rules. A more detailed description of the fuzzy rule set currently used on LAURON IV can be found in [17].

D. Vision

As anticipated in the Introduction, our aim was to relate human perceptual judgments such as coarse, fine or uniform terrain with statistical parameters describing textures. We decided to limit our analysis to first-order statistics, i.e., to moments extracted from the first-order histogram of the image: even though higher-order statistics contain more information on the relations between pixels, their computational costs increase significantly; indeed, much information can be obtained from combinations of first-order parameters extracted from the normalized image histogram $P(i) = N(i)/N$ (where $N$ is the total number of pixels of the image having $N_G$ grey levels, and $N(i)$ is the number of pixels having grey level $i < N_G$). To be rigorous, this passage implies the stationarity of the image, considered as a realization of a general random field "Image": this assumption is approximately verified if the area of the image over which the histogram and the parameters are computed is small and entirely covered by the texture under examination. In our model the window is a square with an edge 15 pixels long: this value depends on the image resolution, the camera focal length, and the desired performance. Under the assumption of stationarity, the normalized histogram becomes a probability density. A mean and a variance can be defined as $\mu = \sum_{i=0}^{N_G-1} i\,P(i)$ and $\sigma^2 = \sum_{i=0}^{N_G-1} (i-\mu)^2 P(i)$. We introduce three well-known descriptors of a first-order probability density: the coefficient of variation $cv = \sigma/\mu$, the energy $e = \sum_{i=0}^{N_G-1} P(i)^2$, and the kurtosis $k = \frac{1}{\sigma^4}\sum_{i=0}^{N_G-1} (i-\mu)^4 P(i)$. $cv$ is invariant with respect to linear changes in the grey scale, so it can be used as an index of global contrast; $e$ is a measure of the uniformity of the histogram: high energy values indicate smooth surfaces, while low energy values indicate spread histograms, hence "busier" images; $k$ is a measure of the relative importance of the peripheral areas of the histogram; since the proper definition of kurtosis is used, $k = 3$ indicates a Gaussian profile, a lower kurtosis indicates a more spread histogram, and a higher $k$ indicates a narrower histogram. Human experts were asked to analyze a collection of natural images acquired with the robot cameras, classifying terrains according to their perceptions: in particular, the perceptual classes⁶ identified as the most salient for robot locomotion were α = {Flat, Small stones, Larger stones}. Each input image was filtered so as to obtain the statistical feature vector f = [cv, e, k, α] for each image area. The statistical features are computed over a sliding window 15 pixels wide: the choice of the kernel size is a delicate aspect of texture analysis, because the window must be wide enough to preserve the texture structure; in other words, a sufficient number of tonal primitives must be present in the window for the measures to be meaningful (hypothesis of stationarity for textures).
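As an illustration, a minimal NumPy sketch of the computation of $cv$, $e$ and $k$ over one 15×15-pixel window; function and variable names are ours, not from the paper:

```python
import numpy as np

def first_order_features(window, n_gray=256):
    """First-order texture descriptors of a grayscale window.

    Builds the normalized histogram P(i) = N(i)/N of a 2-D uint8
    array and returns the coefficient of variation cv, the energy e
    and the kurtosis k as defined in the text."""
    hist = np.bincount(window.ravel(), minlength=n_gray).astype(float)
    p = hist / window.size                       # normalized histogram P(i)
    i = np.arange(n_gray)
    mu = np.sum(i * p)                           # mean
    var = np.sum((i - mu) ** 2 * p)              # variance
    cv = np.sqrt(var) / mu if mu > 0 else 0.0    # coefficient of variation
    e = np.sum(p ** 2)                           # energy (histogram uniformity)
    k = np.sum((i - mu) ** 4 * p) / var ** 2 if var > 0 else 0.0  # kurtosis (=3 for Gaussian)
    return cv, e, k
```

The same function can be evaluated at every pixel position (sliding window) or once per non-overlapping window, as in the tiling approximation described next.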
The model implements a faster, approximate computation of the parameters: the window tiles the image instead of sliding over it. This leads to artifacts and inaccuracies in the border reconstruction, but tests show that accuracy and reliability are maintained.

E. The new foot

An essential condition for the extraction of statistical parameters from the terrain under the foot is an increased contact area between the robot and the terrain. For this reason, a new foot was conceived, to be distally connected to the LAURON IV leg and intended to be used in conjunction with the existing foot force sensor, shown in figure 2-a with the current foot and used by the control system to detect contact with the terrain. The new foot presents a contact surface with the terrain of 50 cm², can host a 4 mm thick skin, and a spherical joint allows a certain degree of adaptation to the terrain, of ±33 degrees from the vertical. The design of the foot is shown in figure 2-c. The new foot was fabricated using 3D-printing rapid prototyping technology⁷ (see figure 2-d). The electronics used for the acquisition and optical coding of the signals from the force sensors of the skin is hosted inside the foot structure: the foot shown in figure 2-e has all its LEDs on; the fabricated foot, hosting the skin, the electronics, and the optical fibres carrying the tactile information, is shown in figure 2-b.

F. The skin

A distributed array of sensors under the foot of the robot is essential for the extraction of statistical parameters from the terrain using tactile input. Such a "tactile skin" should have some peculiar features in order to be usable for outdoor robotics, such as robustness, a suitable dynamic range and low cost: the sensors in the skin must tolerate large stresses

⁶ See [18] for a definition of perceptual class.
⁷ INVISION 3D Printer
Fig. 2. LAURON IV new foot: a) the three-axial load cell mounted in each leg; b) the new foot assembled on a leg; c) 3D CAD model of the foot; d) first feet produced by rapid prototyping technology; e) the foot with all the internal LEDs turned on.
without breaking, and maintain good mechanical performance over extended temperature ranges; moreover, the response of the skin should be analog in the range of expected tactile inputs, without early saturation. A low cost would allow the extensive use of the skin. In the present work, polymeric resistive sensors (QTC pills from Peratech⁸) were chosen as the sensitive modules of the skin: based on the quantum tunneling effect, their resistance decreases with pressure, following the curve shown in figure 3-a. The polarization circuit strongly influences the range of forces that can be detected: the electrical circuit of figure 3-c was selected, where R1 is the pull-up resistor (its value maximizes the linear response in the low pressure range) and R1-QTC is the force-dependent resistance across the sensor. The response that can be achieved with this solution is shown in figure 3-b: pressures below 0.7 N/mm² could not be detected, and values higher than 1.5 N/mm² would saturate the sensor. Figure 3-d represents the pressure on the sensors in the foot for decreasing values of the contact surface: the area of the foot is 50 cm², resulting from a trade-off between sensorized contact area and robot agility on the terrain. On flat terrain, the pressure on each sensor would be less than 0.1 N/mm², while on small and sharp stones, with smaller parts of the foot in contact with the terrain, it would exceed 2 N/mm². In order to increase the sensitivity of the sensors at lower pressures, a second pressure-dependent resistive path was introduced between the sensor and ground, namely R2-QTC in figure 3-c. This resulted in an analog response range of the sensors at significantly lower pressures.
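For illustration, the behaviour of the basic pull-up divider can be sketched as follows; the exponential resistance-pressure model and all component values are assumptions made for the sketch, not the measured QTC curve of figure 3-a, and the second path R2-QTC is not modeled:

```python
import numpy as np

def divider_output(pressure, vcc=5.0, r1=10e3, r0=1e6, p0=0.5):
    """Output voltage of a pull-up divider like the one of figure 3-c:
    Vout = Vcc * R_QTC / (R1 + R_QTC).

    R_QTC(p) = r0 * exp(-p / p0) is only an assumed stand-in for the
    measured QTC resistance-pressure curve; vcc, r1, r0 and p0 (in
    N/mm^2) are hypothetical values. Rising pressure lowers the sensor
    resistance and hence the output voltage."""
    r_qtc = r0 * np.exp(-np.asarray(pressure, dtype=float) / p0)
    return vcc * r_qtc / (r1 + r_qtc)
```

Choosing R1 shifts the steep (most sensitive) part of the divider curve along the pressure axis, which is the trade-off behind the detection range reported above.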
Fig. 3. Model of the QTC sensor, expected pressure range on the sensor, and design of the polarization network.
The skin consists of a layered structure, visible in figure 4. The flexible substrate was obtained from LF9150R Pyralux (DuPont, Wilmington, DE, USA) by a photolithography process; Pyralux consists of a 127 µm thick Kapton sheet with a 35 µm layer of copper (305 g/m²) on one side, which was patterned following a comb-shaped structure. Each skin module consists of nine active areas, each covered with a QTC sensor (see figure 4-a).

⁸ Peratech, http://www.peratech.com
Fig. 4. Skin fabrication: a) the 3×3 skin module; b) pouring of silicone; c) a completed 3×3 skin module; d) the skin, after polyurethane pouring, applied to the bottom side of a foot; e) plastic optical fibres converging into the aluminum socket.
Figure 4-b shows the pouring of the silicone as well as the variable resistive path R2-QTC, consisting of a conductive Kapton covering layer; figure 4-c shows a 3×3 skin module before the last phase, which consists in embedding the skin modules in a 4 mm thick cast of flexible polyurethane with a rough finish (Poly 74-45 from Polytek Development, PA, USA); the final result (6 modules, i.e. 54 individual taxels) is visible in figure 4-d. Voltage output signals are converted to intensity-coded optical signals by microprocessor-based electronics (not shown) embedded in the foot structure. The optical signals are conveyed by optical fibres and the tactile image is projected onto a CMOS digital camera (the fibre socket is shown in figure 4-e).

G. Preliminary static characterization

The preliminary static characterization of the skin was performed by measuring the intensity of the minimum detectable stimulus: normal stimuli were applied to the sensors through a six-component load cell (ATI NANO 17 F/T, Apex, NC, USA) mounted on an experimental test bench exploiting three micrometric translation stages with crossed roller bearings (M-105.10, PI, Karlsruhe, Germany), which allowed the positioning of the loading structure on the sensor array. Signals coming from the sensor array were acquired and stored digitally; the acquisition board was a National Instruments PC-6032E and the software interface was developed in LabVIEW 7.

H. Feature selection

Figure 5 contains two examples of tactile input from two different kinds of terrain; the images are numbered from top to bottom and from left to right. In both cases the top left image (image 1) is the input image; image 2 is a thresholded version of image 1. Image 3 shows the sensors that are activated, while image 4 shows the 3×3 modules having at least one sensor activated. The second row of images contains zoomed versions of the corresponding areas in image 1, as indicated by the white lines. By comparing the left and right images, the differences in the way the skin modules are activated in the two cases are clearly visible. With the aim of quantifying these differences, a set of 11 statistical features was extracted from the acquired tactile images over each activated 3×3 skin module: average µ, sum Σ, root mean square rms, variance σ², skewness s, kurtosis k, energy e, ratio µ/max. A sketch of this computation follows.
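A minimal per-module sketch (NumPy/SciPy); the exact normalization used in the paper for the energy is not specified and is assumed here:

```python
import numpy as np
from scipy import stats

def module_features(taxels):
    """Statistical features of one activated 3x3 skin module.

    `taxels` is a 3x3 array of tactile intensities. Returns the
    features listed in the text: average, sum, root mean square,
    variance, skewness, kurtosis, energy and the ratio mean/max."""
    x = np.asarray(taxels, dtype=float).ravel()
    mu, total = x.mean(), x.sum()
    p = x / total if total > 0 else x                # normalized intensities (assumption)
    return np.array([
        mu,                                          # average
        total,                                       # sum
        np.sqrt(np.mean(x ** 2)),                    # root mean square
        x.var(),                                     # variance
        stats.skew(x),                               # skewness
        stats.kurtosis(x, fisher=False),             # kurtosis (=3 for Gaussian)
        np.sum(p ** 2),                              # energy
        mu / x.max() if x.max() > 0 else 0.0,        # ratio mean/max
    ])
```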
Fig. 5. Tactile images and pre-processing corresponding to terrains consisting of large (left) and small (right) stones.
I. The classifier

For the classification of the computed feature vectors, Support Vector Machines [13], [14] are used. Support Vector Machines belong to the relatively new family of kernel methods, which combine the simplicity and computational efficiency of linear algorithms, such as the perceptron algorithm, with the flexibility of non-linear systems, such as neural networks, and the rigor of statistical approaches such as regularization methods in multivariate statistics. By reducing the learning step to a convex optimization problem, which can always be solved in polynomial time, the problem of local minima typical of neural networks, decision trees and other non-linear approaches is avoided. Therefore the training of Support Vector Machines is deterministic, and retraining is faster and easier. Moreover, due to their foundation in the principles of statistical learning theory, they are remarkably resistant to overfitting, especially in circumstances where other methods suffer from the curse of dimensionality.
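The paper does not report the SVM implementation, kernel or parameters; a minimal training sketch using scikit-learn, with synthetic stand-ins for the real feature data, might look as follows:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for the real data: 392 training and 287
# verification vectors (cf. section III-C), 8 features each, three
# terrain classes (1 = flat, 2 = small stones, 3 = bigger stones).
X_train = rng.normal(size=(392, 8))
y_train = rng.integers(1, 4, size=392)
X_test = rng.normal(size=(287, 8))

# Kernel and parameters are illustrative; the paper does not report them.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
predicted = clf.predict(X_test)
```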
III. PRELIMINARY EXPERIMENTAL VALIDATION

A. Vision-based classification

The validation of the vision-based terrain classification is being carried out; preliminary results, obtained from a collection of 50 natural-environment images acquired by the robot cameras from the robot's point of view, indicate a classification success rate of 70%. The images represent cluttered terrains, with and without stones, and shadowed areas. Three classification samples are presented in figure 6, where the last three columns are perceptual maps (i.e., binary images in which white areas indicate regions where the corresponding perceptual quantity is high) built from local statistical features and used by the classifier.
Fig. 6. Perceptual maps used for classifying the terrain. On the left, three original images are shown; the second, third and fourth columns contain perceptual maps built from the statistical content of each image and used by the classifier (white areas mean a high level of the corresponding class). In particular, the second and third columns contain indicators of texture busyness, while the last column codes for flat surfaces.
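Maps like those of figure 6 can be obtained by thresholding a per-window statistical feature; a minimal sketch under the tiling approximation described in section II-D (the threshold values and function names are illustrative, not from the paper):

```python
import numpy as np

def perceptual_map(image, feature_fn, threshold, win=15):
    """Binary perceptual map: tile the image with win x win windows,
    evaluate one statistical feature per tile (feature_fn, e.g. the
    energy from the earlier sketch) and mark tiles whose feature
    exceeds `threshold` as white."""
    h, w = image.shape[:2]
    rows, cols = h // win, w // win
    out = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            tile = image[r * win:(r + 1) * win, c * win:(c + 1) * win]
            out[r, c] = feature_fn(tile) > threshold
    return out
```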
B. Preliminary characterization of the skin

The results of the preliminary static characterization of one of the skin elements are reported in table I. The sensitivities of the other modules present comparable variance. The same test, performed on the finished skin (thus including the polyurethane protective layer), shows average values of the order of 1.8 N/mm², with non-negligible differences between the different 3×3 skin modules: this non-optimal result is a consequence of the uneven thickness of the polyurethane layer over the modules. This issue, as well as the excessive decrease in sensitivity, will be addressed in the next prototype.

C. Touch-based classification: on-going work

The validation of the touch-based classification is still in progress. Using a training data set containing 392 data vectors from three different classes (C1: flat, C2: small stones, C3: bigger stones) and a verification data set comprising 287 examples, the overall classification rate is 64.1%. The confusion matrix of these preliminary results is shown in table II.

TABLE I
PRESSURE THRESHOLDS (IN N/mm²) ON THE UNCOVERED 3×3 SKIN ELEMENT.

[N/mm²]   R1     R2    R3
C1        0.4    0.7   1.1
C2        0.18   0.4   0.7
C3        0.5    1.1   1.8

TABLE II
CONFUSION MATRIX FOR THE TOUCH-BASED CLASSIFICATION.

      C1     C2     C3
C1    0.47   0.07   0.03
C2    0.32   0.67   0.32
C3    0.21   0.26   0.65
IV. CONCLUSIONS AND FUTURE WORK

The conclusions of the presented work are manifold, and by no means definitive: they are indications of the ways to improve the performance of the several subsystems forming the hardware and software architecture for terrain evaluation of the robot. The observations can be summarized as follows:
Approach to terrain classification: The use of texture analysis to extract information from visual and tactile images is promising. Although the classification results are not yet fully satisfactory, they are still encouraging. By improving the individual components of the hardware and software architecture as proposed below, a reasonable enhancement of the overall classification results seems achievable.

Vision-based classification: The performance of the vision-based classification is still too low for robust behaviour, failing in particularly challenging situations with strong contrasts, for example due to shadows. The approach to improving performance is twofold: on one side, the set of features is being enlarged by adding other perceptive channels to the statistical one; on the other, an SVM-based classifier will be used for vision-based classification.

Foot design and tactile foot skin: The new tactile-sensing feet should have a significant degree of compliance with the terrain. Future work will consequently focus on a more flexible foot surface, assuring at the same time a uniform response capability of the QTC sensor array. In order to achieve these features, further efforts will be mainly devoted to:
• establishing a better silicone pouring process in order to obtain a uniform layer thickness and better adhesion and chemical stability;
• creating more reliable connections of the optical fibers to the LEDs;
• introducing a significant upgrade based on the use of multi-frequency light sources to transmit signals on a single fiber, in order to reduce the system complexity and space occupancy.

Tactile feature selection: The results of the tactile classification highlight that the selected set of features is not rich enough to enable a good decision; two necessary steps are a better normalization of the feature values and an improved pre-processing of the input tactile image. Moreover, an enlarged set including morphological features, taking into account the geometry of the distribution of activated sensors, would add important information.

REFERENCES

[1] G. Reina, L. Ojeda, A. Milella, and J. Borenstein, "Wheel slippage and sinkage detection for planetary rovers," IEEE/ASME Transactions on Mechatronics, vol. 11, no. 2, pp. 185–195, 2006.
[2] A. C. Larson, G. K. Demir, and R. M. Voyles, "Terrain classification using weakly-structured vehicle/terrain interaction," Autonomous Robots, vol. 19, pp. 41–52, July 2005.
[3] R. Manduchi, A. Castano, A. Talukder, and L. Matthies, "Obstacle detection and terrain classification for autonomous off-road navigation," Autonomous Robots, vol. 18, pp. 81–102, Jan. 2005.
[4] Y. Cheng, M. Maimone, and L. Matthies, "Visual odometry on the Mars Exploration Rovers," in Proceedings of the IEEE Conference on Systems, Man and Cybernetics, The Big Island, Hawaii, USA, October 2005.
[5] A. Howard and H. Seraji, "An intelligent terrain-based navigation system for planetary rovers," IEEE Robotics & Automation Magazine, vol. 8, pp. 9–17, Dec. 2001.
[6] K. Iagnemma, H. Shibly, and S. Dubowsky, "On-line terrain parameter estimation for planetary rovers," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '02), vol. 3, pp. 3142–3147, 11-15 May 2002.
[7] K. Iagnemma, S. Kang, H. Shibly, and S. Dubowsky, "Online terrain parameter estimation for wheeled mobile robots with application to planetary rovers," IEEE Transactions on Robotics, vol. 20, pp. 921–927, Oct. 2004.
[8] A. Howard and H. Seraji, "Vision-based terrain characterization and traversability assessment," Journal of Robotic Systems, vol. 18, no. 10, pp. 577–587, 2001.
[9] J. K. Hawkins, "Textural properties for pattern recognition," in Picture Processing and Psychopictorics (B. Lipkin and A. Rosenfeld, eds.), pp. 347–370, New York: Academic Press, 1970.
[10] C. J. van den Branden Lambrecht, Perceptual Models and Architectures for Video Coding Applications. PhD thesis, École Polytechnique Fédérale de Lausanne, 1996.
[11] R. M. Haralick, "Statistical and structural approaches to texture," Proceedings of the IEEE, vol. 67, May 1979.
[12] H. Seraji and A. Howard, "Behavior-based robot navigation on challenging terrain: A fuzzy logic approach," IEEE Transactions on Robotics and Automation, vol. 18, pp. 308–321, June 2002.
[13] B. E. Boser, I. Guyon, and V. Vapnik, "A training algorithm for optimal margin classifiers," in Proceedings of the Fifth Annual Workshop on Computational Learning Theory, pp. 144–152, 1992.
[14] V. N. Vapnik, Statistical Learning Theory. John Wiley and Sons, Inc., 1998.
[15] B. Gaßmann, K.-U. Scholl, and K. Berns, "Locomotion of LAURON III in rough terrain," in Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, vol. 2, (Como, Italy), pp. 959–964, 2001.
[16] B. Gaßmann, K.-U. Scholl, and K. Berns, "Behaviour control of LAURON III for walking in unstructured terrain," in Proceedings of the 4th International Conference on Climbing and Walking Robots, pp. 651–658, 2001.
[17] B. Gaßmann, T. Bär, J. Zöllner, and R. Dillmann, "Navigation of walking robots: Adaptation to the terrain," in Proceedings of the 9th International Conference on Climbing and Walking Robots, pp. 233–239, 2006.
[18] R. Arkin, Behavior-Based Robotics. The MIT Press, 1998.