Empirical Modeling and Comparison of Robotic Tasks

Jane Mulligan
Dept. of Computer Science, University of British Columbia, Vancouver, B.C. CANADA
mulligan@cs.ubc.ca

Abstract

How can we describe and measure real robotic systems? Implemented robotic systems for intelligent tasks are often extremely complicated, depending not only on `interesting' theoretical parameters and models, but on many assumptions and constants which may be set almost arbitrarily. In order to evaluate and compare systems which employ diverse components and methods, we are obliged to employ empirical, performance-based methods. We model task implementations as particular parameterizations of the task goals they represent. Through fractional factorial experiments and analysis of variance we establish the statistically significant parameters and parameter interactions for a pair of part-orienting tasks. Sensitivity analysis determines optimal parameter values and reveals relationships among parameters as they affect task performance.

1 Introduction

There are many ways to represent a robotic task. Probably the most common descriptions are the symbolic models of STRIPS-like planners, configuration space approaches, or methods based on the principles of control systems. However, there is as yet no accepted formal description of complicated intelligent tasks, particularly in light of the variety of actuators, sensors and computations which compose them. The study of physical systems typically cycles through three steps: hypothesizing a model; performing experiments to verify it; and forming an improved model based on these observations. Many roboticists analyze uncertainties and compare specific aspects of their systems to the work of others. We argue for a much broader and more systematic analysis of the properties and performance of these systems, as a first step toward developing more theoretical models for robotic problems. Viewing robotic systems as parameterizations of their task goals gives us a unifying representation common to such diverse fields as statistics,

control, and inverse theory, allowing the use of well established methods for analysis and comparison. As a step toward identifying such issues as how sensing, modeling, actuation, and computation are used and interchanged for particular tasks in different implementations, we present specific experimental methods for evaluating and comparing real task implementations. The three critical aspects of this method are: a proposed task goal parameterization which describes the implemented system; fractional factorial experiments in conjunction with analysis of variance; and sensitivity analysis, all based on a cost/performance metric related to the task goal.

The experiments presented to demonstrate this methodology are based on a part orienting task implemented in two very different ways. One system uses a `sensorless' model-based push-orienting method, the other uses a real-time stereo vision system to localize objects in the workspace for sensor-driven part orienting. The parameters used to describe the two systems are similar, though not identical sets. Through detailed experiments we establish the statistically significant parameters and parameter interactions with respect to a probability of success cost/performance metric for each system. Finally we apply sensitivity analysis to set optimal parameter values and explore the nature of interactions.

The premise of this investigation is that we have one task and two implementations representing different parameterizations of the same underlying physical `computation'. The similarities and differences observed should reveal fundamental aspects of the task.

2 Evaluation and Comparison

Our hypothesis is that different task implementations can be described as different parameterizations of the same problem. Analysis and comparison of these parameterizations tells us something fundamental about the underlying task. More specifically, for

any proposed task solution it is invaluable to have a clear analysis of system parameters and how they interact with each other and with task success. Just as in other physical sciences, we hope to begin the process of building theoretical models by collecting objective experimental data. In the absence of clear and detailed specifications for complex robotic tasks, identifying and verifying models for robotic tasks will inevitably be a cyclical and experimental process.

There are a number of techniques available in the experimental design literature which allow us to model and analyse the task requirements. We propose a method for task modeling based on 1) hypothesizing a task parameterization; 2) applying factorial experiments and analysis of variance to determine significant parameters with respect to a task goal cost/performance metric; and 3) sensitivity analysis to examine the behaviour of parameters and their interactions in more detail.

Ultimately the only measurement on which to evaluate or compare complicated, multifaceted systems is some form of goal related cost or performance metric. Many possible metrics can be used to count the cost of a task implementation. Any value from a simple success count, as we use below, to a precise accounting of money, time, floor-space and positional error, can be used to evaluate a system with respect to its goals.

Factorial experiments give us a starting point for determining which parameters, and their interactions, define the task subspace. Factorial experiments [2] are applied to systems where a number of parameters or factors may interact to affect system outcome. They are designed to give the researcher information on the effects and interactions of all factors simultaneously while limiting the number of experimental trials required. "Taguchi's method" is a version of such designs used to improve manufacturing processes [8]. Generally k factors are tested at each of l levels, yielding an l^k factorial experiment. The theory of experimental design also gives us analysis of variance techniques which allow us to determine whether effects observed as a result of setting parameters to various levels are statistically significant. Thus for a system which we hypothesize to have k possibly interacting parameters, we can run specifically designed experiments and use statistical tools to determine whether each factor or interaction has a significant effect on outcome. Our use of factorial experiments to devise and refine suitable models for robotic tasks has been described in [5] and [3]. Factor levels can be qualitative as well as quantitative. We will use 2 level factorial designs to allow us to analyse, at least at a coarse level, the significance of each factor and interaction with respect to the observed task performance. Gaining a sense of the importance of factor interactions is useful when tradeoffs occur in complex systems, as they are not always obvious without this sort of principled experiment.

Sensitivity analysis [1] determines how a model varies with its parameters. Analytical or empirical methods are used to track system outcomes as parameters are modified. Dense sampling of outcomes as they vary with one or more factors fleshes out the details of how our system behaves in regions of interest. It allows us to observe higher order behaviour and relationships, and thus hypothesize better analytical models for our systems.
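To make the screening machinery concrete, the sketch below (ours, not the paper's code) builds a two-level fractional factorial design from generator words and computes a per-factor F statistic from replicated responses, in the spirit of the ANOVA screening described above; the factor names mirror Table 1 but the response values are placeholders.

```python
import itertools
import numpy as np

def fractional_design(base_factors, generators):
    """Build a 2-level fractional factorial design in +1/-1 coding.

    base_factors: factors receiving a full 2^m design.
    generators:   dict mapping an added factor to the word (string of base
                  factors) whose product defines its column, e.g. {"D": "FCV"}.
    """
    runs = np.array(list(itertools.product([-1, 1], repeat=len(base_factors))))
    design = {f: runs[:, i] for i, f in enumerate(base_factors)}
    for new, word in generators.items():
        design[new] = np.prod([design[f] for f in word], axis=0)
    return design

def anova_two_level(design, y):
    """Per-factor F statistics for a replicated two-level design.

    y has shape (n_runs, n_reps); the error term is estimated from replicates.
    """
    n_runs, n_reps = y.shape
    N = n_runs * n_reps
    run_means = y.mean(axis=1)
    ss_error = ((y - run_means[:, None]) ** 2).sum()
    df_error = n_runs * (n_reps - 1)
    ms_error = ss_error / df_error
    table = {}
    for name, column in design.items():
        contrast = (column * run_means).sum() * n_reps  # signed sum over all obs
        ss = contrast ** 2 / N                          # 1 degree of freedom
        table[name] = ss / ms_error                     # F statistic
    return table

# Hypothetical usage: a 2^(4-1) design on four push-planner-style factors,
# with made-up success counts (e.g. successes out of 120 trials) per run.
design = fractional_design("FCV", {"D": "FCV"})
rng = np.random.default_rng(0)
y = rng.integers(40, 110, size=(8, 3)).astype(float)   # placeholder responses
for factor, F in sorted(anova_two_level(design, y).items(), key=lambda kv: -kv[1]):
    print(f"{factor}: F = {F:.2f}")   # compare against e.g. F(0.1; 1, df_error)
```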
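Sensitivity analysis can likewise be pictured as densely sampling the cost/performance metric over one or two parameters and inspecting the resulting response surface; the simulator below is a made-up stand-in for whatever system is under study.

```python
import numpy as np

def response_surface(run_trials, d_values, l_values, trials=120):
    """Sample a success-count surface over two parameters (e.g. D and L).

    run_trials(d, l, trials) is assumed to return the number of successful
    orientations out of `trials`; it stands in for the real system/simulator.
    """
    surface = np.zeros((len(d_values), len(l_values)))
    for i, d in enumerate(d_values):
        for j, l in enumerate(l_values):
            surface[i, j] = run_trials(d, l, trials)
    return surface

def fake_simulator(d, l, trials):
    """Hypothetical stand-in for the push-planner simulator."""
    rng = np.random.default_rng(d * 100 + l)
    p = 0.5 + 0.3 * np.sin(d) * (l / 34.0)      # made-up response shape
    return rng.binomial(trials, min(max(p, 0.0), 1.0))

surf = response_surface(fake_simulator, range(25, 35), range(25, 35))
best = np.unravel_index(surf.argmax(), surf.shape)
print("best (D, L) indices:", best, "successes:", surf[best])
```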

3 Part Orienting Methods

To demonstrate our ideas about task parameterization and experimental analysis we have chosen the 2D part orienting task. Although it is generally quite simple (resolving only one degree of freedom of the part), it illustrates the variety in task performance that can be observed using empirical techniques. The sensorless push-orienting and vision-based part orienting systems implemented represent extremes in terms of the information used for the task, and therefore should provide insight on the underlying task goal.

3.1 Push Orienting

Peshkin [7] describes a planning method for generating sequences of push-alignments to transform a part from an unknown initial orientation to a desired final orientation. The planner is based on a construct called the configuration map, a matrix which groups discretized initial orientations according to the final orientations which result after a push at a particular fence angle. For some discretization of the full range of fence angles, these maps are computed. Planning is achieved by a search of the tree of all push sequences with pruning. We have augmented this early push planner slightly by allowing transitions to more than one outcome, when model uncertainty makes the exact transition point from clockwise to counterclockwise rotation uncertain. A robot arm implementation of the push orienter was built, but in order to test part variation in a controlled way, we also built a simulator based on the real system's performance. The data used below are derived primarily from the simulator.
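A minimal sketch of this style of planner, under our own simplifying assumptions (discretized orientations, one precomputed configuration map per fence angle, sets of outcomes to represent uncertain transitions), might look as follows; it is not the paper's implementation.

```python
from collections import deque

def plan_pushes(config_maps, all_orientations, max_depth=8):
    """Breadth-first search for a push sequence that funnels every initial
    orientation into a single final orientation.

    config_maps: {fence_angle: {orientation: set(resulting orientations)}},
                 a discretized configuration map; multiple outcomes per entry
                 model uncertain clockwise/counterclockwise transitions.
    Returns a list of fence angles, or None if no plan is found.
    """
    start = frozenset(all_orientations)
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        possible, plan = queue.popleft()
        if len(possible) == 1:          # orientation uniquely determined
            return plan
        if len(plan) >= max_depth:
            continue
        for angle, cmap in config_maps.items():
            after = frozenset().union(*(cmap[o] for o in possible))
            if after not in visited:    # prune repeated sets of orientations
                visited.add(after)
                queue.append((after, plan + [angle]))
    return None

# Toy example with 4 discretized orientations and 2 fence angles.
maps = {
    30: {0: {1}, 1: {1}, 2: {1}, 3: {3}},
    60: {0: {0}, 1: {0}, 2: {2}, 3: {0}},
}
print(plan_pushes(maps, [0, 1, 2, 3]))   # e.g. [30, 60]
```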

3.2 Vision-Based Orienting

The visual part orienting system uses a stereo pair of cameras, calibrated to a robot arm's workspace to

localize the parts to be positioned. A high speed pipeline processor computes the zero crossings of ∇²G for regions of both images. Calibration allows selection of subwindows and a fixed range of disparities based on the robot's workspace. Calibration also allows rectification of the right image into the left according to the epipolar transformation using an image warping device, for use in the correlation match algorithm. The computed disparities are passed to a transputer network where edge pixels are grouped and fitted to a model. Simple arm sequences parameterized by the computed part location are executed to pick and place the part in a desired goal position [4].
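To illustrate the two vision stages named above (∇²G zero crossings and winner-take-all correlation matching over a fixed disparity range), here is a small self-contained sketch; the σ, window size, disparity range and synthetic images are placeholders rather than the system's actual settings.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def log_zero_crossings(image, sigma=2.0):
    """Mark zero crossings of the Laplacian-of-Gaussian (LoG) response."""
    response = gaussian_laplace(image.astype(float), sigma)
    edges = np.zeros(image.shape, dtype=bool)
    # a pixel is an edge if the LoG response changes sign with a neighbour
    edges[:, :-1] |= (response[:, :-1] * response[:, 1:]) < 0
    edges[:-1, :] |= (response[:-1, :] * response[1:, :]) < 0
    return edges

def correlation_disparities(left, right, max_disp=20, win=7):
    """Winner-take-all disparity per pixel using a windowed SAD cost.

    Assumes the right image has already been rectified so that epipolar
    lines are horizontal (the role of the image-warping device above).
    """
    best_cost = np.full(left.shape, np.inf)
    disparity = np.zeros(left.shape, dtype=int)
    for d in range(max_disp + 1):
        shifted = np.roll(right.astype(float), d, axis=1)   # wraps at border
        cost = uniform_filter(np.abs(left.astype(float) - shifted), size=win)
        better = cost < best_cost
        disparity[better] = d
        best_cost[better] = cost[better]
    return disparity

# Synthetic usage: a random texture and a copy shifted by 7 pixels.
rng = np.random.default_rng(0)
left = rng.random((120, 160))
right = np.roll(left, -7, axis=1)
print(log_zero_crossings(left).sum(),
      int(np.median(correlation_disparities(left, right))))   # median ~ 7
```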

4 System Task Parameterizations

Anyone who builds systems knows that they contain many parameters and thresholds which must be chosen by the designer: sometimes based on theory, sometimes via intuition and limited testing. All of these design choices affect the performance of the system. Below we enumerate parameterizations of the part orienting task represented by the described push planner and vision based systems.

4.1 Push Orienting

Push Orienting Parameters

Label   Parameter
F       friction accuracy
C       C of M accuracy
V       part model vertex accuracy
D       discretization for fence angles
L       limit range of fence angles
S       fence steepness (qualitative)
P       limits on push distance
N       minimize plan length
A       friction uncertainty compensation
B       C of M uncertainty compensation
K       vertex uncertainty compensation

Table 1: Parameters for the push planner.

Eleven parameters were selected as representative for the push planner. These are described in brief in Table 1. The parameters F, C and V indicate the accuracy of the measured values of the friction coefficient, centre of mass and vertices respectively. D is the discretization factor for the fence angles and hence the branching factor for the tree search performed by the push planner, i.e. it determines how many fence angles are considered. Fence angles near 0 or π are essentially end-on to the part and are too steep to allow viable pushes; we therefore set a limit L on the set of fence angles and search only from 0 + L to π − L. Our experience with executing push plans has shown that steeper fence angles are less reliable. As a result we have added a constraint S to the push planner which prefers plans with shallower fence angles. Peshkin [6] describes methods for bounding the required push distance to align a part with a fence. We have used his formulation to compute a bound on maximum push distance for each push in a plan. Plans with the shortest total push distance or shortest maximum push are preferred (P). Each additional push is expensive because it requires a motion sequence; we have therefore added a constraint N which prefers plans with fewer pushes. Finally, factors A, B, and K indicate whether an uncertainty compensation method, designed to avoid high uncertainty transitions, is turned on for friction, centre of mass and vertex uncertainty respectively.
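One way to read the preferences S, P and N is as a scoring function over candidate plans; the weights and helper functions below are purely illustrative assumptions and are not the planner's actual preference mechanism.

```python
import math

def plan_score(plan, push_distance, steepness,
               w_steep=1.0, w_dist=0.1, w_len=5.0):
    """Lower is better. `plan` is a list of fence angles; push_distance(angle)
    and steepness(angle) are assumed helpers bounding each push."""
    return (w_steep * max(steepness(a) for a in plan)      # prefer shallow pushes (S)
            + w_dist * sum(push_distance(a) for a in plan)  # prefer short pushes (P)
            + w_len * len(plan))                             # prefer short plans (N)

# Illustrative helpers: angles near 0 or pi count as steep/end-on.
steepness = lambda a: abs(math.cos(a))
push_distance = lambda a: 1.0 / max(math.sin(a), 1e-3)
candidates = [[1.0, 2.0], [0.4, 1.2, 2.6]]
print(min(candidates, key=lambda p: plan_score(p, push_distance, steepness)))
```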

4.2 Vision-based Orienting

Visual Orienting Parameters

Label   Parameter
M       camera position/orientation
D       {d1, ..., dm} range of disparities
S       σ for the ∇²G edge detector
C       calibration accuracy
A       warp accuracy
L       lighting
Z       delete points out of workspace
P, Q    model discretization factors

Table 2: Parameters for the Visual Orienter.

Nine parameters were selected as representative for the vision orienting system. These are described in brief in Table 2. Two camera positions M were tested under the assumption that cameras placed nearer the work surface would localize parts better. D is the number of disparities tested in the winner-take-all phase of the correlation stereo algorithm. S is the level of σ for ∇²G. Our calibration system typically acquires 100 to 180 image-world point pairs to compute camera parameters. To test whether this level of calibration accuracy C is required, we subsampled the calibration data to approximately 1/4 of the original set of points, and computed a less accurate camera model. For the piecewise epipolar warp we have allowed up to 12 subregions to be approximated separately. To test the need for this level of approximation accuracy A, we limited the number of regions to 2 for the low level of A.
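The calibration-accuracy experiment can be pictured as fitting a camera projection matrix to image-world point pairs and then refitting on roughly a quarter of them; the generic DLT fit below is a sketch under that reading, not the calibration code used in the system, and the data are synthetic.

```python
import numpy as np

def fit_projection_matrix(world, image):
    """Direct Linear Transform: fit a 3x4 matrix P so that image ~ P [world; 1]
    in homogeneous coordinates, from N >= 6 point pairs."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return Vt[-1].reshape(3, 4)          # null vector, reshaped row-major

def reprojection_error(P, world, image):
    Xh = np.hstack([world, np.ones((len(world), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.sqrt(((proj - image) ** 2).sum(axis=1)).mean()

# Hypothetical data: ~150 point pairs, as in the calibration described above.
rng = np.random.default_rng(1)
world = rng.uniform(-1, 1, size=(150, 3))
P_true = np.array([[800., 0., 320., 10.],
                   [0., 800., 240., 5.],
                   [0., 0., 1., 4.]])
image = (P_true @ np.hstack([world, np.ones((150, 1))]).T).T
image = image[:, :2] / image[:, 2:3] + rng.normal(scale=0.5, size=(150, 2))

P_full = fit_projection_matrix(world, image)
quarter = rng.choice(150, size=150 // 4, replace=False)   # ~1/4 of the points
P_sub = fit_projection_matrix(world[quarter], image[quarter])
print("full-set error   :", reprojection_error(P_full, world, image))
print("quarter-set error:", reprojection_error(P_sub, world, image))
```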


Figure 1: On the left is an image of the Triangular part, on the right its model.

Figure 2: On the left is an image of the L-shaped part, on the right its model.

Figure 3: On the left is an image of the G-shaped part, on the right its 20 vertex model.

We originally set up special lighting for our vision system. To determine how crucial this lighting L was, we tested the system with it and with normal room lighting. One way of eliminating bad depth points is to threshold based on the known Z location of the work surface. We tested this constraint by setting the threshold very close to the work surface and relatively far from it (20cm).

The model parameters denoted P and Q are different for different parts, since the localization methods are different. For the triangle and "L", P is the discretization of the ρ dimension for the local Hough transform and Q is the size of the orientation template around the current edge pixel. For the "G", P is the discretization of 2π used to histogram the orientation of depth points with respect to the centroid. Q is the window or number of bins in this histogram over which a minimum number of hits determines the angle of the "G"'s gap.
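As a concrete reading of P and Q for the "G", the sketch below histograms depth-point orientations about the centroid into P bins over 2π and slides a Q-bin window to find the gap; the bin counts, threshold and point set are illustrative assumptions, not the system's values.

```python
import numpy as np

def find_gap_angle(points, P=60, Q=5, min_hits=1):
    """Locate the 'gap' of a G-like part from 2-D depth points.

    Orientations of points about the centroid are histogrammed into P bins
    over 2*pi; the Q-bin window with fewest hits gives the gap direction.
    All settings here are illustrative.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    angles = np.arctan2(pts[:, 1] - centroid[1],
                        pts[:, 0] - centroid[0]) % (2 * np.pi)
    hist, _ = np.histogram(angles, bins=P, range=(0.0, 2 * np.pi))
    # circular sliding window of Q bins
    window_sums = np.array([np.take(hist, range(i, i + Q), mode="wrap").sum()
                            for i in range(P)])
    i = window_sums.argmin()
    if window_sums[i] > min_hits * Q:
        return None                        # no convincing gap found
    return (i + Q / 2.0) * 2 * np.pi / P   # centre angle of the emptiest window

# Toy example: points on a ring with a missing arc around 1.0 radian.
theta = np.linspace(0, 2 * np.pi, 300)
theta = theta[(theta < 0.7) | (theta > 1.3)]
ring = np.c_[np.cos(theta), np.sin(theta)]
print(find_gap_angle(ring))   # roughly 1.0
```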
4.3 Test Parts

The experiments described in this paper were performed on the parts pictured in Figures 1, 2 and 3. In terms of the push orienting system in particular, these parts span a range of complexity. The triangle (T) has three stable sides. The "L" has two short sides which tend not to act as stable resting positions, and the "G" has been modeled by a polygon with 20 vertices rather than a true curve.

5 Experimental Results

We have set up a 1/8 and a 1/4 fractional two-level factorial experiment for the push planner and vision system respectively, based on the parameterizations described above. As important as which parameters prove significant are those which have no significant effect on outcome, because these can be dismissed from consideration, until another experiment indicates otherwise.

Overall probability of success results are summarized in Table 3. Reported push-planner probabilities are for conditions of measurement error in F, C and V, to reflect real-world parts.

System                            Triangle   "L"    "G"
Push     Best Initial (w/ err)    0.93       0.86   0.39
         Average                  0.48       0.44   0.24
         Optimized (1% err)       1.0        0.97   0.45
Vision   Best Initial             0.18       0.50   0.72
         Average                  0.08       0.28   0.19
         Optimized                0.34       0.65   0.83

Table 3: Probability of Success

5.1 Push Planner Observations

Table 4 summarizes the parameters which proved significant for the three parts under the push planner after our trials and ANOVA analysis. Factors are listed from most significant (according to the ANOVA F-test) to least. Factors and interactions for which the F value was less than F = 2.71 for α = 0.1 were pooled and are not listed. For full details of our experiments and analyses see [3].

It is interesting to note that although the system is the same, its performance with the various parts differed significantly. Even though the triangle and "L" are similar in shape, their success rates and significant factors differ significantly. Evidently system performance for the push planner is significantly affected by the nature of the part being manipulated.

For the triangle and "L" the factor L (limiting steep fence angles) and the interaction DL (where D is the discretization factor for the range of fence angles) were significant; for the "L" and "G", D was significant as well. For the triangle and "L" we needed to explore this tradeoff, and for completeness we also plotted the DL relationship for the "G". The response surfaces for the three parts as D and L are varied are pictured in Figure 4. Here the similarity of the "L" and triangle is reflected in the similar periodic variation in success rate as the resolution of fence angle discretization increases. For the "G" the DL surface does not reveal a coherent pattern (predicted by the fact that the interaction was not significant for "G"), and a more expanded plot did nothing to improve our insight.

Finally, based on the preferred significant factor settings, we ran a set of trials under the optimized system to observe the performance. The success rate with respect to noise level for each part is plotted in Figure 5.

Push Orienter -- Significant Factors (in order of significance)

Triangle:
  L  - Angle Limits
  SK - Steepness/Vert Comp
  DL - Discretization/Angle Lim
  LS - Angle/Steepness
  C  - Centre of Mass
  LP - Angle/Distance
  S  - Steepness Limits

"L":
  L  - Angle Limits
  LS - Angle/Steepness
  SK - Steepness/Vert Comp
  D  - Discretization
  SP - Steepness/Distance
  LP - Angle/Distance
  DL - Discretization/Angle
  C  - Centre of Mass
  N  - Plan Length

"G":
  V  - Vertex Position
  S  - Steepness Limits
  VS - Vertex Pos/Steepness
  CV - Centre of Mass/Vertex Pos
  C  - Centre of Mass
  LS - Angle/Steepness
  SK - Steepness/Vert Comp
  D  - Discretization
  F  - Friction
  FP - Friction/Distance
  DK - Discretization/Vert Comp
  FV - Friction/Vertex Position
  NK - Plan Length/Vert Comp
  VL - Vertex Position/Angle

Table 4: Push planner parameters in order of significance for the 3 parts. The varying importance of the SK interaction is highlighted.

Figure 4: Plot of orienting success versus fence angle discretization and angle limits for triangle, "L" and "G" parts with push planner.

[Figure 5 panels, one per part: success versus standard deviation of features as a proportion of part radius, with curves for All Perturbations, Centre of Mass, Friction, and Vertices.]

Figure 5: Plots of probability of successfully orienting each part using the push planner, wrt a normally distributed error in model features with σ a fraction of the max part dimension.

5.2 Vision-based Orienter Observations

Table 5 lists the significant (in the ANOVA sense) factors for the vision-based part orienter. Factors and interactions for which the F value was less than F = 2.79 for α = 0.1 were pooled and are not listed. Again we see a variety in performance for the system which suggests that the part on which the computation is being performed has significant impact on success.

Our experience with varying the camera position (M high) was quite poor. The viewpoints we chose, although closer to the work surface, were more oblique and caused the parts to obscure themselves. In this case our analysis has pointed out that a camera position that we expected to improve depth measurements actually performed much worse, evidently because our systems were not tuned to it.

The triangle localization using vision performed rather poorly, with a success rate of about 15-18%. This may have been because of the low contrast between the grey triangle and the blue work surface.

Vision Orienter -- Significant Factors (in order of significance)

Triangle:
  ML - Camera pos/Lighting
  P  - Part model disc
  L  - Lighting
  CA - Calibration/warp accuracy

"L":
  S  - Sigma
  M  - Camera pos
  P  - Part model disc
  SL - Sigma/lighting
  MQ - Camera pos/window
  MP - Camera pos/model disc
  PQ - Model disc/window
  LQ - Lighting/window
  MS - Camera pos/sigma
  D  - Disparity range
  LP - Lighting/model disc

"G":
  M  - Camera pos
  PQ - Model disc/window
  P  - Part model disc
  MP - Camera pos/model disc
  Q  - Window
  MQ - Camera pos/window
  ML - Camera pos/Lighting
  L  - Lighting
  S  - Sigma

Table 5: Visual orienter parameters in order of significance for the 3 parts.

Although S (σ for ∇²G) at the high level produced better results for the "L", our high-speed system didn't easily allow for larger kernels, so we did not explore its sensitivity. We did however look at the effect of varying the number of disparity values (D) searched in the matching phase of the vision system. Figure 6 illustrates our search over D ∈ (15, 35) for the best number of disparities. The best observed number of successes out of 120 trials was 66 for 33 disparities. It is unclear why there were such erratic swings in the number of successes as the number of disparities increased, unless more disparities caused greater fragmentation of 3-D line segments.

Figure 6: Plot of success for "L" part versus number of disparities for correlation stereo.

For the vision system only P, one of the discretization factors for the part model parameters, proves significant for all three parts. For the "L" and "G" the interaction of the model discretization parameters PQ was also significant, and for the "G" the factor Q was significant in its own right. To explore the part model discretization issue further, we computed response surfaces for varying P and Q, which are illustrated in Figure 7. It is not strictly fair to compare the response of the system under variation in the part model for the triangle and "L" vs the "G", since the models are different. The idea that the vision based system is highly sensitive to the part model confirms an important insight however, particularly because in each case there is an oscillating interaction between the two discretization parameters. If you don't look at the interaction of model parameters you can pick settings which perform badly together.

6 Analysis of Results

Overall our observations of the push and vision based part orienting systems revealed that the task is much more dependent on the particular part being manipulated than we would have previously expected. Success is not just affected by tuning particular part model features, but is sensitive to parameters throughout the system, such as limits on fence angle steepness S or lighting L, that we would have assumed independent of the part to be localized. We can't just drop in a new part model and expect a part orienter to perform well.

We see from Table 3 that the vision system performs poorly for the triangle and "L" but reasonably well for the "G", while the converse is true for the push planner. Overall the vision system had much higher error rates than the push orienter. The push orienter requires at least 3 push motions per part oriented, however, and thus will always be more time consuming than the vision system. The push planner has perfect performance at orienting all three parts under noise-free conditions; however, even small errors or variations in the centre of mass or vertices quickly degrade success rates. As parts become more complex, this brittleness with respect to world variance from the predefined model becomes more pronounced. The vision system has much higher error rates, but is never tested with perfect data. At each step (calibration, epipolar approximation, edge detection, grouping, line fitting) more noise is introduced. Nevertheless the vision system appears to be able to produce comparable performance to the push planner for the "G". If the error rates were similar the vision system would be preferable; however, only for the "G" part have we demonstrated that the vision system can approach the average error of the push orienter.



Figure 7: Plotting success rates for ranges of the P and Q part model parameters. In all 3 cases we can observe a pronounced cyclical trend, even though the part model differed for the triangle and "L" vs the "G".

The most outstanding observation for the two part orienting systems was the surprising variety of optimal values occurring for different parts. For the three parts studied, each of the push orienting and vision based orienting systems demonstrated different behaviours and different optimal parameters. Clearly each of the part orienting systems' interactions with the parts is different, reflecting different aspects of the task. This is true even for a manipulation operation as basic as orienting a part. Although, for example, the triangle and the "L" would appear very similar under both the pushing and vision models used, analysis demonstrates that the performance of each system for each part and the parameters which exhibit significance are quite different.

The most significant lesson learned from this analysis is that extensive analysis of actual robotic implementations is a necessary step to understanding such systems. This type of analysis reveals aspects of the implementation which cannot be predicted a priori. In our visual orienter we expected the near-work-surface camera position to perform better, and in fact it performed far worse. For the push-orienter we hoped our uncertainty compensation methods would improve performance, but they had no significant effect. Objective analysis allows us to focus on aspects of the system which are truly significant, and reveals underlying relationships among parameters. Our goal is to build better analytical models for intelligent systems, in particular systems which integrate physical modalities such as sensing and action as well as reasoning. Experimenting with and building models for whole integrated systems, rather than studying their parts piecemeal, will allow us to improve our understanding of such complicated systems.

References

[1] William J. Karnavas, Paul J. Sanchez, and A. Terry Bahill. Sensitivity analyses of continuous and discrete systems in the time and frequency domains. IEEE Trans. on Sys., Man and Cyb., 23(2):488-501, 1993.

[2] Oscar Kempthorne. The Design and Analysis of Experiments. John Wiley and Sons, New York, N.Y., 1952.

[3] Jane Mulligan. Empirical Evaluation of Information for Robotic Manipulation Tasks. Computer Science, University of British Columbia, Vancouver, B.C., August 1996.

[4] Jane Mulligan. Fast calibrated stereo vision for manipulation. Journal of Real-time Imaging, October 1997.

[5] Jane Mulligan and Alan K. Mackworth. Experimental task analysis. In Proceedings ICRA'97, volume 4, pages 3348-3353, Albuquerque, N.M., 1997.

[6] Michael A. Peshkin and Arthur C. Sanderson. The motion of a pushed, sliding workpiece. IEEE J. of Rob. and Auto., 4(6):569-598, 1988.

[7] Michael A. Peshkin and Arthur C. Sanderson. Planning robotic manipulation strategies for workpieces that slide. IEEE J. of Rob. and Auto., 4(5):524-531, 1988.

[8] Ranjit K. Roy. A Primer on the Taguchi Method. Van Nostrand Reinhold, New York, NY, 1990.
