A Visualization Framework for Real Time Decision Making in a Multi-Input Multi-Output System

Pradeepkumar Ashok and Delbert Tesar

Abstract—Human beings have the capacity to make quick and accurate decisions when multiple objectives are involved, provided they have access to all the relevant information. Accurate visual measures/decision surfaces (maps) are critical to the effectiveness of this process. This paper introduces a methodology that allows one to create a visual decision making interface for any multi-input multi-output (MIMO) system. Here, the MIMO system is thought of in the broadest sense to include battlefield operations, complex system design, and human support systems (rehabilitation). Our methodology starts with a Bayesian causal network approach to modeling the MIMO system. Various decision making scenarios in a typical MIMO system are presented. This is then followed by a description of the framework that allows for the presentation of the relevant scenario dependent data to the human decision maker (HDM). This presentation is in the form of 3-D surface plots called decision surfaces. Additional decision making tools (norms) are then presented. These norms allow single-value numbers to be presented along with the decision surfaces to better aid the HDM. We then present some applications of the framework to representative MIMO systems. The methodology adapts easily as systems grow and also when two or more systems are combined to form a larger system.

Index Terms—3-D visualization, decision making, decision scenarios, decision surface norms, decision surfaces, large multi-input multi-output (MIMO) systems, performance maps.

I. INTRODUCTION

The operation of a multi-input multi-output (MIMO) system requires multiple performance parameters to be monitored in real time. The system performance parameters are often highly coupled among themselves and with the control parameters. Managing such a system requires human intervention. Presenting relevant data to a human decision maker (HDM) in a user-friendly and understandable manner is critical for fast and effective decision making. The literature presents numerous techniques for presenting data to the user. What is lacking is a methodology that shows one how to create a visual decision making interface for a MIMO system starting with the modeling stage. In this paper, we present a framework that allows one to take any MIMO system, obtain the foundation maps/surfaces, and create a comprehensive decision making interface.

Manuscript received October 11, 2007; revised December 15, 2007. This work was supported in part by the U.S. Office of Naval Research under Grant N00014-06-1-0213. The authors are with the Robotics Research Group, The University of Texas at Austin, Austin, TX 78758 USA (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/JSYST.2008.916060

II. LITERATURE REVIEW

What is a good visual representation? How does one choose the visual representation that best aids in faster and more accurate decisions? Lohse et al. [1] classify visual representations (from a set of 60) into 11 categories: structure diagrams, cartograms, maps, graphic tables, process diagrams, icons, time charts, network charts, pictures, tables, and graphs. The classification was based on evaluation by 16 individuals. The 16 evaluators also rated each visual representation on a 1–10 scale with regard to the following ten characteristics: spatial/nonspatial, temporal/nontemporal, hard/easy to understand, concrete/abstract, continuous/discrete, attractive/unattractive, parts/whole, nonnumeric/numeric, static/dynamic, and a lot/a little. These evaluations cannot be considered definitive by any means, but this is the best literature available with regard to the classification of visual representations, and we use the data from this paper to support our contention that 3-D decision surfaces (classified as graphs in [1]) are best suited for visual representation of system data for interactive decision making. Rendering data in 3-D allows many parameters to be displayed simultaneously. The axes for the 3-D plots are the ones that are of most importance with regard to the current decision making scenario. Additional attributes can also be displayed on a color scale. Hanne and Trinkaus [2] create a novel multi-criteria decision support system that is visual and can be used by non-experts. They suggest a radar/spider chart visualization technique consisting of lines radiating from the center (the lines correspond to multiple criteria and are termed criteria lines) and lines connecting values chosen on each criteria line. They state that a decision making support system should transform complex situations into time-animated and spatial presentations. Dillon et al. [3] also aim to create a visual delivery system that is able to present information clearly so as to enhance the decision making process. They suggest a visual display system that would have a situation display at the very top of the screen, allow real-time interaction, allow access to details, and clearly present the uncertainties associated with various decisions. They also suggest a course of action analyzer which allows them to choose from different options. They define metrics to quantify the effectiveness of a visual delivery system, some of them being the coolness factor, speed (time to take decisions), accuracy (quality of the decision), and cost (number of people needed to make the decision, etc.). Andrienko and Andrienko [4] present an interactive visual tool to support spatial multi-criteria decision making. The tool they provide allows the user to change the criteria and the relative importance of each criterion interactively.


They use a visualization technique called "Utility sign" to visually depict the relative merit of each option. They use a visualization method called parallel coordinates [5] to simultaneously consider more than two attributes.

Decision making often reduces to an optimization problem. Different methods to visualize the optimization problem in 3-D have been suggested. Winer and Bloebaum [6], [7] present one method to visualize n-dimensional optimization problems in two and three dimensions. They call their methodology graph morphing. For 3-D representation, they first choose the two most important parameters to be the x- and y-axes. The objective functions and the constraints are then plotted for specific values of the other parameters (excluding the x and y parameters). The user has control over these other parameters and changes them using a slider. They show such visualizations to be useful for setting up and analyzing optimization problems. Simionescu and Beale [8] present a method to visualize multivariable functions (the objective function of an optimization problem) by performing global maximizations/minimizations of the function with respect to all the parameters excluding the ones on the x- and y-axes. The x- and y-axis parameters are scanned from their minimum value to their maximum and the function is optimized for each (x, y) coordinate value. Messac and Chen [9] provide a methodology for visualizing an optimization process in real time using physical programming. They categorize visualization into artifact-based visualization (ABV) methods and non-artifact-based visualization (non-ABV) methods. In ABV methods the visualization is usually imposed on the physical object (such as stresses shown on a gear tooth). Non-ABV does not use a physical object. The authors further classify the non-ABV methods into those that are based on a topographical view of the design space and those that are not. The authors, however, regard the topographical method as not desirable due to the large number of analysis runs that are needed before optimization begins. They instead prefer the non-topographical approach, which includes 2-D plots (with and without scaling) and parallel coordinates. Additionally, they also introduce a new visualization approach based on physical programming. This graphical approach is good for conveying the status of an optimization process. Eddy and Lewis [10] argue that most visualization techniques are limited to a few 2-D/3-D plots. They introduce a technique called cloud visualization for visualization and continuous monitoring of the multi-objective optimization problem. Both the design space and the performance space are displayed simultaneously as 3-D plots. Where more than three dimensions are involved, first a 3-D plot is displayed for the HDM to make a choice (on the plot). Once the HDM chooses a point in this 3-D plot, those variables (in the 3-D plot) are fixed and a plot is displayed with the remaining dimensions. Kanukolanu et al. [11] present a methodology to create a visualization interface to aid in decision making when coupled subsystems under uncertainty are involved. Agrawal et al. [12] present a methodology to visualize the Pareto frontier for an n-dimensional performance space. The hyperspace diagonal counting method described in their paper allows for lossless visualization.


Fig. 1. Bayesian causal network of a system.

III. RESEARCH OBJECTIVES

The literature shows numerous ways to display information in 3-D format to the HDM. There is, however, a lack of a step-by-step procedure for creating a visual decision making system for the real-time operation of a MIMO system. The visual display system needs to handle such diverse scenarios as when a single performance parameter (or objective) is to be maximized/minimized or when multiple objectives need to be combined, and it should allow for condition-based maintenance and fault management and suggest alternative ways to move from one system state to another. The framework to create such a visual display system should account for the nonlinearities and uncertainties in the system and highlight both the nonlinearities and the uncertainties to the HDM. This paper describes such a decision framework. The framework provides the means to generate 3-D surfaces (decision surfaces) to be displayed to the HDM for the various operational scenarios that might occur in a MIMO system.

IV. GLOSSARY OF KEY TERMS

We briefly describe some key terms in this section for easy reference.

Parameters are the inputs and outputs of a MIMO system. The example system used in this paper (see Fig. 1) has 12 parameters. Parameters can be either performance parameters (the outputs: intermediary or final) or control parameters (the inputs and disturbances). In Fig. 1, P11, P12, P13, and P14 are the control parameters; three of the remaining parameters are intermediary performance parameters; and five are final performance parameters.

In 3-D plots, the term "z" parameter is used to refer to parameters that are represented on the z-axis. These are usually either performance parameters or a combination of performance parameters (in cases where multiple objectives need to be combined). The "x" parameter and "y" parameter refer to parameters that are represented on the x- and y-axes, respectively, of the 3-D plot. These can be either control parameters or intermediary performance parameters.

Performance maps are 2-D/3-D plots that are plotted from data obtained directly from experimental measurements (i.e., they are plots of unprocessed data). When possible they may also be obtained from physical models.

Decision surfaces are 3-D plots that are displayed to the decision makers in real time to enable them to make decisions. The main difference between a performance map and a decision surface is that a performance map displays raw data whereas a decision surface displays processed data. Any point on a performance


map or a decision surface is also denoted by a probability density function, i.e., the distribution of the z parameter at that (x, y) location.

Control Paths are paths on the decision surface from one point to another. They are mathematically represented by a matrix that has a 1 if the path passes through a particular point (x, y) and a 0 if not.

Norms are numerical values that are extracted from the 3-D decision surfaces.

Decision criteria are numerical values that are obtained through mathematical manipulation of the norms. The decision maker will use decision criteria in concert with the decision surfaces to make his/her decision.

Decision scenario refers to a decision making scenario that could happen in a MIMO system.

V. MODELING THE SYSTEM

In order to be able to generate decision surfaces in real time, one needs to have a well-defined model of the system. It is quite possible to build system models purely from first principles, but such models often neglect nonlinearity and operational uncertainty. We take a causal network approach to modeling the system. Data for the model requires experimentation. Fig. 1 shows a system with 12 parameters (that can be considered to be of interest during the operation of the system). This system is a generalization of a real system on which the techniques described in this paper were applied. The paper will use this simple model to illustrate the methodology to obtain decision surfaces and decision criteria in real time. The same methodology can then be applied to any MIMO system that is modeled on similar lines.

The parameters in Fig. 1 are connected to each other with arrows that signify causality. For example, the control parameters P11 and P12 affect the parameters P21, P22, and P23, and therefore there are arrows pointing from P11 and P12 towards P21, P22, and P23. Similarly, P23 causes P31 and hence the arrow from P23 to P31. The full system model is represented in this way. A simple naming convention has been used in Fig. 1: the first digit of a child node (the node at the head of the arrow) is always one more than the largest first digit of its parent nodes. For example, P21 and P23 are children of P11 and P12.

Once an initial causal network is put in place, it needs to be converted into a Bayesian causal network. This involves three steps [13]: resolving conditional independence and direct/indirect relationships; resolving the direction of arrows; and eliminating circular relations. These steps need to be followed to obtain a model that is robust for some of the mathematical manipulation required later. Creation of such a system model requires the interaction of both the system designer and the system user to ensure that the relevant parameters are used and also to be certain that causality among the parameters has been properly assigned.

The model at this stage is purely qualitative. Data needs to be collected to properly represent both the nonlinearities and the uncertainties in the relationships between the linked parameters. The relationships among the parameters are obtained by conducting experiments (through parametrically structured tests) and/or analytical modeling. Fig. 2 shows the histogram of parameter P31 that was obtained by giving a constant input value of P23 = 2 (the experiment was repeated 1000 times). The uncertainties in the data collected can be represented probabilistically

Fig. 2. Histogram of P31 collected by giving an input of P23 = 2 and repeating the experiment 1000 times.

Fig. 3. Conditional probability table representing the relationship between P 31 and P 23.

and stored in either a continuous or discrete form. In this paper, we store data in a discrete form and all our algorithms are applied to discrete data forms. The algorithms can be modified for a continuous data form. The data in Fig. 2 (relating the parameters P31 and P23) when discretized gives us the first two columns in Fig. 3. The third and fourth columns represent the probability distribution obtained for a different input value of P23. The experiment is repeated for different values of P23 to arrive at the full conditional probability table (CPT). The CPTs for all the parameter relationships [see (1)–(8)] need to be obtained similarly. These relations completely describe the model in Fig. 1; each of (1)–(8) is a conditional probability relation of a child parameter given its parents, for example p(P23 | P11, P12) and p(P31 | P23).
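As an illustration of how such a CPT can be assembled from parametrically structured tests, the following minimal Python sketch bins repeated measurements of a child parameter for each tested value of its parent. The bin edges, sample data, and function names are illustrative assumptions, not the implementation used in this work.

```python
import numpy as np

# Discretization of the child parameter (e.g., P31); edges are illustrative.
child_bins = np.linspace(0.0, 10.0, 11)

def build_cpt(samples_by_input):
    """samples_by_input: dict mapping each tested parent value (e.g., P23 = 2)
    to an array of repeated measurements of the child (e.g., 1000 runs)."""
    cpt = {}
    for parent_value, samples in samples_by_input.items():
        counts, _ = np.histogram(samples, bins=child_bins)
        cpt[parent_value] = counts / counts.sum()   # p(child bin | parent value)
    return cpt

# Example: 1000 noisy measurements of P31 for two input settings of P23.
rng = np.random.default_rng(1)
experiments = {2.0: rng.normal(4.0, 0.6, 1000),
               3.0: rng.normal(5.5, 0.9, 1000)}
cpt_P31_given_P23 = build_cpt(experiments)
print(cpt_P31_given_P23[2.0])   # a discrete distribution, analogous to the columns of Fig. 3
```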


Now that the full model has been defined, we are in a position to generate decision surfaces as demanded by a decision scenario. We will briefly introduce various decision scenarios before moving on to obtaining the decision surfaces. As each possible scenario is described, we cross-reference it to the relevant section that contains the mathematics for the generation of the decision surfaces.

VI. DECISION SCENARIOS

The following are some sample decision making scenarios in a MIMO system.

A. More Than Two x–y Parameters

Sometimes a parameter of interest may be dependent on more than two parameters. In such events, what is the procedure for displaying 3-D decision surfaces? In the discussed MIMO model, P23 is dependent on three x–y (control) parameters. A decision surface that can be displayed will keep one of the control parameters constant and have the other two parameters on the x- and y-axes. Fig. 6 shows a decision surface where P11 and P12 are the x- and y-axes, respectively; the third control parameter has been kept constant to get the surface. That parameter can take different values. If our requirement was to select it so as to get the maximum overall value for P23, how do we go about doing it? Section VIII-G addresses this question.

B. Relating Distant Parameters

Assume a scenario where the decision maker wants to control the performance parameter P23 by varying the control parameters P11 and P12. The data relating P23 to P11 and P12 is readily available and a 3-D plot can easily be drawn to aid the HDM. Now if the HDM instead wanted to control P52 with P11 and P12, then he needs to be shown a decision surface with P52 as the z-axis and P11 and P12 as the x- and y-axes. This is not directly possible as data directly relating P52 to P11 and P12 is not available [see (1)–(8)]. This involves combining performance maps. This procedure is illustrated in Section VII-A.

C. Percentage Knowledge

Some scenarios may require that we know where we are operating in terms of the capability of the system. In such scenarios, it may be desirable to display a normalized decision surface. The process to arrive at such surfaces is discussed in Sections VII-B and VII-C.

D. Adding Similar Phenomena Performance Parameters (Fault Tolerance)

Some scenarios may require us to add similar phenomena occurring in different parts of the system. For example, often the parameter "loss" in a MIMO system is due to various causes. A mechanism is desired to add all the losses (associated with different causes) and the uncertainties associated with these losses. The mechanism is illustrated in Section VII-D. Also, in fault tolerant systems, there is often a provision to obtain an output (say, for example, power output or force) from multiple subsystems. Section VII-D provides the means to serve this purpose as well.

Fig. 4. Different system operational envelopes shown on a representative input/ output plot.

E. Dependent Control Parameters

In some scenarios, the HDM might want to keep two or more control parameters dependent on each other. For example, the HDM may desire that when one control parameter goes up, another control parameter also goes up. The desired decision surfaces need to account for the relationship between the two control parameters. A method to obtain decision surfaces in such a scenario is illustrated in Section VII-H.

F. Performance Parameter Envelope

Systems are usually operated or run in a very conservative performance region (see Fig. 4). With additional information this envelope can be expanded. This is the enhanced performance envelope. This envelope needs to be shown to the HDM so as to get the most from the system. This is discussed in Section VII-I. The system can be operated beyond this envelope as well, at the risk of reducing the life of the system. Close to the envelope boundaries, the uncertainty is generally much larger. The norm developed in Section VIII-H helps the HDM keep track of the uncertainty in the system.

G. Multiplying Phenomena

Phenomena such as torque and speed are more easily measurable than, say, power. There may be situations in a MIMO system where a desired parameter that is not easily measurable may instead be obtained through the multiplicative combination of two or more measurable parameters. A methodology is therefore desired to obtain decision surfaces through multiplication of other performance maps. This is illustrated in Section VII-J.

H. Maximum and Minimum

Sometimes it is necessary for the HDM to know what the maximum performance limit is for the system at a particular point in time. Norms such as those discussed in Sections VIII-A and VIII-B allow one to do just that.


I. Operational Region

In some scenarios, the HDM may like to know how much operating region he/she will have when faced with a tight constraint. An example could be where the HDM wants to operate a chemical plant and keep the emissions of certain gases (which vary during the course of the day) within certain limits. The HDM will want to know how much operational flexibility is lost when the constraints are tight. Section VIII-C discusses the tools for obtaining such operating regions.

J. Control Range

Sometimes it is desirable to know the range of control at the hands of the HDM. Section VIII-E deals with a norm that calculates the control range.

K. Sensor Problem

Most MIMO systems have many sensors. There is a need to know if and when a sensor fails. Section VIII-H presents a simple means to detect sensor failure.

Fig. 5. Causal flow combination.

L. Multi-Objective Decision Making

Some scenarios may involve combining decision surfaces with different phenomena on the z-axis (such as output, loss, and noise). Sections VII-E–VII-G describe three different 3-D surface combinations involving multiple objectives. Section VII-F discusses combining the uncertainties of multiple performance parameters.

M. Condition-Based Maintenance

Fig. 6. Performance map no. 1 (P 23 versus P 11 and P 12).

In a large system one might want to monitor different performance degradations and estimate how healthy the system is or how much longer the system is expected to survive the task. Norms discussed in Sections VIII-D and VIII-F allow us to do this.

N. Move From One State to Another

Given a decision surface, the HDM is able to decide the best point on the surface to move to. Often there is more than one path to move from one point on the surface to another. The HDM needs guidance in choosing the appropriate path. Sections VIII-I and VIII-J discuss some tools to help in choosing paths.

VII. TOOLS TO OBTAIN DECISION SURFACES

Assuming that the system has been modeled completely and the data is available, we will now proceed to show the different techniques that can be used to obtain the required decision surfaces for the different decision making scenarios.

A. Causal Flow

To display a decision surface with P52 as the z-axis and P11 and P12 as the x- and y-axes, we need to combine four performance maps (see Figs. 5–9). The uncertainties (these uncertainties can occur either due to measurement/process uncertainties or due to modeling uncertainties in the physical model) also have to be combined. We modeled the system as a Bayesian

Fig. 7. Performance map no. 2 (P 31 versus P 23).

causal network so as to be able to use Pearl's belief propagation algorithm [14] to propagate the uncertainties through the intermediate parameters, relate P52 to P11 and P12, and also retain the uncertainties in the relationship. Note that in order to apply Pearl's algorithm, complex MIMO networks will need to be converted into a polytree structure using algorithms such as the junction tree algorithm [15], [16]. The combined map (decision surface) is shown in Fig. 10. The algorithm used to arrive at this map is given in the Appendix (Algorithm 1). The z parameter is obtained as a probability density function for each (x, y) coordinate. Each of the maps has top and bottom layers (which are lighter than the middle layer). These layers correspond to the 6σ limit (3σ on either side of the mean).
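The following minimal numpy sketch illustrates the kind of chained-CPT computation that the causal flow combination performs: discrete CPTs corresponding to performance maps 1–4 are combined, with P14 held fixed as evidence, to obtain the distribution and expected value of P52 over the (P11, P12) grid. The array shapes, names, and random placeholder CPTs are illustrative assumptions; the actual framework performs belief updating on the Bayesian network as described in Algorithm 1.

```python
import numpy as np

# Illustrative discretization sizes and CPTs (placeholders; in practice these
# come from the experimentally built conditional probability tables).
n11, n12, n23, n31, n41, n52, n14 = 5, 5, 8, 8, 8, 8, 4
rng = np.random.default_rng(0)

def random_cpt(*shape):
    """Random CPT normalized over the last axis (the child variable)."""
    t = rng.random(shape)
    return t / t.sum(axis=-1, keepdims=True)

p_P23_given_P11P12 = random_cpt(n11, n12, n23)      # performance map 1
p_P31_given_P23    = random_cpt(n23, n31)           # performance map 2
p_P41_given_P14P31 = random_cpt(n14, n31, n41)      # performance map 3
p_P52_given_P14P41 = random_cpt(n14, n41, n52)      # performance map 4

def p52_surface(p14_bin, p52_bin_centers):
    """Distribution and expected value of P52 over the (P11, P12) grid,
    holding P14 at a fixed (evidence) bin and marginalizing P23, P31, P41."""
    # p(P31 | P11, P12) = sum over P23 of p(P23 | P11, P12) p(P31 | P23)
    p31 = np.einsum('abk,kl->abl', p_P23_given_P11P12, p_P31_given_P23)
    # p(P41 | P11, P12, P14 = p14_bin) = sum over P31
    p41 = np.einsum('abl,lm->abm', p31, p_P41_given_P14P31[p14_bin])
    # p(P52 | P11, P12, P14 = p14_bin) = sum over P41
    p52 = np.einsum('abm,mn->abn', p41, p_P52_given_P14P41[p14_bin])
    expected = p52 @ p52_bin_centers     # E[P52] at each (P11, P12) grid point
    return p52, expected

dist, surface = p52_surface(p14_bin=1, p52_bin_centers=np.linspace(0.0, 10.0, n52))
print(surface.shape)   # (n11, n12) decision surface of expected P52
```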


Fig. 8. Performance map no. 3 (P 41 versus P 14 and P 31).


Fig. 11. Performance map no. 5 (P 23 versus P 11 and P 12).

Fig. 9. Performance map no. 4 (P 52 versus P 14 and P 41). Fig. 12. Decision surface (performance map no. 5 after it has been normalized).

Fig. 10. Decision surface (obtained by combining performance maps 1–4). Fig. 13. Performance map no. 6 (P 21 versus P 11 and P 12).

If the distribution is a normal distribution, then the value of the z parameter at a particular (x, y) point will lie within the upper and lower layers with a certainty of 99.73%.

B. Direct Normalization

When the z-axis of a map is a desirable criterion (such as efficiency), we apply (9) to obtain a decision surface that is scaled and has values from 0 to 1. This normalization uses two norms: the min norm (see Section VIII-B) and the range norm (see Section VIII-E)

PC_norm(x, y) = ( E[PC(x, y)] - N_min ) / N_range    (9)

Fig. 11 shows an original performance map and Fig. 12 is its normalized version.

C. Inverse Normalization

When the z-axis of a map is an undesirable criterion (such as loss), we apply (10) to obtain a decision surface that is scaled and has values from 0 to 1. This normalization uses two norms: the max norm (see Section VIII-A) and the range norm (see Section VIII-E)

PC_norm(x, y) = ( N_max - E[PC(x, y)] ) / N_range    (10)

Fig. 13 shows the original performance map and Fig. 14 is its normalized version. Note that after normalization (both direct and inverse), regions in the decision surface close to 1 represent the desirable regions of operation and regions close to 0 are undesirable and should be avoided.
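A minimal sketch of the two normalizations of (9) and (10) applied to an expected-value surface follows; the array contents are placeholders.

```python
import numpy as np

def direct_normalize(surface):
    """Desirable criterion (e.g., output): 1 = best, 0 = worst."""
    lo, hi = surface.min(), surface.max()        # min norm, max norm
    return (surface - lo) / (hi - lo)            # (value - min) / range, as in (9)

def inverse_normalize(surface):
    """Undesirable criterion (e.g., loss): 1 = best (lowest), 0 = worst."""
    lo, hi = surface.min(), surface.max()
    return (hi - surface) / (hi - lo)            # (max - value) / range, as in (10)

output_map = np.random.default_rng(2).random((20, 20)) * 50.0
print(direct_normalize(output_map).max(), inverse_normalize(output_map).min())
```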


Fig. 17. Decision surface no. 3 (P52 + P22 versus P11 and P12).

Fig. 14. Decision surface (performance map no. 6 after it has been normalized).

Fig. 18. Uncertainty bands for decision surface no. 1 (see Fig. 15).

Fig. 15. Decision surface no. 1 (P52 versus P11 and P12).

Fig. 19. Uncertainty bands for decision surface no. 2 (see Fig. 16).

Fig. 16. Decision surface no. 2 (P22 versus P11 and P12).

D. Additive

Our objective is to add P52 versus P11 and P12 (see Fig. 15) and P22 versus P11 and P12 (see Fig. 16). Here P52 and P22 are the same phenomenon. Note that in order to get P52 versus P11 and P12 we need to use the causal addition tool (see Section VII-A). In both Figs. 15 and 16, only the expected value of the z parameter is shown. The uncertainty bands are shown in Figs. 18 and 19. For this combination, we use the theorem [17] that states that when two distributions are added, the expected value of the sum is the sum of the individual expected values

E[PC1 + PC2] = E[PC1] + E[PC2]    (11)

Combining P52 and P22, we get the decision surface shown in Fig. 17. Similarly, the uncertainties can be added using the theorem [17] that states that when two variables are added, the variance of the sum is the sum of the individual variances as long as the two variables are independent

Var[PC1 + PC2] = Var[PC1] + Var[PC2]    (12)
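The additive combination can be sketched as follows; the surfaces are placeholders, and the 3σ bands mirror those drawn in Figs. 18–20.

```python
import numpy as np

def add_surfaces(mean1, sd1, mean2, sd2):
    """Point-by-point addition of two surfaces of the same phenomenon."""
    combined_mean = mean1 + mean2                      # (11)
    combined_sd = np.sqrt(sd1**2 + sd2**2)             # (12), assuming independence
    return combined_mean, combined_sd

rng = np.random.default_rng(3)
m1, s1 = rng.random((20, 20)), 0.1 * rng.random((20, 20))
m2, s2 = rng.random((20, 20)), 0.1 * rng.random((20, 20))
mean_sum, sd_sum = add_surfaces(m1, s1, m2, s2)
upper_band, lower_band = mean_sum + 3 * sd_sum, mean_sum - 3 * sd_sum  # as in Figs. 18-20
```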

Fig. 20. Uncertainty bands for decision surface no. 3 (see Fig. 17).

The uncertainty bands in Figs. 18 and 19 when added give Fig. 20.

E. End Task

There will be times when we may need to combine performance parameters such as noise, loss, and output (see Table I).


TABLE I EXAMPLES OF PERFORMANCE PARAMETER COMBINATIONS

Fig. 22. Uncertainty map for the parameter P 22 with respect to P 11 and P 12.

Fig. 23. Uncertainty map for the parameter P 23 with respect to P 11 and P 12.

Fig. 21. Decision surface no. 4 (obtained by combining normalized surfaces; P 21 and P 23).

Let us say that our objective is to operate the system at high output and minimize loss. Then we need to combine the P23 map (see Fig. 11) and the P21 map (see Fig. 13). The first step in this combination is to apply direct normalization on P23 and inverse normalization on P21. Once they have been scaled, they are added together and averaged. The decision surface is then plotted (see Fig. 21). The decision surface (see Fig. 21) allows the HDM to make intelligent judgements on regions of operation. All the other objectives in Table I can be represented as decision surfaces. Rows 1–3 involve only one objective.

F. Uncertainty

The uncertainties of the performance parameters may also be combined to give the HDM insight into the uncertainties when multiple objectives are involved. Fig. 22 is an uncertainty map for P22. The uncertainties [represented by their standard deviation (SD)] are normalized (inverse normalization) to obtain the surface shown in Fig. 22. Similarly, Fig. 23 is the uncertainty map for parameter P23. Note that after normalization, values close to 1 are desirable and values close to 0 are undesirable. So, if our objective is to minimize uncertainty, then the system should be operated in those regions where the surface is closer to 1. The uncertainties are combined by adding the two normalized maps (see Figs. 22 and 23) and averaging them. Fig. 24 shows the resulting surface.

Fig. 24. Uncertainty map for the parameters P22 and P23 combined with respect to P11 and P12.

G. Region Partition

Sometimes it is desirable to know which of the performance objectives dominate or reach close to optimality in a particular operational region. Algorithm 2 in the Appendix is a methodology to obtain a surface that conveys this information to the HDM. Fig. 25 shows three performance parameters (P21, P22, and P23) that have been normalized. After the application of Algorithm 2, we get the surface shown in Fig. 26. This surface makes it easy for the HDM to visualize operational areas where one performance parameter performs better than the others. Note that Algorithm 2 gives us two outputs. First, it gives us the new combined surface. Second, it also tells us which of the surfaces that were combined dominates, and in what regions the individual performance parameters dominate. In the algorithm, a separate variable keeps track of the surface that dominates at any given position (x, y).
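A minimal sketch of the region-partition combination follows; it returns both the combined surface and the index of the dominating surface at each grid point, which is the information conveyed by Fig. 26. The normalized input surfaces are placeholders.

```python
import numpy as np

def region_partition(normalized_surfaces):
    """At each (x, y), keep the best normalized value and remember which
    surface it came from."""
    stack = np.stack(normalized_surfaces)          # shape (k, nx, ny)
    combined = stack.max(axis=0)                   # dominating value at each (x, y)
    dominator = stack.argmax(axis=0)               # index of the dominating surface
    return combined, dominator

rng = np.random.default_rng(4)
p21_n, p22_n, p23_n = (rng.random((30, 30)) for _ in range(3))
combined, who = region_partition([p21_n, p22_n, p23_n])
# 'who' colors the surface of Fig. 26: 0 -> P21 dominates, 1 -> P22, 2 -> P23
```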


Fig. 25. Normalized P 21, P 22, and P 23 surfaces.

Fig. 28. Surfaces relating P 52 to P 11 and P 13 for different values of P 12.

Fig. 26. Decision surface showing which parameter dominates in which region.

Fig. 29. Surface relating P 52 to P 13 (Y axis) and P 12 and P 11 (X axis).

Fig. 27. Surfaces relating P 52 to P 12 and P 13 for different values of P 11.

Fig. 30. Envelope of the performance parameter P 23.

H. Control Parameter Combination

Sometimes, instead of allowing control parameters to be independent, one may tie two parameters together through a relation. In such cases, two control parameters will be depicted on the same axis. Fig. 27 represents P52 and its relation to P12 and P13. Fig. 28 represents P52 and its relation to P11 and P13. Fig. 29 is the combined surface, in which P12 and P11 share the x-axis and P13 is on the y-axis. The algorithm for this combination is provided in the Appendix (see Algorithm 3).

I. Envelope Generation

Fig. 30 shows maps relating P23 to its x- and y-axis control parameters (for different values of the remaining control parameters). The envelope corresponds to the top surface and the bottom surface and shows the performance limits. Algorithm 4 in the Appendix details the procedure to obtain the top and bottom surfaces.
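A minimal sketch of envelope generation follows: the control parameters that are not on the plot axes are swept, and the point-wise maximum and minimum over the sweep form the top and bottom envelope surfaces. The surface-evaluation function stands in for belief updating on the causal network and is an illustrative assumption.

```python
import numpy as np

def envelope(evaluate_surface, swept_settings):
    """Stack the surface obtained at each swept setting and keep the
    point-wise extremes as the top and bottom envelopes."""
    surfaces = np.stack([evaluate_surface(s) for s in swept_settings])
    return surfaces.max(axis=0), surfaces.min(axis=0)   # top, bottom envelopes

x = np.linspace(0, 1, 40)
grid_x, grid_y = np.meshgrid(x, x, indexing='ij')
fake_eval = lambda setting: setting * grid_x + (1 - setting) * np.sin(3 * grid_y)
top, bottom = envelope(fake_eval, swept_settings=np.linspace(0.2, 1.0, 5))
```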

J. Multiplicative

Phenomena that are conditionally independent (verified by d-separation in the causal network) can be multiplied to obtain various decision surfaces. The maps in Figs. 31 and 32 (P23 and P51) are conditionally independent. The probability distribution for point A in Figs. 31 and 32 is given in Table II. Multiplying each of the rows in column 2 with each of the rows in column 4 (see Table II), we get Table III. Table III after consolidation gives Table IV, which is the distribution of point A in Fig. 33. The rest of the points on the decision surface are arrived at similarly. The generalized algorithm is presented in the Appendix (Algorithm 5).
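A minimal sketch of the multiplicative combination at a single grid point follows; the values and probabilities are illustrative and correspond to the Table II to Table IV consolidation.

```python
import numpy as np

def multiply_distributions(vals1, probs1, vals2, probs2):
    """Multiply every value of one discrete distribution with every value of
    the other (as in Tables II and III) and consolidate identical products
    (Table IV)."""
    products = {}
    for v1, p1 in zip(vals1, probs1):
        for v2, p2 in zip(vals2, probs2):
            key = round(v1 * v2, 6)
            products[key] = products.get(key, 0.0) + p1 * p2
    vals = np.array(sorted(products))
    return vals, np.array([products[v] for v in vals])

v, p = multiply_distributions([2, 3], [0.4, 0.6], [1, 2], [0.5, 0.5])
print(v, p, p.sum())   # distribution of the product at point A; probabilities sum to 1
```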


TABLE IV PROBABILITY DISTRIBUTION OF POINT A (CONSOLIDATED FROM TABLE III)

Fig. 31. Plot of P 23 versus P 11 and P 12 (along with the uncertainty bands).

Fig. 32. Plot of P 51 versus P 11 and P 12 (along with the uncertainty bands).

TABLE II PROBABILITY DISTRIBUTION OF POINT A (IN P 23 AND P 51 MAPS)

Fig. 33. P23 × P51 decision surface.

TABLE III PROBABILITY DISTRIBUTION OF POINT A (IN P23 × P51 MAP)

VIII. DECISION SURFACE NORMS

In addition to the surfaces, the HDM may also need single number values to help make quick decisions. Single number summaries of these decision surfaces can be extracted using a number of mathematical tools. In this section, we discuss some of the most relevant. Mathematically, obtaining a norm boils down to mapping a matrix representing a decision surface to a single scalar value.

A. Maximum Norm

The maximum norm is given by

N_max = max over all (x, y) of E[PC(x, y)]    (13)

where E[PC(x, y)] is the expected value of the distribution at (x, y).

B. Minimum Norm

The minimum norm is given by

N_min = min over all (x, y) of E[PC(x, y)]    (14)


Fig. 34. Area norm example.

where E[PC(x, y)] is the expected value of the distribution at (x, y).

C. Area Norm

The area norm is given by

N_area = area of the region { (x, y) : Pr( LB <= PC(x, y) <= UB ) >= q }    (15)

The area norm gives the HDM an indication of the size of the region where the parameter of interest (the z parameter) lies between an upper and a lower bound with a probability given by q. Fig. 34 is an example of the use of the area norm. In Fig. 34, PC is the parameter of interest and it is required to find out the region where PC would be less than 70 with 99% certainty. ABCD is a projection of the region that meets this criterion. Fig. 35 is another example of the use of the area norm. Let us say we need to find operational regions that meet the criteria that one performance parameter should be greater than 80 and another should be less than 68; we use a modified form of the area norm to calculate this. In Fig. 35, the first two charts are contour plots of the two parameters while the third chart shows the first two plots superimposed. Only 12% of the total operational region allows for operation of the system with both constraints satisfied.
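A minimal sketch of the area norm follows. It assumes, purely for illustration, that the distribution at each grid point is normal with a known mean and SD, and it reports the fraction of the operational region that satisfies the probabilistic constraint.

```python
import numpy as np
from scipy.stats import norm

def area_norm(mean, sd, lower=-np.inf, upper=70.0, confidence=0.99):
    """Fraction of the grid where PC lies in [lower, upper] with at least the
    required probability (e.g., the region ABCD in Fig. 34)."""
    prob_inside = norm.cdf(upper, mean, sd) - norm.cdf(lower, mean, sd)
    meets = prob_inside >= confidence
    return meets.mean(), meets        # fraction of the region, and its mask

rng = np.random.default_rng(5)
mean_map = 60 + 20 * rng.random((50, 50))
sd_map = 2 + rng.random((50, 50))
fraction, mask = area_norm(mean_map, sd_map)
print(f"{100 * fraction:.1f}% of the operational region satisfies the constraint")
```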

Fig. 35. Area norm example 2.

D. Difference Norm

The difference norm is given by

N_diff = sum over all (x, y) of ( E[PC_A(x, y)] - E[PC_B(x, y)] )    (16)

where PC_A and PC_B are the two maps being compared.

Fig. 36. Use of difference norm for condition-based maintenance.

The difference norm is very useful for condition-based maintenance of the system. If a parameter is assumed to be important, then its initial performance map (the map collected when the system is built; the nominal performance map in Fig. 36) is compared with its current operational map (assessed map)

to obtain its health degradation. Comparing the assessed performance map with the required performance map will give the HDM an indication of the remaining useful life. The required performance map is built based on the duty cycle needs of the system [18].


Fig. 37. Decision surface P 52 versus P 12 and P 13 (for P 11 = 11).

Fig. 39. Decision surface P 52 versus P 12 and P 13 (for P 11 = 14).

Fig. 38. Decision surface P 52 versus P 12 and P 13 (for P 11 = 20).

Fig. 40. Plot demonstrating use of volatility norm.

E. Range Norm

The range norm is given by

N_range = N_max - N_min    (17)

This norm gives the HDM an indication of the range of performance that can be controlled.

F. Volume Norm

The volume norm is given by

N_volume = sum over all (x, y) of E[PC(x, y)]    (18)

This norm can also be used for condition-based maintenance as it allows one to compare the volume under the decision surfaces.

G. Root Mean Square Norm

The root mean square norm is given by

N_rms = sqrt( (1/nm) sum over all (x, y) of E[PC(x, y)]^2 )    (19)

where n and m are the number of grid points along the x- and y-axes. Applying (19) to the three decision surfaces (see Figs. 37–39), we get the RMS norms of the three surfaces to be 62.89, 62.30, and 63.44. If our objective was to minimize P52, then using this norm we would choose the decision surface with the smallest RMS norm (62.30) as the basis for further decision making. This norm thereby guides us to hold some of the parameters constant (in this example, P11).

H. Volatility Norm

The volatility norm is given by

N_volatility = (1/nm) sum over all (x, y) of SD[PC(x, y)]    (20)

where SD[PC(x, y)] is the standard deviation of the distribution at (x, y). Volatility is a measure of the risk associated with using a decision surface. We base this on the length of the uncertainty band: the bigger the uncertainty band of a map, the bigger the volatility, so the norm is based on the SD. If the volatility of a decision surface is high, then its use could lead to considerable uncertainty in the decisions made using this map. This is usually the case when the operation is beyond the rated operational regime. Fig. 40 is a plot of a performance parameter versus one of its control parameters. The curve corresponding to the lower control setting is within the safe envelope and less uncertainty is expected in this case. When the setting is increased to 10, the system is pushed to perform at a higher level, outside the conventional envelope. Usually the process is a lot less predictable when operated beyond the rated values. The uncertainty band at the higher setting is definitely larger than the band at the lower setting, and hence the corresponding decision surface is considered more volatile than the one obtained within the rated regime.
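Several of these norms can be computed directly from the expected-value and SD surfaces, as in the following minimal sketch; the exact normalizations used in this work may differ, and the arrays are placeholders.

```python
import numpy as np

def surface_norms(mean, sd):
    """Single-number summaries of a decision surface, following the verbal
    descriptions of (13)-(20)."""
    return {
        "max": mean.max(),                         # (13)
        "min": mean.min(),                         # (14)
        "range": mean.max() - mean.min(),          # (17)
        "volume": mean.sum(),                      # (18) volume under the surface
        "rms": np.sqrt(np.mean(mean**2)),          # (19)
        "volatility": sd.mean(),                   # (20) based on the uncertainty band
    }

rng = np.random.default_rng(6)
mean_map = 60 + 5 * rng.random((40, 40))
sd_map = 0.5 + 0.2 * rng.random((40, 40))
print(surface_norms(mean_map, sd_map))
```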


Fig. 42. P21 versus P11 and P12.

Fig. 41. Increase in volatility due to sensor faults.

Fig. 43. P21 + P23 versus P11 and P12.

Any decision based on the more volatile surface will be much more uncertain than one based on the less volatile surface. For condition-based maintenance, new performance data is collected at frequent intervals and the system model is updated frequently. If the maps generated from the new models are more volatile than the maps generated from the original model, then it could signal possible degradation of an actuator or the existence of a faulty sensor (see Fig. 41).

I. Monotonicity Norm

Monotonicity is a measure of the number of ups and downs in a map. It gives us a measure of how difficult it is to move from one point on a decision surface to another. The monotonicity norm is given by

N_mono = (1/K) sum over successive grid steps of XOR( SIGN(delta_k), SIGN(delta_k+1) )    (21)

where K is the number of comparisons, delta_k is the change in the surface value over the kth grid step, SIGN is a Boolean operator that returns a 1 if the operand is positive and returns a 0 if the operand is negative or 0, and XOR is an exclusive OR operator.

Consider the surface where the objective is just to minimize noise (P21 with weight 1 and P23 with weight 0) (see Fig. 42). Also, consider the surface where the noise (P21) and output (P23) are combined (equal weights of 0.5) (see Fig. 43). It is useful to know which of the decision surfaces (obtained through end task combination: Section VII-E) is easiest to navigate. Applying the monotonicity algorithm to the two surfaces, we get 0.0769 for Fig. 42 and 0.0435 for Fig. 43. The monotonicity of Fig. 42 is greater than that of Fig. 43. Therefore, it is easier to move from one position to another on Fig. 42 than it is on Fig. 43.
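A minimal sketch of a monotonicity-style measure follows: it counts sign flips of successive differences across the surface using SIGN and XOR, normalized by the number of comparisons. The exact normalization and orientation of the published norm are not reproduced here, so the numerical values will not match those quoted above.

```python
import numpy as np

def monotonicity(mean):
    """Fraction of neighboring grid steps whose slope changes sign."""
    score, comparisons = 0, 0
    for axis in (0, 1):
        diffs = np.diff(mean, axis=axis)
        sign = (diffs > 0).astype(int)                   # SIGN
        if axis == 0:
            a, b = sign[:-1, :], sign[1:, :]
        else:
            a, b = sign[:, :-1], sign[:, 1:]
        flips = np.logical_xor(a, b)                     # XOR of successive signs
        score += flips.sum()
        comparisons += flips.size
    return score / comparisons

x = np.linspace(0, 1, 30)
smooth = np.add.outer(x, x)                 # monotone surface -> score near 0
bumpy = np.sin(12 * np.add.outer(x, x))     # many ups and downs -> larger score
print(monotonicity(smooth), monotonicity(bumpy))
```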

J. Control Path Norm

The control path norm allows the HDM to choose an appropriate path from among the many paths available to move a system from one state to another (see Fig. 44). There can be many definitions of a control path norm. Two simple norms are as follows.
1) The number of discrete points in the path can be used to differentiate between different paths

N_path1 = sum over all (x, y) of C(x, y)    (22)

where C is a matrix that has a 1 if the path passes through a particular point (x, y) and a 0 if not.
2) The sum of the z values corresponding to different paths may also be used as a differentiator, mathematically defined as

N_path2 = sum over all (x, y) of C(x, y) * E[PC(x, y)]    (23)
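A minimal sketch of the two control path norms follows; the path matrices and surface are illustrative.

```python
import numpy as np

def path_norms(path_matrix, mean_surface):
    """Path length (22) and sum of z values along the path (23)."""
    length = path_matrix.sum()                            # (22) number of points on the path
    z_sum = (path_matrix * mean_surface).sum()            # (23) sum of z values on the path
    return length, z_sum

mean_surface = np.fromfunction(lambda i, j: i + 0.5 * j, (10, 10))
path_a = np.eye(10, dtype=int)                            # a diagonal path
path_b = np.zeros((10, 10), dtype=int)
path_b[0, :] = 1; path_b[:, 9] = 1                        # an L-shaped path
print(path_norms(path_a, mean_surface), path_norms(path_b, mean_surface))
```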


Fig. 44. Different control paths for movement from one point to another.

IX. APPLICATIONS OF THE FRAMEWORK

In this section, we show some example applications of the framework. The basic methodology is the same for all the example cases and can be summarized as follows.
1) Identify the various decision making scenarios for a particular MIMO system (see Section VI). This requires interviews with the people who operate the system.
2) Identify the key performance and control parameters that are associated with each of those scenarios.
3) Identify and quantify the relationships between the performance and the control parameters into a Bayesian causal network (see Section V). This requires interactions among the system operator, system designer, and the system validator.
4) Identify what visuals to display to the system operator to aid real time decision making. This would be through interactions with the system operator and the visual interface designer.
5) Use the techniques in Section VII to display appropriate decision surfaces and use the techniques in Section VIII to display numbers extracted/condensed from these decision surfaces.
The applicability of this methodology in a selected set of domains is shown next.

A. Actuator Operation

An actuator is a MIMO system (see Fig. 45). The performance parameters torque, noise, loss, actuator temperature, speed, etc., can be managed by varying control parameters such as voltage, current, turn-on angle, and turn-off angle. Parameters such as load and external temperature are disturbance parameters but can also be considered as control parameters. The first step in using the visual framework described in this paper is to envision all the different possible actuator operational scenarios that could occur during the life of the actuator. Examples of such scenarios are the following: 1) run the actuator quieter than its normal operational quietness; 2) operate the actuator at a torque of, say, more than 70% of peak load and an efficiency of at least 60%. Scenarios such as these help us list all the parameters (performance parameters and control parameters) that are of importance. A causal network is then built encompassing all these parameters and the relationship among the parameters (including

Fig. 45. Intelligent actuator.

the uncertainty in the relationships) is arrived at either through experiments or analytically. Now one may use the framework described in this paper to provide visual data to the HDM to aid the operator in making decisions. For example, the visual plots that may be shown in relation to the scenarios described previously are possibly as follows.
1) A noise envelope (see Section VII-I) with respect to turn-on angle and turn-off angle. (The envelope is created by varying the other control parameters, voltage and current, for the load acting on the actuator at the instant under consideration.) The decision surface is normalized (see Section VII-C) and the current operating point is shown on the surface. The HDM makes a choice of a quieter operational point by looking at the surface.
2) A surface showing regions in the operational regime where both the torque is greater than 70% of peak and the efficiency is greater than 60%. We use the area norm (see Section VIII-C) to arrive at this surface.

B. Unmanned Ground Vehicle

An unmanned ground vehicle (UGV) can be considered to be made up of multiple actuators (the UGV in Fig. 46 uses nine actuators). The actuators may not all have the same performance capabilities. Causal models need to be built for each of the actuators. Each actuator causal model becomes a subset of the UGV causal model. Here also, one needs to begin with a UGV operational scenario analysis to identify the various parameters of interest at the UGV level. Some examples of scenarios that a UGV would be faced with are as follows.
1) Perform a certain set of tasks (such as move to a location, drill a hole in a rock, place a sensor in the hole, cover it, and then head back to its base) with limited battery reserves.
2) Is the health of the UGV sufficient to allow it to perform the previously mentioned tasks and return to the charging base? What is the probability that the UGV will fail during its mission?
To handle the previous two scenarios, the HDM may be provided with the following visuals to help make decisions.


Fig. 47. Commercial aircraft actuators.

Fig. 46. UGV.

1) Assume that battery consumption for each task depends on the speed at which it performs the task. Then a decision surface with "Battery Consumption" on the z-axis and the "Task" (move, drill, place, cover, etc.) and "Speed of Completion of the Task" on the x- and y-axes can be shown to the HDM. This will allow the HDM to decide which tasks can be slowed down so that the battery lasts for the duration of the mission.
2) "Nominal," "Assessed," and "Required" decision surfaces (Section VIII-D) for all the actuators, coupled with remaining useful life (RUL) information for each of the components, may be displayed to the HDM to help make decisions about the health of the UGV. Calculation of RUL requires an analysis of the task to be performed as well.
The previously mentioned decision surfaces can easily be created using the tools presented in this paper once a causal model is constructed that incorporates all the relevant parameters.

Fig. 48. Some smart car features.

C. Aircraft Power Management

The aircraft has two power sources; the first is its primary source associated with generators on the jet engines (turbines) to supply the primary power demands on the aircraft (control surfaces, landing gear, electronics, etc.), and the second is the auxiliary power unit (APU), which normally supplies power for lower level auxiliaries, cabin environment, etc. (see Fig. 47). The duty cycle demands for power of each subsystem remain uncertain, the sequencing or overlapping of power requirements can create unusually large peak demands, and the question of managing partial or total failure of a power source, its distribution network, and its converters can create a demanding power management requirement. Military aircraft must protect their ability to fight while civilian aircraft must be able to reconfigure their power systems to maximize safety. Example scenarios of power management in an aircraft are as follows: 1) meet load requirements and maximize fuel economy during cruise; 2) complete the flight safely after failure of one power generator.

The decision surfaces that may be displayed are as follows.
1) A plot with "Fuel Consumed" on the z-axis and "Load" and "No. of Generators" on the x- and y-axes.
2) Plots showing the performance degradation of various tasks due to the failure of a generator, and plots showing the excess capacities of the other generators that could be used to carry on with the tasks.

D. Smart Car Operation

Future automobiles will be designed to expand human choice (see Fig. 48). These choices might include maximizing acceleration, improving gas mileage, or enhancing the overall safety of the vehicle. This will lead to multi-speed electric drive wheels, active suspensions, intelligent brakes, reconfigurable tires, etc. All this eliminates passive systems (present drive trains, damper/spring suspensions, unique purpose tires, bevel gear differentials, etc.), all of which offer minimal choices to the human operator. Gear shifts are now either human controlled or computer controlled; brakes are now also more intelligent. Example scenarios are the following: 1) safely cross a terrain using active suspensions, electric drive wheels, and brakes; 2) balance fuel economy and acceleration in urban traffic conditions. The following decision surfaces may be displayed corresponding to the previous scenarios:
1) "Speed" on the z-axis and "Safety Factor" and "Terrain Condition" on the x- and y-axes;


2) “Distance Covered per Hour” on the -axis and “Fuel Consumed” and “Acceleration” on the - and -axes. The objective of this section was to show applicability of the framework to integrate data from heterogeneous systems to provide a visual decision making system for the HDM. The creation of a causal model is vital to this framework.

X. CONCLUSION

This paper lays the groundwork to enable a system designer to create a visual decision making system for a MIMO system from scratch. In this paper, ten methods to create decision surfaces and ten norms to arrive at decision criteria are developed. These form the base of the framework, which may be extended. Different MIMO operational scenarios are presented along with means to visualize such scenarios to structure decisions. The framework was tested on a MIMO system at the Robotics Research Group at The University of Texas at Austin and software is being built around this framework. Due to the proprietary nature of the work, the software system developed to date is not shown in this paper. As more decision scenarios concerning a MIMO system are considered, this framework will be tested for robustness. Additional methodologies to create decision surfaces and extract norms will be developed as needed. Due to the modular nature of the framework, it is expected that new methodologies can easily be built into the framework. The framework shows promise for use in system of systems and a preliminary indication of that is given in this paper.

APPENDIX

Algorithm 1—Causal Flow Combination:
1) Once the x-, y-, and z-axes of the desired decision surface are decided on, if there are other parameters that affect the z parameter, then hold them as constants and enter them as evidence in the Bayesian causal network.
2) Transform the network into a polytree using the junction tree algorithm.
3) Set step sizes for x and y based on how finely the CPTs involving them have been discretized. Also set the maximum and minimum values for x and y.
Loop through x from its minimum to its maximum in increments of the x step size.
   Loop through y from its minimum to its maximum in increments of the y step size.
      Perform belief updating on the polytree with the current x and y.
      Record the distribution of the z parameter as that corresponding to the current x and y.
   Exit loop y.
Exit loop x.

Algorithm 2—Region Partition Combination: Let the normalized maps to be combined be map 1 through map k, and let the combined map be the output.
Loop through x from its minimum to its maximum.
   Loop through y from its minimum to its maximum.
      Set the combined value at (x, y) to zero and clear the dominator record at (x, y).
      Loop through all the maps from 1 to k.
         If the current map's value at (x, y) is greater than the combined value at (x, y), then set the combined value to it and record the current map as the dominator at (x, y).
      Exit map loop.
   Exit loop y.
Exit loop x.

Algorithm 3—Control Parameter Combination: Let the maps to be combined relate the z parameter to the two tied control parameters. Let each of the tied control parameters be discretized into an equal number of parts between its minimum and maximum.
Loop through the first tied control parameter from its minimum to its maximum.
   Loop through the second tied control parameter according to the assumed relation (e.g., from 0 in increments of 1).
      Set both control parameters as evidence, update beliefs to get the distribution of the z parameter, and record it against the shared axis coordinate.
   Exit inner loop.
Exit outer loop.

Algorithm 4—Envelope Generation Combination: Assume that we need to find the envelope for the map of a performance parameter, and let the remaining control parameters (those not on the plot axes) be the parameters to be swept. Then the algorithm is as follows.
Loop through x from its minimum to its maximum in increments of the x step size.
   Loop through y from its minimum to its maximum in increments of the y step size.
      1. Set the top envelope value at (x, y) to a very small number.
      2. Set the bottom envelope value at (x, y) to a very large number.
      3. Loop through the first swept parameter from its minimum to its maximum in increments of its step size.
      4.   Loop through the second swept parameter from its minimum to its maximum in increments of its step size.
      5.   ...
      6.     Loop through the last swept parameter from its minimum to its maximum in increments of its step size.
               Update beliefs to get the value of the performance parameter.
               If it is greater than the current top envelope value, then record it as the top envelope value at (x, y).
               If it is less than the current bottom envelope value, then record it as the bottom envelope value at (x, y).
      7.     Exit the innermost loop.
      8.   ...
      9. Exit the remaining swept-parameter loops.
   Exit loop y.
Exit loop x.

Algorithm 5—Multiplicative Combination: Let the conditionally independent maps to be multiplied be map 1 and map 2, and let the combined map be the output.
Loop through x from its minimum to its maximum.
   Loop through y from its minimum to its maximum.
      Multiply every value in the distribution of map 1 at (x, y) with every value in the distribution of map 2 at (x, y), multiplying the corresponding probabilities, and consolidate identical products to obtain the distribution of the combined map at (x, y).
   Exit loop y.
Exit loop x.

ACKNOWLEDGMENT

The authors would like to thank their colleagues at the Robotics Research Group for the many discussions which resulted in this manuscript. REFERENCES [1] G. I. Lohse, K. Biolsi, N. Walker, and H. H. Rueter, “A classification of visual representations,” Commun. ACM, vol. 37, no. 12, Dec. 1994. [2] T. Hanne and H. L. Trinkaus, “KnowCube for MCDM—Visual and interactive support for multicriteria decision making,” Fraunhofer ITWM, Kaiserslautern, Germany, 2003 [Online]. Available: http://publica.fraunhofer.de/documents/N-46487.html [3] K. M. Dillon, P. J. Talbot, and W. D. Hillis, “Knowledge visualization: Redesigning the human-computer interface,” Technol. Rev. J., vol. Spring/Summer, pp. 37–55, 2005. [4] G. L. Andrienko and N. V. Andrienko, “Interactive visual tools to support spatial multicriteria decision making,” in Proc. 2nd Int. Workshop User Interfaces Data Intensive Syst., 2001, pp. 127–131. [5] A. Inselberg, “Visual data mining with parallel coordinates,” Comput. Stat., vol. 13, no. 1, pp. 47–63, 1998. [6] E. H. Winer and C. L. Bloebaum, “Development of visual design steering as an aid in large scale multidisciplinary design optimization—Part 1: Method development,” J. Structural Multidisciplinary Opt., vol. 23, no. 6, pp. 412–424, Jul. 2002. [7] E. H. Winer and C. L. Bloebaum, “Development of visual design steering as an aid in large scale multidisciplinary design optimization—Part 2: Method validation,” J. Structural Multidisciplinary Opt., vol. 23, no. 6, pp. 425–435, Jul. 2002. [8] P. A. Simionescu and D. Beale, “Visualization of hypersurfaces and multivariable (objective) functions by partial global optimization,” Visual Comput., vol. 20, no. 10, pp. 665–681, Dec. 2004. [9] A. Messac and X. Chen, “Visualizing the optimization process in realtime using physical programming,” Eng. Opt. J., vol. 32, no. 5, pp. 721–747, May 2000. [10] J. P. Eddy and K. Lewis, “Visualization of multi-dimensional design and optimization data using cloud visualization,” presented at the ASME Des. Tech. Conf. Des. Autom. Conf., Montreal, QC, Canada, 2002.


[11] D. Kanukolanu, K. E. Lewis, and E. H. Winer, “A multidimensional visualization interface to aid in trade-off decisions during the solution of coupled subsystems under uncertainty,” J. Comput. Inf. Sci. Eng., vol. 6, no. 3, pp. 288–299, Sep. 2006. [12] G. Agrawal, K. Lewis, K. Chugh, C. H. Huang, S. Parashar, and C. L. Bloebaum, “Intuitive multidimensional visualization for MDO,” presented at the 10th AIAA/USAF/NASA/ISSMO Symp. Multidisciplinary Anal. Opt., Albany, NY, 2002. [13] S. Nadkarni and P. P. Shenoy, “A Bayesian network approach to making inferences in causal maps,” Euro. J. Oper. Res., vol. 128, pp. 479–498, 2001. [14] J. Pearl, Probabilistic Reasoning in Intelligent Systems. San Mateo, CA: Morgan Kaufmann, 1988. [15] C. Huang and A. Darwiche, “Inference in belief networks: A procedural guide,” in Int. J. Approximate Reason.. Amsterdam, The Netherlands: Elsevier, 1994, vol. 11, pp. 1–158. [16] L. Skelar, “Bayesian belief network propagation engine in Java, ” Kent University, Kent, U.K., Proj. Rep. No. CO600/CO620, 2004 [Online]. Available: www.cs.kent.ac.uk/pubs/ug/2004/co600/bbn/report.pdf [17] H. Jeffrey, Theory of Probability, Third ed. Cambridge, U.K.: Oxford Univ. Press, 1961. [18] L. Koran and D. Tesar, “Duty cycle analysis to drive intelligent actuator development,” IEEE Syst. J., accepted for publication. Pradeepkumar Ashok received the B.Tech. degree in production engineering and management from N.I.T, Kozhikode, India, in 1998, and the M.S. and Ph.D. degrees in mechanical engineering from the University of Texas at Austin, Austin, in 2002 and 2007, respectively. From 2000 onwards, he was a Graduate Research Assistant with the Robotics Research Group (RRG), University of Texas at Austin, Austin, where he is currently the Program Manager of the Actuator Development Effort. From 1998–1999, he was with an automotive manufacturing company, T.E.L.C.O., Pune, India. He is a coauthor of 13 reports sponsored by the Office of Naval Research. He specializes in prime mover design and actuator intelligence.

Delbert Tesar has been extremely active in robotics for over 40 years. He currently leads the largest Robotics Research Group in Mechanical Engineering, University of Texas at Austin, Austin. His program has generated 55 Ph.D. and 129 M.Sc. degrees. He has written 90 position papers, 215 refereed conference and journal papers, and given more than 500 invited lectures. He also holds several U.S. patents. He has established a unique open architecture for robots and manufacturing cells to be assembled on demand using standardized actuators as building blocks on one universal system software to operate all assembled systems. He has shown a true commitment to national service as well, having served on various U.S. Air Force and National Research Council committees for 30 years. His research has focused on the need to produce higher performance production machines at lower costs for value added manufacturing to create new opportunities for U.S. business development and employment.
