Prepublication version of the paper to appear in: Advanced Engineering Informatics 19 (2005) 263–280. ISSN: 1474-0346.
Interactive Visualisation for Decision Support and Evaluation of Robustness – In Theory and In Practice

I.S.J. Packham^a,*, M.Y. Rafiq^b, M.F. Borthwick^b and S.L. Denham^c

^a Lightwave Technologies Ltd, Innovation Centre, NovaUCD, University College Dublin, Belfield, Dublin 4, EIRE.
^b Department of Engineering, University of Plymouth, Drake’s Circus, Plymouth, PL4 8AA, UK.
^c Centre for Theoretical and Computational Neuroscience, University of Plymouth, Drake’s Circus, Plymouth, PL4 8AA, UK.
* Corresponding author: Tel: +353-1-716-3625, Fax: +353-1- , email: [email protected].
Abstract: An interactive visualisation system for engineering design incorporating a method to evaluate the robustness of solutions is described. The system uses genetic algorithms to generate a large number of alternative design solutions to the problem and an interface that supports multidimensional visualisation, allowing the designer to interact with the data. A clustering technique based on kernel density estimation is described that identifies clusters in terms of the design variables. The clustering technique, combined with ‘negative’ genetic algorithm search, is shown to successfully allow the user to evaluate the robustness of regions in continuous domains. The technique is illustrated on a continuous engineering design problem: rainfall runoff modelling. Most engineering design problems contain discrete and discontinuous variables, and so the approach needs modification to be successful on such problems. Visualisation and user interaction are shown to be useful on discrete problems, particularly when the user creates their own clusters in a multiobjective problem. Evaluation of robustness on such problems is often only possible due to the knowledge of an experienced engineer. A practical example from the design of reinforced concrete biaxial columns illustrates how the system promotes decision support in discrete domains; in particular, the implicit knowledge of the engineer, which is difficult to express in a model, can be used to make high-level design decisions. Thus knowledge discovery and evaluation of robustness are shown to be successfully achieved using visualisation and interaction in either continuous or discrete domains.
Keywords: Interactive Visualisation, Knowledge Discovery, Robustness Evaluation, Genetic Algorithms.
1. Introduction
Effective visualisation of search and solution spaces in multidimensional and multicriteria engineering design has had limited attention in the past. This paper reports on the theoretical and practical implementation of an interactive visualisation system that not only generates a large number of optimum design alternatives, using evolutionary computing, but also gives designers more confidence in their decision making while assessing the suitability and robustness of designs. As such the system supports engineering design using evolutionary computing as envisaged by Parmee et al. [1] and Rafiq et al. [2] by placing the emphasis on human computer interaction rather than using only the computer to drive the search.
The system uses genetic algorithms (GAs) [3,4] to generate solutions to the problem, but instead of merely returning a number of ‘optimal’ solutions, all the data generated by the algorithm is presented in a visual interface. Multidimensional visualisation and interaction techniques [5,6] are supported allowing the user or designer to conduct concentrated and focussed search in order to examine specific regions of interest within the search and solution spaces. This ability to generate a diverse set of pertinent design solutions greatly aids better understanding and hence the decision making process. A clustering mechanism is incorporated into the system which helps to identify regions of high fitness solutions within the search space. In addition the genetic algorithm is used in a novel way to assess the robustness of solutions using ‘negative’ search. In many engineering design situations ensuring the robustness of a design is an important part of the performance requirements [7]. Visualisation of the process has been given some attention [8], but more is needed. Here the GA is run in a ‘negative’ way inside a region of the search space in an attempt to find the worst solution.
The technique is shown to be successful in continuous domains and can improve understanding of problems such as rainfall-runoff modelling. In this example a simple computational model for the transformation of rainfall to streamflow is used to calibrate a small number of parameter values. The response surface of such problems is often multi-modal, non-smooth (even discontinuous) and the parameters exhibit varying degrees of interaction and sensitivity; a number of techniques have been developed for the automatic calibration of such models [9]. There is evidence to suggest that rather than
search for a global optimum set of calibrated parameters there may be several different sets with similar fitness, which Beven and Binley [10] refer to as the concept of equifinality. The visualisation system described in this paper allows the response surface and robustness of candidate parameter sets to be explored interactively, enabling the user to impart knowledge of the physical processes of a given river catchment to the model calibration procedure.
Most engineering design problems have discrete or discontinuous domains with multiple objectives. For these problems modifications to the clustering and robustness evaluation procedure were required. In this paper the design of biaxial columns using reinforced concrete [11] is used to illustrate the challenges encountered and the solutions found to overcome these problems. Even for these more complex problems the system’s ability to generate solutions in problematic regions, such as on the edge of a feasible constraint, enhances knowledge of the problem. The opportunity to interact with the data in a multiobjective environment and generate new solutions in pertinent regions identified by the engineer improves understanding and allows the engineer to choose between diverse solutions to the problem. Because of the discrete nature of the biaxial column design problem, parameters such as the diameter and position of a reinforcement bar do not offer the natural concept of ‘neighbouring’ solutions that a continuous clustering and robustness algorithm relies on. In such problems, however, robustness concepts are often learnt by experienced engineers, who evaluate robustness using external knowledge such as ease of manufacture and varying cost of products. It is shown that this system is ideal in these circumstances, where the ability to view many design solutions is invaluable in making high-level design decisions.
The paper is organised as follows. The next section provides a brief review of interactive visualisation techniques for engineering design and explains how the system described in this paper differs from other techniques. Section 3 then gives a full description of the system illustrated by some theoretical examples from the optimisation literature. Section 4 describes the rainfall-runoff application and how the system provides further understanding of the problem. Section 5 explains the biaxial column design problem and the modifications needed for the system to be applied in this domain. Examples of the diverse solutions found due to user interaction are given. The paper is concluded in Section 6 and areas for future work are discussed.
2. Review of Interactive Visualisation Techniques for Engineering Design using Evolutionary Computing
Many engineering design problems can be modelled to a certain extent using mathematical relationships. Any discrepancies between the mathematical model and the physical processes are known to experienced engineers, who will take them into account in the final design. Rather than attempting to automate the entire design or modelling process in a computer, many researchers have realised that a human is invaluable in the design loop, notwithstanding the power of computers to create a large number of designs much faster than a human can. In order to combine the strengths of the human and the computer, it has become increasingly obvious that visualisation and interaction in the search space of solutions generated by a computer would enhance design scenarios [1,8].
Wong & Bergeron [12] provide an overview of 30 years of scientific visualisation. In the early days of computing, researchers were forced to display data on paper or using simple displays. However, the groundwork for the best data analysis techniques grew out of this era. Exploratory Data Analysis, written by Tukey [13], introduced new ways of thinking about decoding information from data. This began with writing down numbers in such a way as to reveal the relationships between data (called stem and leaf displays, similar to histograms), then introduced scatterplot displays and regression lines to understand relationships between variables, and went on to alternative displays such as boxplots and the importance of residuals. As computers became faster, more data could be visualised and statistics computed more quickly, but since the 1970s only a few completely novel high dimensional visualisation techniques have been invented. The most intuitive and easy to use displays are the scatterplot matrix (sometimes known as the generalised draftsman’s display [14]) and parallel coordinates [15]. These techniques are described in more detail in the next section (see Figures 1,2). Variations on the scatterplot matrix allow multidimensional brushing [16], distortion and zooming [17,18], such as Hyperslice [19]. If the scatterplot matrix is not used, creators of visualisation techniques need to find a way of representing high dimensional data on a two dimensional display. This will necessarily lead to a distortion of some of the variables or occlusion of information; for example, the Hyperbox display [20] and Dimension Stacking [21] are attempts to incorporate many dimensions into a single display. The problem with these views is that they take some time to understand the information and require learning. Even after learning, a user could get lost in the
maze of information as they try to relate one high dimensional location with another. Another attempt to represent high dimensional data is abstract or iconic displays, which represent the value of a parameter by the size of an attribute on a picture, for example the Starglyph [22] and Chernoff faces; with a large number of variables or data points these displays become impractical. Parallel coordinates is the only visualisation technique in which it is theoretically possible to view all the information on a single display without distortion of the information. However, even parallel coordinates suffer from “darkening”, or too much information, if a large number of variables and data points are viewed in the same picture. In conclusion, parallel coordinates and scatterplot matrices are the most intuitive displays that do not distort the data and readily allow interaction by the user.
Therefore most interactive visualisation systems for engineering design use either scatterplot matrices or parallel coordinates, or a combination of both, to represent the data. One of the most comprehensive tools is the “Influence Explorer” [8], which evolved out of Robert Spence’s long experience in human computer interaction for engineering design. This technique combines parallel coordinates, histograms and brushing with colours, and allows multiobjective satisfaction. Colour is used to identify which solutions satisfy the performance criteria and which fail. Tolerances can be identified by changing the input parameters until the solutions change colour; in this way the robustness of solutions is evaluated. Ensuring the robustness of a design is an important part of the performance requirements [7,23], as a solution can rarely be manufactured to the exact specifications suggested. If a manufacturing parameter changes slightly, it is important that the new solution is not likely to fail. In the Influence Explorer, solutions are generated randomly in the entire search space; if a user zooms in on or inspects a particular region of the search space, more solutions are generated within this region. As an alternative to this procedure, many researchers have used evolutionary computing to generate optimal solutions, or at least ‘high performance’ regions of the search space, thus reducing the need to manually search the whole space of solutions.
A comprehensive review by Takagi [24] lists a number of approaches to and applications of “Interactive Evolutionary Computation”. One of the first preliminary design systems, TRADES (TRAnsmission DESigner) by Pham & Yang [25], incorporated a genetic algorithm (GA); the GA produces configurations for the user to evaluate, and individual designs can be viewed to make decisions about
further redesign, and other GA runs performed if required. Jo [26] discovered that adding human interaction to his evolutionary design system meant domain knowledge could be incorporated online; solutions can be independently visualized in a space layout problem and the user is allowed to modify individual elements of the design. The interaction of a user has also been considered in a multiobjective environment: Fonseca & Fleming [27] proposed a decision-maker (DM) that controls which objectives have more importance within a non-dominated set of solutions; they suggested the DM could be a human or an expert system. Horn [28] points out that there are three different approaches to decision making in multicriteria problems: make a multicriteria decision before search, make a decision after search, or integrate the search and decision making. The latter approach would appear to be the most powerful, incorporating iterative search and decision making.
Ian Parmee proposed an Interactive Evolutionary Design System (IEDS) based on a system of iterative redefinition of variable and objective space by a designer as search progresses [29]. These ideas come out of many years’ research in using evolutionary computing to aid engineering design, starting from general ideas to locate and analyze robust regions of the search space and using directed search to define feasible regions in multiobjective problems. These ideas resulted in the development of a number of modules that could be combined in an interactive evolutionary system, such as cluster-oriented genetic algorithms (COGAs), and including user preferences between objectives to direct co-evolutionary search (see Parmee et al. [1] for details and further references to individual techniques). The technique that is most closely related to the work presented in subsequent sections of this paper is the COGA module [30], which extracts regions of good solutions from a genetic algorithm run by filtering high performance solutions as the search progresses. The results can then be visualized by choosing any two variables or objectives and viewing them on a scatter diagram [29]. This representation is very informative, and the authors claim that visualizing the results of different filtering parameter settings can lead to the identification of mutually inclusive regions between multiple objectives. More recently, work on visualization of COGA data has included the development of a novel technique called parallel coordinate box plots [31], which uses the parallel coordinate technique of Inselberg & Dimsdale [15] to visualize many variables at once and compare the distribution of solutions between objectives using statistical analysis.
The work described in this paper begins from the same philosophy as COGA: allowing the user to generate data and interact with the data for further search. However, the way the system executes this philosophy differs in many ways. Firstly, all the data generated by the genetic algorithm is available immediately to the user, who can then either use the automatic clustering routine to identify interesting regions of the data or perform this step manually. The user is then given the opportunity to directly interact and create further data in those regions as required. If the user wishes to evaluate the robustness of a particular region of the search space, ‘negative’ GA search can be performed (see Section 3.5) that identifies the ‘worst case’ scenario within the region. Such capability allows a user to compare the ‘best’ and ‘worst’ within a region, thus allowing direct comparison between regions. In this case the GA is not being used to find a single, ‘optimal’ solution to the problem, but to suggest different designs and allow further interactive search for more designs. Use of the GA is also an improvement on Tweedie et al.’s Influence Explorer, as good solutions can be directly discovered and the likelihood of bad solutions within a region directly evaluated with the negative GA search. For a further review of interactive visualisation methods and systems for engineering design and evolutionary computing, see the thesis by Packham [32].
3. Description of System with Theoretical Examples
The design of the interface follows well-grounded guidelines by Shneiderman [5] and suggestions given by Spence [6] for interactive visualisation. To support engineering design within an evolutionary computing environment, the system was designed with the following features (as described in [33]):
• Fast and effective exploration of the search space
• Easy to use interface and high dimensional visualisation
• Identification of clusters using colour
• Allow interaction to search for further data and clusters
• Evaluation of robustness
3.1 Fast and Effective Exploration of the Search Space
Fast and effective exploration of the search space is achieved using short genetic algorithm (GA) runs and a mutation scheme that discourages the duplication of solutions. GAs are inspired by the theory of evolution, using selection, crossover and mutation to generate new data and quickly search for near-optimal solutions to a problem. The application of GAs to engineering design was popularised by Goldberg [9]. The system uses mostly standard simple GA operators and parameters, which can be modified by users who are familiar with them (see Table 1); the default mutation and crossover rates are taken from the GA for Matlab Toolbox [34]. The only departures from the usual simple GA parameters are the low number of generations and a mutation scheme that encourages diversity in such a short GA run. A high level of mutation is given to individuals that duplicate other members of the population at the chromosome level. This design ensures that a good variety of diverse solutions can be generated in a short time. Rather than leaving the GA to run for many generations searching for the ‘optimal’ solution, the philosophy of this system is to allow the user to choose where to concentrate search. Due to the lack of complexity of many modelling environments, where speed is more important than accuracy, there may not be an ‘optimal’ solution in a mathematical sense; in most design environments (particularly at the conceptual stage of engineering design), a number of high quality design choices is more valuable.
Table 1. Default system parameters for genetic algorithm

Number of Generations            20
Population Size                  100
Number of bits per variable      16
Selection Type                   Stochastic Universal Selection
Crossover Type                   Double Point
Crossover Rate                   0.7
Mutation Rate                    0.7 / {Chromosome Length}
Mutation Rate of Duplicates      5 / {Chromosome Length}
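The duplicate-penalising mutation of Section 3.1 is the system’s main departure from a standard simple GA. A minimal sketch of that scheme is given below, with default rates taken from Table 1. The original system was implemented in Matlab; this Python version is illustrative only, and the function name and list-of-bits chromosome representation are assumptions of the sketch.

```python
import random

def mutate_population(pop, base_rate=None, dup_rate=None):
    """Bit-flip mutation over a population of binary chromosomes.
    Chromosomes that duplicate an earlier member of the population receive
    a much higher mutation rate, discouraging duplicates and preserving
    diversity in a short GA run."""
    length = len(pop[0])
    # Defaults follow Table 1: 0.7/L for unique chromosomes, 5/L for duplicates.
    if base_rate is None:
        base_rate = 0.7 / length
    if dup_rate is None:
        dup_rate = 5.0 / length
    seen = set()
    mutated = []
    for chrom in pop:
        key = tuple(chrom)
        rate = dup_rate if key in seen else base_rate  # duplicates mutate harder
        seen.add(key)
        mutated.append([bit ^ 1 if random.random() < rate else bit
                        for bit in chrom])
    return mutated
```

With the default rates a duplicate is roughly seven times more likely to have any given bit flipped, which in practice scatters copies away from their originals.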
3.2 Easy to Use Interface and High Dimensional Visualisation
The graphical user interface of the system was designed to allow direct manipulation of the data and to let the user exploit the fast generation of data by running further GAs as required. The interface was created in Matlab with Ben Shneiderman’s mantra [5] in mind: overview first, zoom and filter, then details on demand. Figure 1 shows the visualisation system working on a mathematical function of four variables where the goal is to maximise a single objective. The data was generated after 20 generations of the GA. The information is viewed in the ‘Overview’ window (above left), which holds all the data the user desires to keep. The user can select a region of data (by simply drawing a box around the selected region using the mouse) and choose an action from the ‘Navigator’ window underneath, such as Run GA, Find Clusters, Zoom In and so on. Overall statistics of regions of the data are available, and clicking with the right mouse button over a data point gives access to the individual details of solutions.
Alternative views and new data can be seen in other ‘Moreview’ windows. The Overview window in Figure 1 shows the data using 2D scatter plots of the variables, with one enlarged view; in this view each variable is shown just once. The user can choose the order in which variables are viewed through the ‘AxesOrder’ menu; in particular, this can be used to choose which axes are viewed on the enlarged plot. The ‘Moreview2’ window in Figure 1 shows the same data in a fitness landscape view, so that the objective
values are shown on the z-axis (vertical). This view allows the user to compare the relative fitness of solutions and regions. However, the problem with both of these views is that only two or three dimensions can really be concentrated on at the same time, sometimes causing users to forget about the effect of interactions due to other variables.
Other views are available that give equal importance to all variables: the scatterplot matrix [14,16] and parallel coordinates [15] views shown in Figure 2. The scatterplot matrix is shown in the left hand figure; each variable is drawn against the other variables in a series of two dimensional plots. For example, the objective values are shown against all the other variables in the right-hand column as well as the bottom row. The parallel coordinate view, shown on the right hand side of Figure 2, depicts each variable as a vertical axis, and the data ‘point’ from the scatter view is shown as a line passing through each axis. In this view all the data can be viewed in one plot; again, the order of parameters and which parameters to view can be changed through the ‘AxesOrder’ menu.
Fig. 1.
The visualisation system.
Fig. 2.
The scatterplot matrix (left) and parallel coordinate (right) views.
3.3 Identifying Clusters using Colour
Colour strongly helps the perceptual understanding of the design space and enables linking between the many available views. Firstly, the fitness of a solution is represented by the intensity of colour of a point or line; intensity is controlled by the saturation and brightness attributes of colour, so a higher fitness is given a more intense and pure colour. The complete data set is shown in black-grey; higher fitness solutions are drawn in a darker and more intense black. In addition, the hue of colour is used to highlight important regions of the search space, in a similar way to brushing [16]. For example, in Figures 1 and 2 the clusters in the data are highlighted in a different colour hue; in the online version of these figures the main cluster is shown in blue, whilst the top of the peak is green.
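One plausible reading of this fitness-to-colour mapping is sketched below. The paper does not give an exact formula, so the scaling constants and the function name are assumptions of this illustration (in Python rather than the system’s Matlab).

```python
import colorsys

def fitness_to_rgb(fitness, f_min, f_max, hue=None):
    """Map a fitness value to an RGB colour in the spirit of Section 3.3:
    higher fitness gives a more intense, more saturated colour.
    hue=None gives the black-grey scale used for the complete data set;
    a numeric hue (0..1) highlights a cluster in that colour."""
    t = (fitness - f_min) / (f_max - f_min) if f_max > f_min else 1.0
    if hue is None:
        # Grey scale: higher fitness -> darker, more intense black.
        g = 0.85 * (1.0 - t)
        return (g, g, g)
    # Coloured cluster: higher fitness -> more saturated, purer colour.
    return colorsys.hsv_to_rgb(hue, 0.3 + 0.7 * t, 1.0)
```

The same mapping can be applied to points in the scatter views and to lines in the parallel coordinate view, which is what allows the views to be linked perceptually.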
Regions of interest can either be chosen manually by the user or through the in-built clustering facility that identifies regions of high quality designs. The clustering algorithm is based on kernel density estimation [35]. In this example the clustering is performed in variable space: a smooth density estimate of each variable is computed, and this information is aggregated to find the main partitions in the data. The partitions are defined in each variable, and the data falling between these values define the clusters in the data. The first cluster to be highlighted is the one that contains the fittest individual; this data is temporarily removed, and the second cluster is the one among the remaining data that contains the fittest individual, and so on. The data can also be viewed and clustered from the point of view of the principal or independent components, often revealing natural
partitions in the data; the clustering procedure in alternative coordinate systems is described in more detail in [33].
The definitions of clusters can be redefined by the designer, using domain knowledge, at any time and used to influence further GA runs as required. This flexibility means the user is in complete control of the exploratory search and decision making process.
3.4 Allow Interaction to Search for Further Data and Clusters
The user can search for further data and clusters using the ‘Run GA’ and ‘Find Clusters’ buttons on the Navigator window. The user can use the mouse to zoom in on a region and run a new search inside that region; alternatively, search can be undertaken inside a highlighted region. Search can also be performed ‘outside’ a region by penalising solutions that fall inside that region. This is a useful facility if an acceptable solution is not found or the designer wants to explore solutions outside the defined regions. Figure 3 gives an example of running a GA ‘outside’ the coloured regions defined by the system. The region is identified by the dashed boxes on the var1 / var2 and var3 / var4 plots. The resulting data is shown in Moreview3; another peak has been identified by the system, which can be seen as darker data in the upper range of all four variables. This data can be ‘saved’ to the Overview and further interaction performed. Figure 4 shows the result of adding the two data sets shown in Figure 3 and highlighting the new peak (shown as yellow in the online version of this paper).
Fig. 3.
More data generated by the GA ‘outside’ the box in Moreview3 (right).
Fig. 4.
Data sets combined, second peak highlighted.
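Searching ‘outside’ a region can be implemented by wrapping the fitness function so that any solution falling inside the region is heavily penalised, driving a maximising GA away from it. The sketch below illustrates the idea; the dictionary-of-bounds region representation, the function name and the penalty value are assumptions, not the system’s actual code.

```python
def outside_region_fitness(fitness_fn, region, penalty=1e6):
    """Wrap `fitness_fn` so that solutions falling inside `region` (a dict
    mapping variable name -> (lo, hi) bounds) receive a large penalty.
    A maximising GA run on the wrapped function therefore concentrates
    its search outside the region."""
    def wrapped(x):
        inside = all(lo <= x[var] <= hi for var, (lo, hi) in region.items())
        return fitness_fn(x) - penalty if inside else fitness_fn(x)
    return wrapped
```

A subtractive penalty (rather than a hard rejection) keeps every candidate evaluable, so standard selection operators need no modification.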
3.5 Evaluation of Robustness
A novel mechanism to evaluate the robustness of solutions using ‘negative’ search and redefine the regions to meet performance requirements was devised. Figure 5 shows the procedure. Initially the highlighted regions are redefined so that (say) 10% of the fittest solutions are kept (left). A ‘negative’ GA is then run inside these regions to look for the worst solution, in this case minimising instead of maximising the objective. It can be seen that a number of solutions within the highlighted clusters fall below the 10% value in the right-hand picture of Figure 5. The region can be further redefined and the negative GA used to ensure all solutions inside the region are above the desired fitness.
Fig. 5. Robustness evaluation: filtering (left) and after ‘negative’ GA search (right).
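The ‘negative’ search idea reduces to minimising the objective inside the region’s bounds and comparing the worst value found against the desired fitness threshold. In the sketch below a simple seeded random search stands in for the GA, so the driver and the function names are illustrative assumptions only.

```python
import random

def negative_search(fitness_fn, bounds, iters=2000, seed=0):
    """'Negative' search inside a region: look for the WORST solution by
    minimising the objective within `bounds` (a list of (lo, hi) pairs).
    A random search stands in here for the negative GA of Section 3.5."""
    rng = random.Random(seed)
    worst_x, worst_f = None, float("inf")
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = fitness_fn(x)
        if f < worst_f:
            worst_x, worst_f = x, f
    return worst_x, worst_f

def region_is_robust(fitness_fn, bounds, threshold, **kw):
    """A region passes if even its worst found solution stays above the
    desired fitness threshold; otherwise it should be redefined."""
    _, worst_f = negative_search(fitness_fn, bounds, **kw)
    return worst_f >= threshold
```

Comparing the ‘best’ (from normal GA search) and ‘worst’ (from negative search) values inside each candidate region gives the direct region-to-region comparison described in the text.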
This system and the method for assessing the robustness of solutions have some parallels to the excellent visualisation systems designed by Tweedie et al. [8] (called the Influence Explorer and Prosection Matrix). The advantage of the system described in this paper is that the robustness of solutions can be immediately validated using the negative GA search.
4. Continuous Design Application: Rainfall Runoff Model
4.1 Rainfall-runoff model
Practical use of the visualisation system is demonstrated by the calibration of a simple lumped, six-parameter rainfall-runoff model for the 24 km² Goss Moor catchment in SW England. The catchment is characterised by its relatively rapid response to storm events. The model is developed from Tsykin’s [36] time-series approach to incorporate two basic terms – slow response for baseflow and quick response for runoff:
R_i = R_{i-1} e^{-u} + v P_i^w P_{i-1}^x P_{i-2}^y P_{i-3}^z        (1)
where R is the stream flow, P is precipitation, i is time and u, v, w, x, y and z are the model parameters. The units of the daily mean river flow (R) have been converted from discharge rate (m³/s) to the equivalent daily runoff depth over the catchment area to be consistent with the rainfall units. The calibration is carried out for a selected year of observed precipitation and stream flow; the data is shown in Figure 6. The model fitness is measured using the efficiency r^2 according to the Nash-Sutcliffe formula [37], where r^2 = 1 implies a perfect match between the observed and predicted flows:
r^2 = (F_0^2 - F^2) / F_0^2        (2)

where

F_0^2 = \sum_{i=1}^{N} (R_{obs,i} - \bar{R}_{obs})^2   and   F^2 = \sum_{i=1}^{N} (R_{obs,i} - R_{pred,i})^2

and where: F_0^2 = variance of the observed river flow
F^2 = the sum of squared differences between predicted and observed river flow
R_obs = the observed daily river flow (mm over catchment area)
R_pred = predicted daily river flow (mm over catchment area)
N = number of days (365), i = time (in days), r^2 = efficiency.
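The model of Eq. (1) and the efficiency of Eq. (2) can be sketched in a few lines of Python. The warm-up handling (starting at day 3 so that all lagged precipitation terms exist) is an assumption of this sketch; the paper does not state how the first days are treated.

```python
import math

def simulate_runoff(P, u, v, w, x, y, z, R0=0.0):
    """Simulate Eq. (1): R_i = R_{i-1} e^{-u} + v P_i^w P_{i-1}^x P_{i-2}^y P_{i-3}^z.
    P is the daily precipitation series; the simulation starts at day 3 so
    that every lagged term exists (a warm-up assumption of this sketch)."""
    flows, prev = [], R0
    for i in range(3, len(P)):
        rain = v * P[i] ** w * P[i - 1] ** x * P[i - 2] ** y * P[i - 3] ** z
        prev = prev * math.exp(-u) + rain  # decayed baseflow + quick runoff
        flows.append(prev)
    return flows

def nash_sutcliffe(obs, pred):
    """Nash-Sutcliffe efficiency, Eq. (2): r^2 = (F0^2 - F^2) / F0^2,
    with F0^2 the variance of the observations about their mean and
    F^2 the sum of squared observed-predicted differences."""
    mean_obs = sum(obs) / len(obs)
    f0_sq = sum((o - mean_obs) ** 2 for o in obs)
    f_sq = sum((o - p) ** 2 for o, p in zip(obs, pred))
    return (f0_sq - f_sq) / f0_sq
```

In the calibration, the GA treats (u, v, w, x, y, z) as the search variables and `nash_sutcliffe` of the simulated against the observed flows as the fitness.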
[Figure: time series of daily river flow (outflow over catchment area) and rainfall (over catchment area), in mm/day, for Goss Moor from Sep-93 to Aug-94.]

Fig. 6. Goss Moor riverflow and rainfall data.
4.2 Simulations
Figure 7 shows the visualisation after the GA has been run for only 20 generations. The result reveals the flatness of the fitness landscape (or response surface), indicating that many different parameter settings return similar results, consistent with the concept of equifinality [10]. The user can apply a clustering tool to identify regions containing solutions of high fitness (r^2). A slight improvement in the best fitness can be obtained by running a further 20-generation GA inside these regions. In addition, the robustness of the region can be evaluated by running a ‘negative’ GA inside the region. The resulting visualisation shown in Figure 8 reveals that there are low fitness solutions in the vicinity of good solutions. Inspection of the parameter values indicates the insignificance of x, y and z (i.e. the time-lagged rainfall terms in the model).
Fig. 7. Visualisation of parameters to response surface (efficiency r^2).
u=.15; v=.30; w=.57; x=.011; y=.005; z=.039: r^2 = .72
u=.22; v=.19; w=.90; x=.005; y=.006; z=.037: r^2 = .77
Fig. 8. Interaction used to find the best and worst solutions in alternative regions. The parameter and fitness (r^2) values of the best solution found in each region are given.
The relatively low values of efficiency found by the GA and the lack of robustness of those solutions indicate that the model does not fully describe the physical system. Many different parameter sets can be found that return similar fitness values. The parameter sets shown in Figure 8 suggest that the terms with exponents y and z (and possibly x) could be dropped from the model. Further simulations of the four-parameter model are shown in Figures 9 and 10; a better value of efficiency is found, as would be expected. After further interaction, the user has concentrated on two regions of the search space that indicate particularly fit and robust regions. To enable a fair comparison, the variable ranges defining the clusters were adjusted slightly so that the regions were about the same size. Strict filtering (less than 1% of the original) and negative GA search in both regions revealed that cluster 1 contains worse solutions than cluster 2 (Figure 9). In the online version of this article, cluster 1 is red and cluster 2 is green; thus the green region (cluster 2) is more robust than the red. Here the system has provided a piece of knowledge that could only be assumed from previous visualisations and would be very time consuming to confirm by iterative search.
The difference in robustness is confirmed in Fig. 10; however, the robustness of the green region is still questionable: a change of 0.02 in any parameter will result in a loss in efficiency. Fig. 9 also shows the individual details of the best solutions. It can be clearly seen which parts of the data have not been well modelled by any of the solutions found: at around 50-80, 230, 270-300 and 340+ days (November, end of April, June and August). This implies the reduced model is good at modelling heavy rainfall and the fast runoff behaviour that is usual in this catchment, but it is not suitable during unusual or changeable weather patterns, such as times of year when rainfall is slow to pass through the catchment area or the ground is already very dry, causing faster baseflow.
Fig. 9. Checking robustness by equalising the volume of the regions, filtering and running a negative GA. The analysis suggests that Cluster 2 has a better Min Fit and so is more robust.
Fig. 10. Two alternative solutions and regions with similar fitness; the relative robustness is still not good: a change in the variables of ~0.02 will 'fall off' either peak.
4.3 Conclusions of the rainfall-runoff application
Further work could be undertaken with the system. Because the objective function can easily be changed in MATLAB, it would be instructive to find the optimal number of parameters needed to model the data. The model actually changes when parameters are added or removed (because different raw data are being used), but this is a valid form of testing the model. Traditional engineering sensitivity checking involves fixing all parameters but one and optimising each in turn until the best is found. This is unnecessary with the GA and the system as described in the previous sub-section, where wide initial limits were set and interesting regions were then 'homed in' on. Comparing the results of the two strategies would be instructive for this problem and for general engineering design practice.
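The traditional one-parameter-at-a-time sensitivity procedure mentioned above can be sketched as a coordinate-wise search (the two-parameter efficiency function below is hypothetical):

```python
def one_at_a_time(objective, start, candidate_values, sweeps=3):
    """Classical sensitivity checking: fix all parameters but one and
    optimise each in turn, repeating until no further improvement."""
    best = list(start)
    for _ in range(sweeps):
        for i in range(len(best)):
            for v in candidate_values[i]:
                trial = best[:]
                trial[i] = v
                if objective(trial) > objective(best):
                    best = trial
    return best

# Hypothetical two-parameter model efficiency, peaked at (0.2, 0.9).
eff = lambda p: 1.0 - (p[0] - 0.2) ** 2 - (p[1] - 0.9) ** 2
grid = [[i / 10 for i in range(11)]] * 2
print(one_at_a_time(eff, [0.0, 0.0], grid))   # [0.2, 0.9]
```

The GA replaces this nested loop with a single population-based search over all parameters at once, which is why the coordinate-wise procedure is unnecessary within the system.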
The system partially answered the robustness question through the visualisations and negative GA experiments. The conclusions were that changes in parameter settings will significantly affect some, if not all, solutions to the problem. From these results, the lack of robustness of the model and its sensitivity to the variables indicate that a parameter set calibrated on one year would not correctly predict the flow for another year. Normally the split-records approach or another technique [38] is used to determine the optimal number of parameters to achieve the most consistent results; this study shows that visualisation could be used to speed up these validation processes.
The usefulness of the interactive visualisation system in helping to improve understanding of the rainfall-runoff calibration process has been demonstrated. This section indicates the potential for applying the system to the calibration of other parameter-based models and to most continuous engineering design problems.
5. Discrete Design Application: Reinforced Biaxial Column Design
The previous section gave an example of the theoretical uses of the system applied to a well-behaved mathematical problem. This section describes the problems that needed to be overcome when the system was applied to a civil engineering design task with discrete variables and multiple objectives and constraints.
5.1 Initial Biaxial Column Design Model
The design of reinforced concrete biaxial columns is used as an example. The details of the problem and the genetic algorithm (GA) implementation are given in [11]. The problem involves finding the optimum positions for reinforcement steel bars inside a reinforced concrete column (Figure 11). Input parameters include the width and breadth of the column, the applied axial load (N) and the bending moments about the x and y axes (MX and MY). The fixed design variables for the initial biaxial column design problem are shown in Table 2. The decision variables are the x and y positions and diameter d of the reinforcement bars in the column. The main outputs or objectives of the problem are minimising the area of the
reinforcement steel used (%As) (and thus cost) while maximising the resisting capacity of the column for axial load and biaxial bending. To ensure the columns are safe, the Bresler check [39] is used such that the load contour equation LC is satisfied:

$L_C = \left(\frac{M_x}{M_{ux}}\right)^{\alpha} + \left(\frac{M_y}{M_{uy}}\right)^{\alpha} \le 1$, where $\alpha = \frac{2}{3} + \frac{5N}{3N_{uz}}$.
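A minimal sketch of this check (the capacities Mux, Muy and Nuz would come from the capacity-checking model of [11]; the numerical values below are made up for illustration):

```python
def load_contour(mx, my, mux, muy, n, nuz):
    """Bresler load-contour value LC; the column is safe when LC <= 1.
    alpha depends on the ratio of applied axial load to axial capacity."""
    alpha = 2.0 / 3.0 + 5.0 * n / (3.0 * nuz)
    return (mx / mux) ** alpha + (my / muy) ** alpha

# Illustrative (made-up) capacities: applied moments at half capacity.
lc = load_contour(mx=450, my=750, mux=900, muy=1500, n=4000, nuz=12000)
print(lc <= 1.0)   # True: this column satisfies the Bresler check
```

In the GA the check acts as a constraint: any candidate column with LC greater than 1 is unsafe and must be rejected or penalised.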
[Figure: (a) Inputs: axial load N applied at eccentricities ex and ey at angle β, giving MX = N·ex and MY = N·ey. (b) Outputs: capacities MUX, MUY and NUZ, and steel area %As.]
Fig. 11. Design of a reinforced concrete biaxial column.
Table 2: Fixed input parameters used in the first biaxial column design problem.

Axial load (kN): 4000
Moment in x (kNm): 450
Moment in y (kNm): 750
Breadth (mm): 750
Depth (mm): 500
5.2 Initial Simulations
The main issue for the visualisation system is that the decision variables (position and diameter of the bars) take on discrete values. In this version of the problem a number of bars could be placed anywhere inside the column. A visualisation of the data generated by the GA is shown in Figure 12. The parameters are shown in the 'Axes Order' dialog to the right. Here the objectives are shown before the variables. On the enlarged plot the capacities CapX and CapY are shown (MUX and MUY in Figure 11). The actual fitness of solutions is to maximise the load contour without exceeding the constraint LC=1. Some of the decision variables are shown on the bottom three right-hand plots of the Overview window (Figure 12). For this problem the column is symmetrical, so a quarter section is used to represent a design and hence the
diameter and x and y positions of the bars are given for the quarter section. It can be seen that the discrete nature of these variables makes understanding what constitutes a 'good' design very difficult. Additionally, the clustering algorithm returns very small regions (sometimes single values) because of this discreteness, so for this problem evaluation of robustness using the negative GA is impractical.
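Why density-based clustering degenerates on discrete variables can be illustrated with a simple Gaussian kernel density estimate (the system's clustering is KDE-based [35]; this sketch is illustrative, not the system's actual algorithm):

```python
import math

def kde(samples, x, bandwidth):
    """Gaussian kernel density estimate at point x."""
    const = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    return const * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                       for s in samples)

# Bar diameters only take a few discrete values (mm).
diameters = [12, 16, 16, 20, 20, 20, 25, 32, 40]
bw = 0.5   # a bandwidth tuned for continuous data

# Density is high exactly at the discrete values and near zero between
# them, so any density-based cluster collapses to isolated single values.
print(kde(diameters, 20.0, bw) > 10 * kde(diameters, 18.0, bw))   # True
```

With continuous variables the density between nearby samples stays non-zero, so the estimated high-density regions have usable extent; with discrete variables they shrink to spikes.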
Fig. 12. Visualisation of parameters for biaxial column problem.
However, interesting information can still be extracted from the system by allowing interaction in objective space and supporting direct access to the design solutions. The system was modified so that clustering could be performed in objective space, either automatically or manually by the user. A GA run can be performed 'inside' such a region by penalising any solution that falls outside that region. To aid the interaction and knowledge discovery process, details of individual solutions were made available when the user clicked near a solution with the right mouse button.
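The penalisation of solutions falling outside a user-selected objective-space region can be sketched as a fitness wrapper (the function names and the rectangular region below are illustrative assumptions, not the system's actual code):

```python
def region_fitness(objectives, region, base_fitness, penalty=-1e9):
    """Penalise any solution whose objective values fall outside the
    user-selected rectangular region of objective space."""
    def fitness(x):
        obj = objectives(x)
        for value, (lo, hi) in zip(obj, region):
            if not (lo <= value <= hi):
                return penalty      # outside the region: heavily penalised
        return base_fitness(obj)
    return fitness

# Hypothetical two-objective problem: minimise steel area, keep LC near 1.
objectives = lambda x: (x[0], x[1])            # (area, LC) directly, for demo
region = [(0.0, 4.0), (0.9, 1.0)]              # low area, LC just under 1
fit = region_fitness(objectives, region, base_fitness=lambda obj: -obj[0])
print(fit((2.0, 0.95)))   # inside the region: fitness = -area = -2.0
print(fit((2.0, 1.20)))   # outside (LC too high): penalised
```

A GA maximising this wrapped fitness is effectively confined to the highlighted region, which is how new designs are generated 'inside' a user-drawn cluster.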
In Figure 13 the user has highlighted the pertinent part of objective space: at LC≈1 and minimising the area of reinforcement used. After running a GA inside this region a number of new designs are found (Moreview2). The low number of generations used and the complexity of the problem mean that suboptimal solutions are returned. However, the diversity of solutions allows the user to employ domain knowledge to choose between designs using visualisation and the interactive power of the system.
Fig. 13. Result after interaction in objective space (quarter sections shown).
Looking at the designs shown at the bottom of Figure 13, a knowledgeable engineer will know how to rearrange the bar details for optimal performance, for example by moving larger bars into the corners or moving bars from the inside to the edges of the column. Many different configurations are suggested by these options. For a novice engineer, interaction with the displays shown in Figures 12 and 13 encourages understanding of the design problem and the extent of the solution space.
5.3 The Need for Another Biaxial Column Design
In theory the columns found in the initial biaxial column experiment (Section 5.2) are the most efficient and will withstand the applied load with a minimal amount of material used. But in practice, column design experts will ensure safety by simplifying the manufacturing process (called bar detailing), using one bar size and reducing the possible choices for placing the bars. The default design is to place reinforcement bars around the edge of the column, including the sides, to stop it 'exploding' due to a large axial load.
These safety factors on column design are highly recommended by practitioners and are common practice in the UK and elsewhere, so the constraints were coded directly into the objective function.
Thus domain knowledge was increased and the design options decreased in the new problem. However, the designer still has a number of decisions to make due to the "buildability" of the column. The columns are built using a form and the bars are held in place by shear links. The arrangement of the bars has an impact on the ease of detailing the column and keeping the bars in place with the links. Optimal arrangements are difficult to describe and depend on the size and number of bars present in the column, so the more design choices that can be generated, the more likely an experienced engineer will be able to choose a safe and easy-to-build column.
The axial load N and design moments Mx and My are fixed in this model, but the column depth (h) and breadth (b) are allowed to take certain discrete values. The other variable inputs are the number of reinforcement bars (whose positions are determined by the depth and breadth of the column) and the diameter of all bars (12, 16, 20, 25, 32 or 40mm). As well as the objectives listed for the first biaxial column design, the cost of making the column is also an objective. The cost is a sum of the cost of concrete, steel and form used to build the column. The required links between reinforcement bars are also returned as dependent variables and are used in the calculation of cost. Fixed input parameters for the column design problem discussed in this section are given in Table 3.
Table 3. Fixed input parameters used in the second biaxial column design problem.

Axial load (kN): 5000
Moment in x (kNm): 2100
Moment in y (kNm): 1000
Cost of concrete: 60
Cost of steel: 750
Cost of formwork: 20
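As a rough illustration of the cost objective, the sketch below combines the Table 3 rates; the quantity formulas and units (concrete per m3, steel per tonne, formwork per m2, a fixed column height) are assumptions for illustration, not the paper's actual cost model:

```python
import math

# Unit rates from Table 3 (the units themselves are ASSUMED here).
COST_CONCRETE = 60.0    # per m^3
COST_STEEL = 750.0      # per tonne
COST_FORMWORK = 20.0    # per m^2
STEEL_DENSITY = 7.85    # tonnes per m^3

def column_cost(breadth_m, depth_m, height_m, n_bars, bar_dia_m):
    """Total cost = concrete + reinforcement steel + formwork (sketch)."""
    steel_volume = n_bars * math.pi * (bar_dia_m / 2) ** 2 * height_m
    concrete_volume = breadth_m * depth_m * height_m - steel_volume
    form_area = 2 * (breadth_m + depth_m) * height_m   # the four sides
    return (COST_CONCRETE * concrete_volume
            + COST_STEEL * STEEL_DENSITY * steel_volume
            + COST_FORMWORK * form_area)

# 750x500mm column, 3m high, eight 32mm bars.
print(round(column_cost(0.75, 0.50, 3.0, 8, 0.032), 1))
```

Because the steel rate per unit volume dwarfs the concrete rate, the trade-off the GA explores is between larger, mostly-concrete columns and smaller, heavily-reinforced ones.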
5.4 Optimising Column Cost and Evaluating Feasibility Robustness
In this formulation the main objective is to minimise the cost of building the column. The cost is a function of the cost of the concrete, reinforcement steel and formwork. Because the column can have any size, cost is a fairer way to compare columns. Obviously small columns will have a very small cost but they
violate the load contour equation constraint. There are alternative ways of tackling this problem: either constrain the search space in the system and try to force the GA into finding feasible solutions (the strategy used in Section 5.2), or change the objective function directly so that infeasible solutions are given low fitness. The second option is easier to visualise because good solutions are at the 'top' of the design space surface and the clustering algorithm will return the best clusters taking the constraint into account.
Figure 14 shows the result of optimising the cost of the column while penalising infeasible solutions (LCeqn=1.02 is taken as just feasible); the penalised solutions have low fitness and so are of light colour. Clustering in variable space reveals that the 'best' solutions are those of average size with a few large reinforcement bars (note that the complete column is shown in the individual view). This reflects a trade-off between the amounts of concrete and steel used, which both contribute to the cost of the column; however, the fact that all the highlighted designs use 40mm bars is also an anomaly of the clustering algorithm working in discrete space. The algorithm splits the designs by bar diameter, finds the best solution (in this case a column with 40mm bars), and thus returns a cluster containing only 40mm bars. An engineer may have reasons for discounting the solutions with fewer, larger bars, especially if a certain number of bars is needed to stop the column exploding.
Figure 15 shows the result of undertaking a similar set of actions in this problem to those that resulted in Figure 13. GA search 'inside' the pertinent region of the search space, a cluster created in objective space by the engineer, shows the variety of solutions that could be formed to make 'good' designs (Figure 15); the designs have been colour coded according to the reinforcement bar diameter.
Fig. 14: Optimising column cost, but penalising infeasible solutions (with LCeqn>1.02). Clustering in variable space; the cheapest feasible solution is shown.
Fig. 15. GA search in the required region of objective space. Solutions are coloured according to reinforcement bar size (see second plot down on right hand side).
Analysing the robustness of solutions is made more complex by the inclusion of the constraint. As well as evaluating the sensitivity of variables due to changes in the cost of the solution, it is also necessary to ensure that changes in variables do not cause the solution to become infeasible. This form of investigation is known as "feasibility robustness" [7,23]. The system can be used to assess robustness in the usual way using the filtering mechanism. Figure 16 shows the result of filtering the clusters found in variable space, shown in Figure 14; very strict filtering is used (less than 1% of the fittest solutions are kept). A small number of solutions remain in the clusters, all of fairly low cost; however, some of the solutions are infeasible. In fact a slight change in the column size of a near-optimal solution (Figure 16a) has resulted in an infeasible solution (Figure 16b); here the full column is displayed. Looking at the constraint boundary in this figure shows that there are many solutions between the 'neighbouring' solutions of different column sizes, so neighbouring solutions in variable space do not map to neighbouring solutions in objective space (as they generally do in continuous problems). It is easier to analyse the robustness of solutions for this problem than for the first biaxial column design problem, but the same difficulties with discrete inputs still arise.
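The strict filtering step, followed by separating the surviving designs by feasibility, can be sketched as follows (the data are hypothetical; the 1.02 feasibility limit follows Section 5.4):

```python
import random

def strict_filter(solutions, fitness_key, keep_fraction=0.01):
    """Keep only the top fraction of solutions by fitness (strict filtering)."""
    ranked = sorted(solutions, key=fitness_key, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]

def split_by_feasibility(solutions, lc_key, limit=1.02):
    """Separate the filtered set into feasible and infeasible designs."""
    feasible = [s for s in solutions if lc_key(s) <= limit]
    return feasible, [s for s in solutions if lc_key(s) > limit]

# Hypothetical (cost, LC) records for 500 candidate columns.
random.seed(0)
sols = [(random.uniform(150, 400), random.uniform(0.8, 1.3))
        for _ in range(500)]
best = strict_filter(sols, fitness_key=lambda s: -s[0])   # fittest = cheapest
feasible, infeasible = split_by_feasibility(best, lc_key=lambda s: s[1])
print(len(best), len(feasible) + len(infeasible))   # 5 5
```

Any cluster whose filtered survivors include infeasible designs fails the feasibility-robustness test, which is exactly the situation shown in Figure 16.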
(a) Size=1250x825mm, LCeqn=0.999, Cost=217
(b) Size=1200x800mm, LCeqn=1.05, Cost=208
Fig. 16. Filtering of clusters identified in variable space (Figure 14) to assess feasibility robustness. The pertinent objectives are displayed in the main plot. A feasible solution is shown in (a), but a neighbouring solution in variable space is infeasible (b).
The visualisation system was shown to column design experts, who stated that rather than performing robustness evaluation in variable space, experienced engineers use knowledge of the overall design situation to make decisions, sometimes on site. It was suggested that different designs could be discovered if the cost of the materials used to build the column were to vary. For example, in some countries steel is very difficult to obtain and thus its relative cost is much higher. This scenario can be simulated by halving the cost of concrete and leaving all the other input parameters unchanged. Figure 17 shows a similar trade-off picture to that given for the original scenario in Figure 16. The cost of all columns is obviously reduced, but optimal columns (in terms of cost) are a lot larger, with more concrete used to fulfil the design constraint (compare the sizes of the best columns with 32mm bars, details shown to the left of the figures). However, the best columns in terms of cost are again those containing a small number of 40mm bars. The amount of steel used is relatively low for these designs, but the larger bars have just enough moment effect in the corner of the column. Another reason why the best designs contain large bars may be the representation of the column in the GA; the depth of the column can be a certain fraction of its breadth, so there are many more small column sizes available than large sizes (see the depth-col against bread-col trade-off in the figures). In this scenario, where larger columns with less steel are theoretically optimal, there are fewer design choices available. But for smaller columns with fewer (larger) bars there are more design choices, so these are more likely to be found by the GA. An alternative representation of the problem is advised for future investigation.
Conversely, if the cost of concrete is increased relative to the original parameters, much smaller columns with more steel reinforcement are optimal (Figure 18). There is a complete absence of columns with 20mm bars among these good designs because there is not enough room in the small columns to contain the required number of small bars. In this case the columns with 32mm bars are optimal in terms of cost; they beat the columns containing 40mm bars because a change in the number of bars causes less change in fitness, so they have more chance of being an optimal design. However, there may be other reasons for all these effects, including convergence to sub-optimal designs by the GA.
[Figure detail: best column with 32mm bars, Size=1400x925mm (1.295m2).]
Fig. 17. Changing design scenario: concrete cost reduced from 60 to 30. Bigger columns result, with less reinforcement.
[Figure detail: best column with 32mm bars, Size=950x750mm (0.7125m2).]
Fig. 18. Changing design scenario: concrete cost increased to 120. Smaller columns with more reinforcement result; no column with 20mm bars is available at all.
5.5 Conclusions to the Second Biaxial Column Design Problem
The second biaxial column design contained more domain knowledge and returned solutions that could be used in practical situations. Many design options could be obtained by concentrating search in the desired region of objective space, allowing the engineer to make choices between designs based on feasibility, cost and more subjective constraints such as "buildability" and ease of manufacture. The problem was better suited to theoretical evaluation of objective robustness due to the smaller number of decision variables and a more continuous search space. Some conclusions on the robustness of relative bar and column sizes could be drawn from the analysis. Optimising column cost whilst penalising solutions that violate the design constraint demonstrated that the system could be used to evaluate the feasibility robustness of solutions. It was shown that changes in the discrete variables caused relatively large changes in objective space, so that designs on the edge of the constraint boundary easily become infeasible. Further analysis could be undertaken to assess the robustness of feasible designs away from the constraint boundary. However, it was concluded that the representation of solutions was still too coarse-grained to enable a realistic analysis of robustness for this problem.
In reality columns are over-designed to ensure that the specifications and safety requirements are met, so for individual designs the robustness issue has already been satisfied. For this problem, engineers evaluate robustness using their intuition and experience; this may be true in many engineering problems, particularly those with discrete variables. Nevertheless, conclusions on the behaviour and robustness of the objective function itself can still be drawn by assessing the relationship between neighbouring solutions. It was suggested that alternative design scenarios could be demonstrated with the system; further simulation confirmed the varying emphasis on column sizes and the amount of reinforcement needed to satisfy different environmental and financial conditions. From discussions with experts it is clear that engineers use their knowledge and experience in the decision-making process, supported by the visual information rather than driven by it.
For further work and results on a representation of the column more in line with current practice, see [40].
6 Conclusions and Further Work
The power of the genetic algorithm to supply a vast number of design alternatives was fully demonstrated by the system; a theoretical and a practical example have been given showing how the system promotes knowledge discovery and encourages decision-making. By interacting with the data, many near-optimal solutions with very different attributes can be assessed, allowing engineers to make their own choices between designs based on their knowledge and experience. This capability is particularly useful in complex design scenarios that are difficult to model completely, as is the case in many engineering tasks [41].
The procedure to evaluate the robustness of solutions was shown to be effective in continuous design domains, but more work is needed to investigate clustering and evaluation of robustness in discrete domains. However, the ability to generate data at the boundary of constraints in objective space supplies an alternative means of providing feasible and robust solutions. The system supports an experienced engineer's intuitive understanding of robustness by providing a large variety of alternative solutions and visualisations that explain the relationships between them. Outside influences that cannot be modelled in the design problem can therefore be included in the design choice.
Such a visualisation and interaction system is essential to exploit the exploratory power of evolutionary computing for both optimisation and robustness evaluation. The tool can be used for knowledge discovery using explicit evaluation of robustness (possibly as a teaching tool) and for decision support where experienced engineers choose between designs using personal and task specific knowledge. Clustering in natural coordinate systems such as the principal components of the data is also under investigation [32,33].
References
[1] Parmee I.C., Cvetković D., Watson A.H. & Bonham C.R. Multiobjective Satisfaction within an Interactive Evolutionary Design Environment, Evolutionary Computation, 2000. 8(2): p.197-222.
[2] Rafiq M.Y., Mathews J.D. & Jagodzinski P. "Interactive Role of the Human Computer Interaction in Design", 8th Int. Workshop of the European Group for Structural Engineering Applications of Artificial Intelligence (EG-SEA-AI), Loughborough, UK, 20-22 July 2001.
[3] Holland J.H. Adaptation in Natural and Artificial Systems, Ann Arbor: The University of Michigan Press, 1975.
[4] Goldberg D.E. Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[5] Shneiderman B. Designing the User Interface, Reading, Massachusetts: Addison-Wesley, 1998.
[6] Spence R. Information Visualization, Harlow, England: Addison-Wesley, 2001. See also "The Acquisition of Insight", Imperial College, London: http://www.ee.ic.ac.uk/research/information/www/Bobs.html.
[7] Du X. & Chen W. Towards a Better Understanding of Modeling Feasibility Robustness in Engineering Design, Transactions of the ASME, Journal of Mechanical Design, 2000. 122(4): p.385-394.
[8] Tweedie L., Spence R., Dawkes H. & Su H. Externalizing Abstract Mathematical Models, Proceedings of CHI'96, ACM, 1996. p.406-412.
[9] Sorooshian S. & Gupta V.K. Model calibration. In Singh V.P. (ed.) Computer Models of Watershed Hydrology, Water Resource Publications, CO, 1995. p.23-68.
[10] Beven K.J. & Binley A.M. The future of distributed models – model calibration and uncertainty prediction. Hydrol. Processes, 1992. 6(3): p.279-298.
[11] Rafiq M.Y. & Southcombe C. Genetic algorithms in optimal design and detailing of reinforced concrete biaxial columns supported by a declarative approach for capacity checking, International Journal of Computers and Structures, 1998. 69: p.443-457.
[12] Wong P.C. & Bergeron R.D. "30 Years of Multidimensional Multivariate Visualization", in Nielson G.M., Hagen H. & Müller H. (eds.), Scientific Visualization: Overviews, Methodologies, Techniques, Los Alamitos, California: IEEE Computer Society Press, 1997. Ch. 1, p.3-33.
[13] Tukey J.W. Exploratory Data Analysis, Reading, Massachusetts: Addison-Wesley, 1977.
[14] Chambers J.M., Cleveland W.S., Kleiner B. & Tukey P.A. Graphical Methods for Data Analysis, New York: Chapman & Hall, 1983.
[15] Inselberg A. & Dimsdale B. Multi-dimensional Lines I: Representation, and Multi-dimensional Lines II: Proximity and Applications, SIAM J. Appl. Math., 1994. 54(2): p.559-596.
[16] Becker R.A. & Cleveland W.S. Brushing Scatterplots, Technometrics, 1987. 29(2): p.127-142.
[17] Leung Y.K. & Apperley M.D. "A Review and Taxonomy of Distortion-Orientation Presentation Techniques", ACM Transactions on Computer-Human Interaction, 1994. 1(2): p.126-160.
[18] Carpendale M.S.T., Cowperthwaite D.J. & Fracchia F.D. "Extending Distortion Viewing from 2D to 3D", IEEE Computer Graphics and Applications, 1997. 17(4): p.42-51.
[19] van Wijk J.J. & van Liere R. "HyperSlice: Visualization of Scalar Functions of Many Variables", in Nielson G.M. & Bergeron R.D. (eds.), Proceedings of IEEE Visualization '93, Los Alamitos, California: IEEE Computer Society Press, 1993. p.119-125.
[20] Alpern B. & Carter L. "Hyperbox", in Nielson G.M. & Rosenblum L. (eds.), Proceedings of IEEE Visualization '91, San Diego, California, October 1991. p.133-139.
[21] Feiner S. & Beshers C. "Worlds within Worlds: Metaphors for Exploring n-Dimensional Virtual Worlds", Proceedings of ACM Symposium on User Interface Software and Technology (UIST '90), Snowbird, UT, 3-5 October 1990. p.76-83.
[22] Ward M.O. "XmdvTool: Integrating Multiple Methods for Visualizing Multivariate Data", in Bergeron R.D. & Kaufman A.E. (eds.), Proceedings of IEEE Visualization '94, Los Alamitos, California: IEEE Computer Society Press, 1994. p.326-336.
[23] Parkinson A., Sorenson C. & Pourhassen N. "A General Approach for Robust Optimal Design", Transactions of the ASME, Journal of Mechanical Design, 1993. 115(1): p.74-80.
[24] Takagi H. "Interactive Evolutionary Computation: Fusion of the Capabilities of EC Optimization and Human Evaluation", Proceedings of the IEEE, 2001. 89(9): p.1275-1296.
[25] Pham D.T. & Yang Y. "Optimisation of Multi-modal Discrete Functions using Genetic Algorithms", Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, 1993. 207(1): p.53-59.
[26] Jo J. "Interactive Evolutionary Design System: Process and Knowledge Representation", in Gero J.S. & Maher M.L. (eds.), Fourth International Round-table Conference on Computational Models of Creative Design, Heron Island, Australia, December 1998. University of Sydney, 1999. p.215-224.
[27] Fonseca C.M. & Fleming P.J. "An Overview of Evolutionary Algorithms in Multiobjective Optimization", Evolutionary Computation, 1995. 3(1): p.1-16.
[28] Horn J. "Multicriteria Decision Making and Evolutionary Computation", in Bäck T., Fogel D.B. & Michalewicz Z. (eds.), Handbook of Evolutionary Computation, Institute of Physics Publishing, Bristol, UK, 1997.
[29] Parmee I.C. & Bonham C.R. "Supporting Innovative and Creative Design using Interactive Designer/Evolutionary Computing Strategies", in Gero J.S. & Maher M.L. (eds.), Fourth International Round-table Conference on Computational Models of Creative Design, Heron Island, Australia, December 1998. University of Sydney, 1999. p.187-214.
[30] Parmee I.C. "Cluster-Oriented Genetic Algorithms (COGAs) for the Identification of High-Performance Regions of Design Spaces", 1st International Conference on Evolutionary Computation and its Applications (EvCA96), Moscow, 24-27 June 1996. p.66-75.
[31] Abraham J.A.R. & Parmee I.C. "User-centric Evolutionary Design Systems – the Visualisation of Emerging Multi-Objective Design Information", Xth International Conference on Computing in Civil and Building Engineering, Weimar, 2-4 June 2004. Proceedings on CD.
[32] Packham I.S.J. "An Interactive Visualisation System for Engineering Design using Evolutionary Computing", PhD Thesis, University of Plymouth, December 2003.
[33] Packham I.S.J. & Denham S.L. Visualisation Methods for Supporting the Exploration of High Dimensional Problem Spaces in Engineering Design, in Roberts J.C. (ed.), Int. Conf. on Coordinated & Multiple Views in Exploratory Visualization, London, UK, 15 July 2003. IEEE Computer Society, p.2-13.
[34] Chipperfield A.J., Fleming P.J., Pohlheim H. & Fonseca C.M. "Genetic Algorithm Toolbox for Use with MATLAB", Department of Automatic Control and Systems Engineering, University of Sheffield. Last updated 2 October 2002: http://www.shef.ac.uk/~gaipp/ga-toolbox/
[35] Silverman B.W. Density Estimation for Statistics and Data Analysis, London: Chapman and Hall, 1986.
[36] Tsykin E.N. Multiple nonlinear statistical models for runoff simulation and prediction, J. Hydrol., 1985. 77: p.209-226.
[37] Nash J.E. & Sutcliffe J.V. River flow forecasting through conceptual models, part 1. J. Hydrol., 1970. 10: p.282-290.
[38] Davidson J.W., Savic D.A. & Walters G.A. Rainfall Runoff Modeling using a New Polynomial Regression Method, 4th Int. Conf. on Hydroinformatics, Iowa City, Iowa, USA, 23-27 July 2000. On CD, p.8.
[39] Bresler B. Design Criteria for Reinforced Columns under Axial Load and Biaxial Bending, J. ACI, 1961. 32(8): p.481-490.
[40] Rafiq M.Y., Packham I.S.J., Easterbrook D.J. & Denham S.L. Visualising search and solution spaces in the optimum design of biaxial columns. Submitted for publication to J. Computing in Civil Engineering, ASCE.
[41] Domer B., Raphael B., Shea K. & Smith I.F.C. A Study of Two Stochastic Search Methods for Structural Control, J. Computing in Civil Engineering, 2003. 17(3): p.132-141.