Robust Shape Optimization for Crashworthiness via a Sub-structuring Approach

Milan Rayamajhi and Stephan Hunkeler*, Queen Mary University of London
Fabian Duddeck†, Technische Universität München
Malek Zarroug and Laurent Rota‡, PSA Peugeot Citroën

This paper presents an approach to reduce the computational effort of shape optimization for crashworthiness design. The work presented here focuses on the reduction of the computational effort associated with Robust Design Optimization (RDO) of large industrial models. For a robust design study, the computational effort is determined by two aspects: (i) the time for each crash computation and (ii) the number of robustness evaluations required for each design. To reduce the effort of the first aspect, a sub-structuring method is presented where only a sub-region of a computationally expensive model is considered in the optimization. For the second aspect, a modified double-loop approach is presented where only the designs in the Region Of Interest (ROI), a subset of the total design space, are considered for robustness analysis. For the designs outside the ROI, assumed robustness values can be used or, better, surrogate models (physical or mathematical) are used for these values. At present the Equivalent Static Loads Method (ESLM) (Park (2011) and Kim et al. (2010)) is under investigation for the latter approach. First, the three approaches, the sub-structuring and the two simplified robustness evaluation methods, are implemented and validated separately for shape optimization problems based on simpler test cases (reduced number of design variables and design complexity). These approaches will be integrated into one optimization loop to solve a large-scale industrial optimization problem in the future.
I. Introduction
Recent studies have utilized more and more complex geometries and detailed finite element models to capture the true behavior of structures with respect to crashworthiness. Such model complexity, with the required high non-linearity (material, geometry and contact), leads to a high numerical effort for simulation, which is particularly challenging for optimization studies of industrial-sized problems with their high number of design variables, constraints and objectives, e.g. Duddeck (2008). Standard approaches therefore have only a restricted ability to explore the design space sufficiently well, which becomes even more crucial in cases where the robustness of the derived optima has to be assured (robustness in the sense of insensitivity of the design to inevitable fluctuations in design and noise variables). Nevertheless, an approach where robustness analysis is included in the optimization loop is necessary because the design is normally driven to its limits during an optimization. The objective of the study at hand is therefore to investigate methods to reduce the computational effort of the single simulations such that a sufficiently high number of design evaluations can be performed and a real robust optimization can be established for crashworthiness.

Communicating author: [email protected].
* PhD student, School of Engineering and Materials Science, Queen Mary University of London, Mile End Road, London, E1 4NS, UK.
† Chair of Computational Mechanics, Technische Universität München, Arcisstr. 21, D-80333 Munich, Germany, and Reader for Computational Mechanics at the School of Engineering and Materials Science, Queen Mary University of London, Mile End Road, London, E1 4NS, UK.
‡ PSA Peugeot Citroën, DRIA/SARA/STEO/MAST, Route de Gisy, Bat. 62-1, F-78943 Vélizy-Villacoublay, France.
Association for Structural and Multidisciplinary Optimization in the UK (ASMO-UK)
In the literature, e.g. Sun (2011), robust optimization has been based on surrogate modelling techniques using response surface approaches to replace finite element computations by evaluations of mathematical functions. In contrast to this, it is explored here how physical surrogate models can be used, i.e. models based on a concise sub-structuring. With this approach, the physical characteristics can still be represented sufficiently well even though the numerical effort for each design evaluation is remarkably reduced. This method is particularly beneficial in situations where only a small part or sub-system of a large complex system has to be optimized to improve the performance of the overall system, see Figure 5. The gain in computational time allows for the realization of a true robust design optimization (RDO), a so-called double-loop approach where an additional loop for stochastic analysis is embedded in the optimization loop to assess robustness. In contrast to a single-loop approach, where robustness is only checked at the end of the optimization, this leads to optima under the robustness constraint. In the work presented here, a modified double-loop RDO approach has been implemented for the first time, where the explicit robustness analysis is only performed for designs that are in a special Region of Interest (ROI), see Figure 7. These are designs that are both feasible and in a certain respect optimal (designs closer to the boundaries of the feasible design space). For the designs outside the ROI, assumed robustness values can be used or, better, surrogate models (physical or mathematical) are used for these values. At present the Equivalent Static Loads Method (ESLM) (Park (2011) and Kim et al. (2010)) is under investigation for the latter approach.
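The ROI-filtered double-loop idea can be illustrated with a small sketch. This is a schematic Python stand-in with a hypothetical one-dimensional analytic response in place of the crash simulation; all function names and thresholds are illustrative assumptions, not part of the actual tool chain. The expensive inner robustness loop runs only for feasible designs close to the constraint boundary; elsewhere an assumed value is substituted:

```python
import random

def crash_objective(x):
    # Stand-in for an expensive crash simulation response
    # (hypothetical analytic function).
    return (x - 0.7) ** 2

def constraint(x):
    # Design is feasible when g(x) <= 0 (hypothetical constraint).
    return 0.2 - x

def in_roi(x, tol=0.15):
    # Region Of Interest: feasible designs close to the constraint
    # boundary, where robustness actually needs to be checked.
    g = constraint(x)
    return g <= 0.0 and abs(g) < tol

def robustness(x, sigma=0.05, n=50, seed=0):
    # Inner loop: sample noise around the design and return the
    # standard deviation of the response (smaller = more robust).
    rng = random.Random(seed)
    samples = [crash_objective(x + rng.gauss(0.0, sigma)) for _ in range(n)]
    mean = sum(samples) / n
    return (sum((s - mean) ** 2 for s in samples) / n) ** 0.5

def evaluate(x, assumed_robustness=0.0):
    # Outer-loop evaluation: run the expensive inner robustness loop
    # only inside the ROI; elsewhere use an assumed (surrogate) value.
    f = crash_objective(x)
    r = robustness(x) if in_roi(x) else assumed_robustness
    return f, r
```

In a real RDO, the assumed value outside the ROI would come from a physical or mathematical surrogate such as the ESLM mentioned above rather than a constant.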
Through this approach, the number of explicit design evaluations required in an RDO can be significantly reduced, and large industrial problems can be optimized at a scale rarely achieved before. The sub-structure optimization approach presented here is based on the idea that the sub-structure and the remaining structure are coupled to each other. Hence the interaction between the sub-structure and the remaining structure is taken into consideration in the optimization process by controlled updating of the boundary conditions at the interface of the sub-structure, see Figure 6, cf. also Averill (2004). Here a nodal-based interface boundary condition is used, which is updated at each new iteration. For validation, the three approaches are first validated separately on various analytical functions and small problems, taking shape and thickness modifications as design variables. This paper ends with an outlook and a proposal for the final robust design optimization loop in which the three approaches are coupled together.
II. Shape Optimization for Crashworthiness
A. Introduction

Recently, the research in crash optimization shifted from simple size optimization considering the panel thicknesses as design variables, e.g. Duddeck (2008), to more powerful design tasks such as identifying the best topology and shape of a car body structure with respect to crashworthiness, e.g. Zimmer et al. (2009). These methods address structural aspects in earlier design stages, where the structural concept is not as frozen as at the end of the product development process (PDP). To realize shape optimizations, several techniques have been discussed in the literature. A classical approach to modify the shape of a structure is based on so-called morphing techniques, e.g. Georgios et al. (2009). For most of these approaches, morphing boxes have to be defined, which can be a rather cumbersome process and demands relatively high expert knowledge to assure a successful optimization. Special morphing parameters or handles are used, which are then linked to the changes of the structural shape. This requires a very careful selection of the parameters, and it is, to the authors' knowledge, not always possible to obtain the intended alterations of one structural component without unwelcome side effects. Modifying a box and the related parameters does not give the required freedom in design variables to really obtain optimal shapes. It seems more appropriate to use parameters directly linked to the structure, like cross-sectional corner points, angles and radii, or, in three-dimensional parameterizations, curvature, joint positions or other connection types of several components. A mesh-based morphing approach seems more appropriate, but two problems remain: first, the assurance of mesh quality in the morphing process for larger geometrical changes, and second, the maintenance of the connectivity between the parts of the vehicle. The connections (tied, spotwelds, glue etc.) should automatically follow the variations
even in cases where the shape modification swaps from a two-component to a three-component connection or flange (see Figure 1).

Figure 1: Example for a complex connectivity situation.

The implicit parameterization technique, the mapping methodology, the re-meshing algorithm, and the appropriate definition of variables in SFE CONCEPT (www.sfe-berlin.de) now allow larger geometrical changes to explore the full potential of geometry optimization. This offers the possibility to vary not only shape but also topology. For example, the number of beams, holes, beads, reinforcements, spotwelds etc. can be changed easily, which is difficult via morphing technology. To further improve the usability in the frame of industrial product development, the integration into CAD modeling (e.g. integration into CATIA) and the extension to package parameterization were realized recently. This now enables a very efficient design process, especially in the earlier phases of product development. The full optimization chain can be realized, starting from layout/package definition and optimization, proceeding to topology and shape optimization and finishing with fine tuning by size and material optimization. The results from recent academic and industrial studies for components as well as for larger parts of the vehicle are presented in Duddeck et al. (2012). Special emphasis lies here on appropriate and efficient parameterizations for crash, leading to optimal shapes of car body structures (e.g. bumper, front rail and full front end).

B. Parameterization Technique for Shape Optimization & Crash

In the literature, there is a large number of different approaches to parameterize geometry for shape and topology optimization. Often a CAD (Computer Aided Design) software is used to define the parameters, e.g. CATIA or NX (Siemens), e.g. Dietrich (2013). As an alternative, some CAE (Computer Aided Engineering) tools offer the possibility to use a parametric description for optimization, e.g. the ANSYS Parametric Design Language (APDL), e.g. Hou et al. (2007).
The CAD-CAE integration is today an important aspect to assure smooth transitions and good simulation data management (SDM) between the different departments involved in car development. Ideally, we have a parametric description of the geometry which can be used in both areas. Regarding the publications related to crashworthiness and shape optimization, the first works on CAD parameterization were published by Chiandussi and his co-workers, see Avalle et al. (2002); Chiandussi et al. (1998, 2000, 2002). The changes in geometry (2D and 3D) were first realized in a CAD software and then transferred via a boundary surface displacement field to the CAE model. With this approach, they optimized a single component (a tapered beam in Chiandussi et al. (2002), and a tapered beam and crash box in Avalle et al. (2002)). The changes in geometry were small and the complexity of the parameterization was moderate (e.g. length and smaller diameter of the tapered zone). Neither the range of geometrical change nor the definition of parameters led to problems with respect to connectivity between parts, which normally occur in more complex cases. Farkas et al. (2009) used the explicit parameterization offered by the CAD tool CATIA to establish a CAD-integrated optimization procedure for the bumper. For this, the CAD model was preprocessed (e.g. defeaturing of small geometrical details to avoid small elements and small time steps) and an automated meshing was used. The parameterization was carefully chosen such that the connectivity was not touched, i.e. the issue of adapting adjacent components and joints did not become relevant. This restriction in applicability can be overcome by implicit parameterization, where the structure or package parameters are defined implicitly, which is discussed in the next section.

C. Implicit Parameterization via SFE CONCEPT for Crash

SFE CONCEPT uses implicit parametric geometry models to easily realize shape or topology changes.
From these models, the FE meshes and models are generated; hence a re-meshing procedure can be included in the optimization process to enable large geometrical changes. The resulting model can then be computed with all important commercial FE solvers (not only for crash). As shown in Figure 2, the parametric model consists of Influence Points (IP), which are defined for Base-Lines (BL) and the corresponding Base-Sections (BS). Together, these define the beams/members of the structure. The BSs can be derived from the cross-sections of existing vehicle FE models or are defined from scratch. Their shape can be modified via the influence points, changing size, proportions, connection locations, angles etc. More complex shape variations can be realized by using special mapping techniques. The mapping of IPs to other geometrical objects is essential to the implicit parameterization because it defines that certain geometrical features follow the changes of other objects. Hereby it is not necessary that the objects correspond to real parts of the structure. Thus a high flexibility in defining geometrical variations and structural dependencies is obtained.
To create more complex structures, joints are generated at the connection points of beams based on the cross-sections of the adjacent members. Here additional variability (e.g. tangents at the interfaces) can be introduced. Again, by the appropriate definition of IPs and mappings, the connected components follow the changes of each involved part controlled by the optimizer. Finally Free Surfaces can be included to represent the large panels of the automotive structures or to model more complex geometries. Because all connectivities are defined implicitly, the adjacent parts follow the changes realized during an optimization for one component. This is essential for the success of automated optimization procedures of more complex structures.
Figure 2: SFE CONCEPT objects, Zimmer et al. (2009).
For all components, the joints and (multi-)flanges can be defined, which also follow the geometrical changes. The number of discrete joints adapts automatically to the changes of the structure. Even in cases where the shape optimization moves a component "A", initially connected to a part "B", such that it becomes adjacent to a third component "C", the connection is maintained due to the mapping definitions. The optimization is finally prepared by interactively defining so-called Parametric Records, which are based on the upper and lower bounds of the shape parameters. Because the optimization model is defined as an SFE CONCEPT model, it can be used to generate different finite element models for different functional assessments (all crash load cases, stiffness, durability, structural dynamics, acoustics, etc.). Large geometrical changes are possible because the mesh is only generated after the shape modification (this can be regarded as re-meshing during the optimization). Hence, in contrast to morphing approaches, no critical mesh distortions occur. The re-meshing is challenging for the optimization algorithm because it introduces jumps in the objective functions and constraints in cases where the results are mesh-dependent. For crash optimization this is not really relevant because it is well known that crash simulations are not exactly repeatable (in particular when computed in parallel). The numerical or physical noise due to stability or contact bifurcations and/or numerical rounding errors also occurs in optimizations without re-meshing procedures. This is not discussed in detail here; the reader is referred to the review paper Duddeck (2007).
During the optimization process, further aspects are important for the applicability in the industrial context; these are just mentioned here and are easily realized by the implicit parameterization and the mapping technology (SFE CONCEPT):

Establishing a fully automated optimization loop (eventually a double loop for robust design optimization): Here a complete chain of virtual design steps is needed, with interfaces automatically coupling parametric modeling, FE mesh generation, FE model assembly, model check, FE computation and post-processing, optimization algorithm and optimization monitoring into an overall process, ideally for a multi-criteria and multi-dimensional optimization process, see Figure 4.

Embedding a parametric sub-model into an existing FE model: It is often important to realize a shape optimization defined for a sub-structure and to be able to update the sub-structure interface conditions via a full model computation. For this, the parametric sub-structure model should be mapped into the complete model. Then, optimizations of a full vehicle family (platform approach) can be realized and the corresponding constraints can be included into the optimization, Figure 5.

Assembling several parametric and non-parametric models: For the crash load cases, additional models for dummies, barriers, seats and airbags should be added to the structure to be optimized. Other
components (driveline, exhaust system etc.) may be added as well. Again, it is important here that the different parts are defined by implicit parameters when the connections and the interfaces vary during the optimization.

Realizing a modular approach via a library function: Via a library function, pre-optimized components and geometrical features (beads, holes, reinforcements, crush initiators etc.) can be stored and re-used for new vehicle concepts. They can adapt automatically to the new geometrical dimensions if they are defined via implicit parameters (Figure 3 gives an example where local structural features are mapped into an existing floor structure).

Coupling parametric structure and package definitions for optimization: New conceptual geometries are first defined via the package, which can be modeled as well via the implicit parameterization approach presented here. They describe the geometrical boundaries in which the vehicle structure can be created. The parametric description enables sensitivity studies of package decisions and the mapping of library components into newly defined package volumes, which often corresponds to the design spaces for the optimization.
Figure 3: Inserting parameterized details into vehicle platform, Zimmer et al. (2009).
D. The Optimization Environment

In this study, an optimization loop is used with SFE CONCEPT, SFE CONCEPT (2010), as the geometry and mesh generation CAE tool, RADIOSS, RADIOSS (2012), as the explicit finite element solver and optiSLang, optiSLang (2010), as the optimizer. The design geometry is created and meshed in SFE CONCEPT. The FE definitions are exported in RADIOSS format and complemented by additional external FE definitions such as material parameters and impactor models using Perl scripts. This compiled file is then solved explicitly using RADIOSS. For the optimization, variations of the shape and thickness parameters are recorded in SFE CONCEPT. OptiSLang is used as the optimization tool that controls the optimization loop. The parameter bounds (allowed upper and lower bound of each parameter) are fed into the optimizer. The optimizer uses the records of parameters (created in SFE CONCEPT) to generate new designs within the defined parameter range. Designs are created in SFE CONCEPT corresponding to the parameters generated by optiSLang. These designs are meshed and a loop is created as shown in Figure 4.

Figure 4: Optimization environment (Volz et al. (2007)).
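The control flow of this loop can be condensed into a small driver sketch. This is a schematic Python stand-in, not the actual optiSLang/SFE CONCEPT/RADIOSS coupling: each stage of the chain is represented by a callable so that the sequence of Figure 4 is explicit, and the toy optimizer step is an assumption for illustration:

```python
def run_optimization_loop(make_design, mesh_and_solve, postprocess,
                          propose_next, x0, n_iter=5):
    """Generic driver for a geometry -> mesh/solve -> postprocess ->
    optimizer chain.

    make_design    : builds a design for parameter vector x
                     (in practice, an SFE CONCEPT geometry/mesh export)
    mesh_and_solve : runs the FE solver on the design (e.g. RADIOSS)
    postprocess    : extracts the objective value from the results
    propose_next   : optimizer step proposing new parameters from the
                     evaluation history (e.g. optiSLang)
    """
    x, history = x0, []
    for _ in range(n_iter):
        design = make_design(x)          # generate parametric design
        results = mesh_and_solve(design)  # mesh + explicit FE solve
        f = postprocess(results)          # objective extraction
        history.append((x, f))
        x = propose_next(history)         # optimizer proposes next design
    return min(history, key=lambda rec: rec[1])
```

A toy instantiation with analytic callables (identity "solver", quadratic objective, fixed-step proposals) reproduces the loop structure without any FE computation.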
III. Sub-structuring Approach for Crash
A. Introduction

There have been several studies to reduce the computational effort of crash simulations. These approaches include simplified/reduced models, Stigliano et al. (2010), the macro element method, Abramowicz (2006), multi-body modeling, Sousa et al. (2008), crash mode matching, Karim (2008), and the equivalent static loads method, Park (2011) and Averill (2008). Each of these approaches is best suited to certain types of problems; e.g. the macro element method can be applied to problems in early design phases. Similarly, sub-structuring is the best approach for the problem faced in this study. The application of all other methods listed above is limited in this study due to the fixed FE model provided by the industry, see Figure 5. A sub-structure method is useful where only a sub-region of a computationally expensive problem is of interest for the optimization study. This means that we are interested in optimization problems where the design variables are only defined in a smaller part of the total structure. The usage of sub-structures for crash optimization is rarely discussed in the literature. The only comparable work found so far was realized by researchers from Red Cedar Technology, Averill (2004) and Averill et al. (2008), although these publications do not provide thorough validation or documentation.
Figure 5: Structure and boundary condition update loop.
As an example of the sub-structuring process, a half car FE model is taken into consideration, see Figure 5. The goal of this study is to optimize the performance of the B-pillar in the event of a lateral crash. Hence the presentation and the discussion of the sub-structure approach are based on this example.

B. Description of the Approach

In this paragraph, the methodology for sub-structure optimization presented in the literature, Averill (2004) and Averill et al. (2008), is discussed briefly. We believe that this method is advantageous to reduce the computational effort of Robust Design Optimization studies where numerous computational runs are required. In this method, only a sub-structure of a large structure is taken for optimization. The coupling of the sub-structure and the remaining structure is taken into consideration by interface boundary conditions (where the cut is made for the sub-structure). These interface boundary conditions are obtained from an analysis of the large structure which contains the sub-structure. Generally, the boundary conditions at the interface of the sub-structure are given by the imposed displacements of the nodes over time. The method then tries to find the optimal sub-structure for certain criteria and, in addition, the associated interface conditions at which this sub-structure exhibits optimal performance. In Averill (2004) and Averill et al. (2008) it is stated that the performance of the optimization can be improved by controlling the magnitude of the boundary condition changes in the sub-model from one iteration to another and by making sure that an optimized sub-model design has a robust performance under a set of varying interface conditions. The second point means that it is advantageous to perform a robust design optimization on the sub-structure level, where the uncertainty of the interface conditions is taken into account during optimization by a stochastic variation of this set of boundary conditions.
The stochastic variation of the boundary conditions also contributes to the robustness of the final optimum design in terms of the interface conditions. In the method published by Averill (2004) and Averill et al. (2008), the sub-model is optimized with the initial boundary conditions until a desired improvement is made (or until convergence). A relatively high number of sub-model (i.e. local) evaluations is made during this step of the optimization. The optimization is restarted each time the interface conditions on the sub-structure have to be updated. The author recommends 5-15 global iterations, i.e. interface updates; hence 5-15 local optimizations are performed (depending on the problem), Averill (2011). Since the literature does not document the particularities of the approach, there are a few critical steps that need investigation and discussion. These particularities include: the use of a start/stop versus a continuous approach for the optimization, the automated update of the boundary conditions for the continuous approach, parameter coupling, impact scaling, the location of the cut for the sub-structure, and the insertion of this approach into the overall loop. Further discussion can be found in Rayamajhi (2011b).
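The stochastic variation of the interface conditions can be sketched as follows. This is a minimal Python illustration; the uniform scaling of the displacement histories is an assumed perturbation model chosen for brevity, whereas real interface scatter would be extracted from repeated full-model runs:

```python
import random

def perturb_interface(displacements, rel_sigma=0.03, seed=None):
    # Scale all nodal displacement histories by one random factor to
    # mimic uncertainty in the interface conditions (assumed, simplified
    # perturbation model).
    rng = random.Random(seed)
    factor = 1.0 + rng.gauss(0.0, rel_sigma)
    return [[u * factor for u in node_history]
            for node_history in displacements]

def robust_submodel_objective(evaluate_submodel, displacements,
                              n_variations=5, seed=0):
    # Evaluate the sub-model under several perturbed interface condition
    # sets and return the mean and worst-case response, so that the
    # optimizer rewards designs that stay good under interface scatter.
    responses = [
        evaluate_submodel(perturb_interface(displacements, seed=seed + i))
        for i in range(n_variations)
    ]
    return sum(responses) / len(responses), max(responses)
```

An optimizer minimizing the worst-case value of this pair would implement the recommendation above: the sub-model optimum remains acceptable under a set of varying interface conditions.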
C. Proposed Sub-structure Optimization Approach

Here we propose a continuous sub-structure optimization (as opposed to the start/stop optimization in the literature) through an automatic boundary condition update criterion. The algorithm considered here is the ARSM (adaptive response surface method), whose settings are explained in Paragraph 3 below.

1. Definition of the Sub-structure

For the definition of a sub-structure, a slightly larger part than the actual design region has to be defined to avoid that the interface conditions between the sub-structure and the total structure are correlated too strongly with the design modifications during optimization. The computation of the sub-structure should be much faster than that of the total problem. As an example for the industrial optimization problem, the parameterized B-pillar including the bottom and the top part is defined here as the sub-structure, Figure 5. We have to make sure that this sub-structure is sufficiently large such that it is reasonable to assume that the interface values between the sub-structure and the remaining structure do not vary too strongly during optimization. On the other hand, the sub-structure should be small enough to increase numerical efficiency. To assure that the simulation of the sub-structure is sufficiently correct, the boundary nodes of the sub-structure are subjected to interface conditions mimicking the deformation behavior of the total structure. These interface conditions are obtained from an initial simulation of the full structure, in which the SFE CONCEPT B-pillar has replaced the original PSA B-pillar. A special connection process was used here to connect the parametric B-pillar model to the PSA FE model (here the option "FE CONNECT" of SFE CONCEPT was used). The interface between the parametric B-pillar model and the PSA FE model is located at the sections where the model was cut out from the main structure to generate the SFE CONCEPT model.
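To make the interface handling concrete, the sketch below shows how nodal displacement histories at the cut sections might be extracted from a full-model result and rewritten as imposed-displacement definitions for the sub-model. The data layout (node id mapped to a list of time/displacement tuples) and the plain-text output format are assumptions for illustration; the real loop would emit solver-specific imposed-displacement cards for RADIOSS:

```python
def extract_interface_histories(full_model_results, interface_node_ids):
    """Pull the nodal displacement histories at the cut sections from a
    full-model crash run. The results are assumed to be a dict mapping
    node id -> list of (time, ux, uy, uz) tuples (hypothetical layout)."""
    return {nid: full_model_results[nid] for nid in interface_node_ids}

def write_imposed_displacements(histories):
    """Turn the extracted histories into imposed-displacement records
    for the sub-model, one line per node and time step (a simplified,
    solver-neutral text format for illustration)."""
    lines = []
    for nid, history in sorted(histories.items()):
        for t, ux, uy, uz in history:
            lines.append(f"{nid} {t:.4e} {ux:.4e} {uy:.4e} {uz:.4e}")
    return "\n".join(lines)
```

Only the nodes on the cut sections are carried over; all other full-model results are discarded, which is precisely where the saving in model size comes from.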
In our study, the interaction between the model and the sub-model is taken into account using imposed displacements on the nodes of the sub-structure at the interface sections. The interaction values are crucial for the correctness of the simulations and have to be taken into consideration in the optimization. Depending on the interaction intensity between the model and the sub-model, slight changes made to the sub-model during optimization could influence the performance characteristics of the main model.

Figure 6: Sub-structure optimization loop.

2. Sub-structure Optimization Loop

In the sub-structure optimization loop, the B-pillar geometry shape is changed according to the SFE CONCEPT records, and the sub-structure is optimized. During this sub-structure optimization, the interface conditions will lose their correctness as long as they are not updated by a computation within the total model. Hence the modified sub-structure should be inserted again, after a certain optimization phase, into the rest of the structure to obtain the new interface boundary condition values. The corresponding insertion and evaluation process is shown in Figure 5.

3. Adaptive Response Surface Settings

To realize the sub-structure approach for optimization, the adaptive response surface method (ARSM) provided by optiSLang is considered here. At first the focus was set on testing the sub-structure approach through a continuous optimization loop rather than the start/stop method implemented in Averill (2004). Hence the ARSM optimization was slightly modified to support the sub-structure approach. The study to characterize the sensitivity (coupling analysis) of the boundary conditions to changes in the design variables revealed the need to update the boundary conditions during optimization, since there is a moderate coupling between the sub-structure and the remaining structure, see Rayamajhi (2011a). Hence for the optimization to be continuous, the update of the boundary conditions had to be considered. The
update in a continuous optimization loop will introduce various uncertainties into the performance of the optimization, since the updated interface boundary conditions will confuse the optimization algorithm. Therefore either an algorithm that is aware of and can handle these external influences, or an algorithm that does not sustain information from iteration to iteration, is required. The second option is preferable as it avoids writing a whole new algorithm. The idea here is to introduce the boundary condition update at the start of each new iteration. This way the algorithm, ARSM, takes the external influence as the effect of the new design parameters in the new design sub-region. This is why the ARSM method was chosen: the information from one iteration to another is not sustained, except for the sub-space optimum. Also, the readily available information on the optimum of each iteration is of importance; to obtain such information, additional modifications would have to be made to other algorithms such as evolutionary algorithms (EA). The significance of the sub-space optimum is that the boundary condition update is made for this design at each iteration. The current sub-structure optimization loop is given in Figure 6. The update criterion at present is tied to the start of each new iteration. This is not the best update criterion, since the chance of missing an update point within an iteration is high (depending on the amount of parameter change). Also, making too many unnecessary updates can increase the computational effort due to the need for full model evaluations to extract the interface boundary conditions. Therefore other update measures have to be investigated and implemented in the future. Also, the applicability and the generality of the method have to be analyzed. The sub-structure optimization method can generally be applied to any problem that falls into the category defined in Section III.B.
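The iteration scheme described above, an interface update at the start of each iteration followed by a search in a shrinking design sub-region, can be sketched as follows. This is a deliberately simplified, one-dimensional stand-in for the optiSLang ARSM, using a deterministic candidate grid instead of a real design of experiments and response surface; all names and defaults are illustrative:

```python
def substructure_optimization(evaluate_sub, update_interface, x0, bounds,
                              n_global=5, shrink=0.5, n_samples=8):
    """Continuous sub-structure optimization with a boundary condition
    update at the start of every iteration (schematic ARSM-like loop)."""
    x_best = x0
    half_width = (bounds[1] - bounds[0]) / 2.0
    bc = update_interface(x_best)        # full-model run -> interface BCs
    for _ in range(n_global):
        lo = max(bounds[0], x_best - half_width)
        hi = min(bounds[1], x_best + half_width)
        # Sample the current design sub-region (stand-in for the DOE and
        # response surface of the ARSM).
        candidates = [lo + (hi - lo) * (i / (n_samples - 1))
                      for i in range(n_samples)]
        x_best = min(candidates + [x_best],
                     key=lambda x: evaluate_sub(x, bc))
        bc = update_interface(x_best)    # refresh BCs at the sub-space optimum
        half_width *= shrink             # zoom into the design sub-region
    return x_best
```

Because only the sub-space optimum survives from one iteration to the next, refreshing the boundary conditions at the iteration boundary is absorbed by the algorithm as if it were part of the new design sub-region, exactly the property argued for above.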
However, the optimization algorithm is at present limited to the ARSM. Hence the possibility of adapting this method to other algorithms should be examined in the near future. Due to length limitations, the reader is directed to Rayamajhi (2011a) for the application of this approach to an industrial problem.

D. Conclusion of the Part on Sub-structuring

The sub-structuring technique is promising for the problem at hand to reduce the simulation time of each design in robust design optimization. For the half car model, the simulation time was reduced from 12 hours to 3 hours. The simulation of the sub-structure is still expensive, but it can be argued that, using the computational resources available in the automotive industry, this is a major improvement in the simulation requirement. However, in our case the computational effort for the robust design optimization has to be reduced further to be able to investigate more design cases. Hence we present in the next section an approach to reduce the overall computational effort for simultaneous robust design optimization.
IV.
Robust Design Optimization for Crash
A. Introduction
Previous design approaches took a deterministic view of structural optimization. A deterministic model of a structural system is obtained by considering only the nominal values of the parameters, which means that the objective functions and the constraints are also calculated with these values. The system variation that causes variability in the performance is normally accounted for only through safety factors: the larger the uncertainty, the larger the safety factor has to be. Owing to the lack of knowledge of the scatter in the structural performance, the specified safety factors may be too conservative or too dangerous. The optimum obtained from a deterministic optimization performs well under ideal conditions, but in the presence of uncertainties it may actually exhibit poor performance, while another design with non-optimal nominal performance may be less sensitive to parameter variations; the latter design would then be the more robust one. For this reason, the deterministic approach does not yield the best design solution in the presence of uncertainties. Moreover, in structural engineering the interest should be in an improved design that exhibits robust characteristics under all conditions, rather than an optimal design whose performance deteriorates with changing conditions. Hence a robust design approach has to be implemented to search for designs that are less sensitive to uncertainties, e.g. Kang (2005). Some of the most relevant sources of uncertainty to consider in crashworthiness are presented in Duddeck (2007) and Hunkeler et al. (2010). Very few studies have included uncertainties in crashworthiness optimization. One of the reasons is that the underlying computational effort is enormous: since crashworthiness optimization is already expensive, adding robustness increases the computational effort significantly.

†† E.g. the use of a Perl script to find the optimum of each generation. Association for Structural and Multidisciplinary Optimization in the UK (ASMO-UK)

A comprehensive survey on robust optimization for crashworthiness is presented in Duddeck (2007), Stocki et al. (2007) and Hans et al. (2007). The application of some of these approaches to industrial problems can be found in Will et al. (2006), Lee et al. (2006), Koch et al. (2004) and Hunkeler et al. (2010). Previous studies in robust design optimization have been based on small-scale problems with a low number of design variables. The approach presented in this paper could facilitate robust design studies of complex, large-scale problems with a large number of design variables.

B. Double Loop or Simultaneous RDO for Crash
Design robustness has generally been considered through three different approaches, termed Taguchi's method (Design for Quality), robustness analysis and double loop RDO. This study focuses on the double loop RDO approach, since we want to optimize the design while simultaneously assessing its robustness. For information on Taguchi's method and robustness analysis, the reader is referred to Hans et al. (2007) and Hunkeler et al. (2010) respectively. Robustness analysis can only assess and classify a design as being robust or not; it does not assist in finding a robust design. Hence a combination of optimization and robustness in the same study, a Robust Design Optimization (RDO) approach, is required. This method assesses the robustness while optimizing the designs at the same time. The final outcome of the study is an optimum design that is insensitive to variations, without the need for an extra robustness analysis study. The RDO approach, also known as parallel robust design optimization, works in a double loop: the inner loop is concerned with the robustness analysis of the designs, and the outer loop performs the design optimization. In a classical approach, the robustness of each design at each iteration is analyzed.
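The classical double loop can be made concrete with a small sketch (an illustration under assumed functions, not the paper's setup): `performance` stands in for a crash response with additive noise, the inner loop samples the noise to estimate mean and sigma, and the outer loop ranks every candidate by a weighted sum of the two:

```python
import random

def performance(design, noise):
    """Hypothetical scalar crash response; a quadratic stand-in for a
    crash simulation output (the real response would come from FE runs)."""
    return (design - 2.0) ** 2 + noise

def robustness(design, n_samples=200):
    """Inner loop: sample the noise variable and return (mean, sigma)."""
    rng = random.Random(0)  # fixed seed so every design sees the same noise
    samples = [performance(design, rng.gauss(0.0, 0.1))
               for _ in range(n_samples)]
    mean = sum(samples) / n_samples
    sigma = (sum((s - mean) ** 2 for s in samples) / n_samples) ** 0.5
    return mean, sigma

def classical_double_loop_rdo(candidates, weight=1.0):
    """Outer loop: every candidate pays the full cost of the inner
    robustness loop and is ranked by mean + weight * sigma."""
    def score(design):
        mean, sigma = robustness(design)
        return mean + weight * sigma
    return min(candidates, key=score)

best = classical_double_loop_rdo([0.0, 1.0, 2.0, 3.0])
```

The cost of this scheme is the product of the two loops: every outer-loop design multiplies the simulation count by `n_samples`, which is exactly the burden the modified approach in the next section tries to avoid.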
This method is, however, overshadowed by the number of robustness analyses (evaluation points) required for each design in the optimization. Hence the next section provides a modification of this approach which performs the explicit robustness analysis only on a subset of the total design space.

C. Modified Double Loop RDO
The RDO method tries to find the region within the design space where the objective is minimal (or maximal) and also insensitive to unavoidable fluctuations in design and noise variables. In a constrained optimization problem the design space can be divided into feasible and infeasible regions. Promising solutions, including the optimum, lie within the feasible design space. The designs in the infeasible region are of no significant interest in terms of robustness, since they have already failed to satisfy the performance criterion; the extra computational effort should therefore not be spent on evaluating the robustness of these infeasible designs. The performance of this approach nevertheless depends on the type of optimization algorithm used. If response surfaces are used for RDO, then robustness evaluations of all points in the design space are necessary, because these points are required to build the response surfaces for µ and σ (mean and standard deviation) of the robustness behavior. Skipping the robustness of infeasible designs would reduce the number of points used to generate these response surfaces (especially at the beginning of the optimization, when many designs may be infeasible) and hence reduce their quality. Therefore the proposed RDO approach is investigated using an Evolutionary Algorithm (EA). For the proposed RDO approach, the feasible design space is further divided into two regions: the designs that are close to the feasible region
Figure 7: Region Of Interest (ROI) for robustness analysis.
Figure 8: Different cases for the decision on dummy robustness values.
boundary and the designs that are far away from it. Since the optimization tries to push the designs to the boundary of the feasible design space, the sub-optimal solutions lie within this region. Because we want designs that are both optimal and robust, we only evaluate the robustness of these sub-optimal designs. For this purpose an offset distance is created from the constraint boundary. The space between the constraint boundary and its offset defines the ROI for the robustness evaluation, see Figure 7. The robust optimum design lies within this ROI. For the designs within the ROI a true robustness analysis is made, whereas for designs outside the ROI dummy robustness values are created. To generate the dummy robustness values, the output value of the optimization point is used: robustness outputs are created around the optimization output using a uniform distribution. In case no robust design is found within the defined ROI, the ROI can be relaxed by using a bigger offset distance. This can occur when the constraints are not really active in the optimization. It is therefore important to formulate a good ROI based on experience or available data. The proposed approach should reduce the computational effort for RDO significantly, especially at the beginning of the optimization, where many designs may be infeasible or non-optimal. Most of the robustness analyses will hence be performed only towards the end of the optimization, when the algorithm has converged and most solutions are sub-optimal. Certain specificities of this approach are discussed in the following paragraphs.
Figure 7: Modified double loop RDO approach.
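The decision logic of Figure 7 reduces to a three-way classification per design. A minimal sketch, using the deflection numbers from the beam example later in the paper (constraint d ≤ 30 mm, offset at 20 mm; the function name and labels are illustrative, not from optiSLang):

```python
def classify(response, limit=30.0, offset=20.0):
    """Decide how to treat a design based on its constraint response.
    Defaults follow the bending-beam example: deflection d <= 30 mm,
    with the ROI offset placed at 20 mm (ROI band of size 10 mm)."""
    if response > limit:
        return "infeasible"         # constraint violated: no robustness run
    if response >= offset:
        return "in-ROI"             # near the boundary: true robustness analysis
    return "feasible, outside ROI"  # far from the boundary: dummy values
```

Only designs classified `"in-ROI"` trigger the expensive inner robustness loop; the other two classes cost nothing beyond the single optimization run.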
1. Characteristics of the approach

Choice of algorithm
This approach cannot easily be used with the ARSM, as explained in an earlier paragraph.

Dummy robustness values
Dummy robustness values are created for designs outside the ROI. They are generated using the nominal value of the optimization point and a constant sigma value, and are uniformly distributed around the optimization point. In RDO the designs are ranked in terms of both the mean and the standard deviation (sigma). Since the sigma value here is the same for all designs outside the ROI, these designs are ranked by their nominal values only. This helps the algorithm select designs for future generations based on real outputs rather than dummy outputs. The size of the dummy sigma value plays a vital role in the robustness characterization of the designs outside the ROI. If the dummy sigma value is too big, a design that is in the feasible space but outside the ROI may have most of its robustness points ending up in the infeasible space, which makes the design appear non-robust, see Figure 8a; this is misleading, since dummy values are used. At the same time, the dummy sigma value must be bigger than the true sigma values to avoid an exchange of designs at the offset boundary that makes a non-robust design appear robust. For example, consider two designs at the offset boundary with similar objective values: design A outside the ROI and design B inside the ROI. If design A has a smaller sigma than design B, design A may be taken as the robust optimum instead of design B, which is incorrect because design A's sigma is a dummy value, see Figure 8b. Hence a preliminary robustness study on some designs, or experience, can be used to assign the dummy sigma value. The distribution of the dummy robustness outputs is also important: they have to be well distributed around the optimization point for a correct characterization of robustness, see Figure 8c. A uniform or a normal distribution of the dummy outputs around the design can be used.
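The uniform variant of the dummy-value generation can be sketched directly (an illustration, not the optiSLang implementation; the function name, the default sample count, and the example numbers are assumptions). For a uniform distribution the standard deviation equals the half-width divided by √3, so the half-width is chosen as `dummy_sigma * sqrt(3)`:

```python
import random

def dummy_robustness_samples(nominal, dummy_sigma, n=30, seed=42):
    """Generate dummy robustness outputs uniformly distributed around the
    nominal optimization output with a constant (deliberately large) sigma.
    For a uniform distribution: sigma = half_width / sqrt(3)."""
    rng = random.Random(seed)
    half_width = dummy_sigma * 3 ** 0.5
    return [rng.uniform(nominal - half_width, nominal + half_width)
            for _ in range(n)]

# Example: dummy outputs around a nominal deflection of 26.7 mm.
samples = dummy_robustness_samples(nominal=26.7, dummy_sigma=1.5)
```

Because every outside-ROI design receives the same `dummy_sigma`, their ranking is driven purely by the (real) nominal values, as required by the approach.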
Constraints
The offset distance of a constraint has to be chosen depending on how active the constraint is in the optimization: the more active the constraint, the smaller the offset distance. Multiple constraints also have to be considered. This matters in cases where one output is within the ROI while another is outside it. In such cases the constraints can be scaled in terms of their importance, and the decision on the robustness analysis made according to the driving constraint. Another approach is to combine the constraints using their importance factors. Different offsets can be formulated for different constraints depending on their activity in the optimization: an active constraint would have a small offset, whereas a less active constraint would have a bigger one. If this approach is implemented, a criterion can be set such that all constraint offsets have to be satisfied for a design to qualify for the robustness analysis. To decide how active a constraint is and which offset to take, a preliminary study can be made on a few designs. A general rule for the offset distance is given in Equation (1), where σ_F is the dummy sigma value and σ_R the real sigma value:

2σ_F ≥ offset distance ≥ 2σ_R.    (1)

Convergence
With this approach the reduction of computational effort depends on how quickly the optimization converges. In case of fast convergence, the robustness analysis has to be made for almost all designs in the following generations, which reduces the computational advantage of this RDO approach. It is therefore advantageous to implement an adaptive ROI concept during the optimization; this concept is presented in the paragraph Adaptive ROI.

Sample recycling
Sample recycling cannot be used in this approach because of the generated dummy robustness values: if these dummy values were chosen as recycled samples, the evaluated robustness would be incorrect.
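The per-constraint criterion described under Constraints, where every constraint carries its own activity-dependent offset and all ROI bands must be satisfied, can be sketched as follows (the function, the two hypothetical constraints and their offsets are illustrative assumptions):

```python
def in_roi(responses, limits, offsets):
    """A design qualifies for true robustness analysis only if, for every
    constraint, its response is feasible AND lies inside that constraint's
    ROI band [limit - offset, limit]."""
    return all(lim - off <= r <= lim
               for r, lim, off in zip(responses, limits, offsets))

# Two hypothetical constraints: an active one (small offset, per the rule
# "the more active the constraint, the smaller the offset") and a lax one.
limits = [30.0, 50.0]
offsets = [5.0, 15.0]
qualifies = in_roi([28.0, 40.0], limits, offsets)
```

A design that violates any constraint, or sits too far inside any constraint boundary, is handled with dummy values instead of a full robustness run.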
Do not solve duplicate designs
The "do not solve duplicate designs" option available in optiSLang (or the equivalent option in any other optimization package) has to be disabled, because otherwise the generated dummy robustness values may be reused for actual optimization points, or vice versa, making the robustness evaluation incorrect.

Adaptive ROI
As the optimization algorithm converges, most of the designs fall within the ROI, see Figure 11. This means that with this approach the number of evaluations required towards the end of the RDO increases. To address this issue, an adaptive ROI can be implemented, which changes the ROI as the optimization proceeds and thereby helps to reduce the number of robustness evaluations at the end of the optimization. Implementing the adaptation requires monitoring all designs in terms of their location (within or outside the ROI), their robustness values, and the information on the optimum achieved so far. With this information, the number of robust designs within the ROI is known for each generation. If the number of robust designs within the ROI is less than 20 % - 50 % of the population, the ROI can be increased by taking a bigger offset. The optimum obtained so far also has to be taken into account while adjusting the ROI, to make sure that the ROI is not too small: the size of the ROI should be maintained at 2 or 3 times the real sigma value at all times, so that it accommodates the robust designs close to the constraint boundaries. This adaptation could, however, not be implemented in the optiSLang RDO workflow, as the required information cannot be obtained from the optimizer.

D. Validation Cases
To validate the proposed robust design approach, different analytical functions and simple impact cases were used. Some results are presented in the following paragraphs.

1.
Analytical function tests
To test the newly implemented RDO approach in optiSLang, some analytical function tests were used first. The first analytical function is shown in Figure 8. It has a global minimum at x = 0.78, y = -1.12323; however, to really exercise the RDO approach, the function is constrained at y = -1. With this constraint the new optimum is at x = 0.39, y = -0.8535, and we check whether the modified RDO can find this new robust optimum. Different offset distances were tested to see whether the size of the ROI has any effect on the performance of the new RDO approach. The results with different settings are given in Table 2. Most of the optimization runs were successful in finding the global optimum; in this case the size of the ROI did not affect the ability of the algorithm to find it.

Figure 8: Constraint offset and ROI for a 1-D problem.
Figure 9: An analytical function to test robust design.
The first analytical function was a general test of whether the global optimum of a constrained problem could be found. The second analytical function, shown in Figure 9, is more specific to the robustness test. The function has three local minima and one global minimum. A constraint on the function is set at y = -3.5, which eliminates the global minimum at x = 4.2; the constrained optimum is hence at x = 3.3. However, this peak is quite narrow, and we therefore expect this minimum to be non-robust. This leaves a robust optimum at x = 2.2. The ROI is defined by creating an offset boundary at y = -1. The results of different RDO runs for function 2 are listed in Table 3. They show that the algorithm managed to find the robust optimum at x = 2.2 in most cases. Since the validation of the approach on analytical functions was successful, we now move on to structural problems.

2. Small impact case
For test and validation purposes, a very simple impact case is used which is computationally cheap (simulation time approx. 2 minutes), Figure 10. The beam is formed of two similar profiles which are laser welded together, with a reinforcement inside the beam. The optimization parameters are the thicknesses of the individual profiles and the thickness and position of the reinforcement within the beam. The objective is to reduce the overall mass of the beam while respecting the deflection constraint, d ≤ 30 mm. An offset is created for the deflection at d = 20 mm, creating a ROI of size 10 mm, see Figure 11. The design variables and the uncertainty parameters are listed in Table 1.
Figure 10: A simple bending case.
Table 4 compares the parameters and the outputs of the initial and the optimum designs. The mass and the deflection improved by 21.7 % and 28.6 % respectively. The optimum was found after 2249 evaluations at the 14th iteration (149th design). Figure 11 shows the deflection history and the ROI. It can be seen from Figure 11 that at the beginning of the optimization many designs are outside the ROI, because the optimization algorithm is still exploring the design space. After about the 4th iteration, however, most of the designs fall within the ROI, i.e. the algorithm has entered its exploitation stage. In this case the computational efficiency was roughly 28 %: out of 2249 designs, 645 were not evaluated and received dummy values instead, because 43 of the 178 optimization designs were outside the ROI. Hence, as mentioned in the paragraph Adaptive ROI, an adaptive ROI can be implemented to further reduce the computational effort.
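The adaptive-ROI rule discussed above (grow the offset when fewer than 20 % - 50 % of the generation's robust designs fall inside the ROI, while keeping the ROI at roughly 2 to 3 times the real sigma) can be sketched in a few lines. The function name, the growth factor and the default threshold are assumptions for illustration:

```python
def adapt_offset(offset, n_robust_in_roi, population, real_sigma,
                 target_fraction=0.2, grow=1.5):
    """Adaptive ROI sketch: enlarge the offset if too few robust designs
    of the current generation fall inside the ROI, then clamp the offset
    to the band [2 * real_sigma, 3 * real_sigma] suggested in the text."""
    if n_robust_in_roi < target_fraction * population:
        offset *= grow
    return min(max(offset, 2.0 * real_sigma), 3.0 * real_sigma)

# Example: 3 of 30 robust designs inside the ROI -> the offset grows.
new_offset = adapt_offset(4.0, 3, 30, real_sigma=2.0)
```

Called once per generation, such a rule would keep the number of true robustness evaluations bounded as the population converges into the ROI.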
Thickness variables: Ttop, Tbottom, T. Position variable: P. Noise variables: Vo, Mo.

Variable      Ttop (mm)  Tbottom (mm)  T (mm)  P (mm)  Vo (m/s)  Mo (kg)
Lower bound   0.5        0.5           0.5     -20     2.12      7.9
Mean          1.0        1.0           1.0     0       2.22      8.0
Upper bound   2.0        2.0           2.0     20      2.32      8.1

Table 1: Design variables and uncertainty parameters overview.
E. Conclusion
The modified double loop RDO approach was successfully implemented in an existing optimization software package (optiSLang) and evaluated with several analytical functions and a simple impact case. In these test cases the approach managed to reduce the overall computational effort associated with robust design optimization. As discussed earlier, the efficiency of the approach can be enhanced further by implementing an adaptive ROI. One of the major difficulties with this approach is predicting an appropriate ROI, which is directly related to the activity of the performance constraints. Although this can be judged from some design runs before the optimization, formulating an exact rule for each optimization case is difficult. Hence, rather than using dummy robustness values for the designs outside the ROI, the approach could benefit from other approximation methods such as the Equivalent Static Loads Method (ESLM), Park (2011) and Kim et al. (2010). Substituting such an approximation method adds a physical relevance to the overall approach that the dummy robustness values cannot provide.
Figure 11: Deflection output history and the imposed offset distance.

V.
Overall Conclusion
In this paper two approaches to reduce the overall computational effort of double loop robust design optimization were presented. The first reduces the simulation time for each design via a sub-structuring approach. The second, a modified double loop RDO, reduces the number of design evaluations required for robust design optimization. For the application of the sub-structuring approach, the reader was referred to previous work from the authors; here, some of the validation cases for the modified RDO approach were presented. The implemented RDO reduces the computational effort in comparison to the classical double loop RDO. Coupling the two approaches to deal with industrial-sized RDO problems, however, needs further investigation. Some of the open questions for this coupling are: the use of the sub-model or the full model for the optimization points, the choice of algorithm, and the investigation of the ESLM for implementation in the overall RDO loop.
Acknowledgments
The authors would like to thank a large group of people for their help and support, in particular Hans Zimmer from SFE GmbH, Ronald Averill from Red Cedar Technology, Paul Sharp from Altair HyperWorks and Johannes Will from Dynardo GmbH.
References
Abramowicz W: Macro element fast crash analysis of 3D space frame. SAE International, 2006.
Altair Engineering: Radioss starter/engine manual, Ver. 11.0, 2012.
Avalle M, Chiandussi G and Belingardi G: Design optimization by response surface methodology: application to crashworthiness design of vehicle structures. Struct Multidisc Optim 24: 325-332, 2002.
Averill R: Efficient shape optimization of crashworthy structures using a new sub-structuring method. In 3rd German LS-DYNA Forum, Bamberg, Germany, 2004.
Averill R: Personal communication, February 2011.
Averill R, Goodman E and Sidhu R: Evolutionary computation in practice, ch. Multi-level decomposition for tractability in structural design optimisation. Springer, 2008.
Chiandussi G, Fontana R and Urbinati F: Design sensitivity analysis method for multidisciplinary shape optimization problems with linear and non-linear response. Eng Comput 15: 391-417, 1998.
Chiandussi G, Bugeda G and Oñate E: Shape variable definition with C0, C1 and C2 continuity functions. Comput Meth Appl Mech Eng 188(4): 727-742, 2000.
Chiandussi G and Avalle M: Maximization of the crushing performance of a tubular device by shape optimization. Computers & Structures 80: 2425-2432, 2002.
Dietrich T: PhD thesis, Technische Universität München, Munich, Germany, to be submitted 2013.
Duddeck F: Multidisciplinary optimization of car bodies. Struct Multidisc Optim 35(4): 375-389, 2008.
Duddeck F: Survey on robust design and optimisation for crashworthiness. Proc EUROMECH Coll 482 Efficient Methods for Robust Design and Optimisation, Queen Mary University of London, London, UK, 2007.
Duddeck F and Zimmer H: New achievements on implicit parameterization techniques for combined shape and topology optimization for crashworthiness based on SFE CONCEPT. Proc ICRASH Conf, Milano, Italy, 2012.
Farkas L, Canadas C, Donders S, Langenhove T van and Tzannetakis N: Optimization study of a parametric vehicle bumper subsystem under multiple load cases using LMS Virtual.Lab and OPTIMUS. 7th European LS-DYNA Conf, Salzburg, Austria, 2009.
Georgios K and Dimitrios S: Multi-disciplinary design optimization exploiting the efficiency of ANSA-LSOPT-META coupling. Proc 7th European LS-DYNA Conf, Salzburg, Austria, 2009.
Hamza K: Design for vehicle structural crashworthiness via crash mode matching. PhD thesis, University of Michigan, 2008.
Hou S, Li Q, Long S, Yang X and Li W: Design optimization of regular hexagonal thin-walled columns with crashworthiness criteria. Finite Elements in Analysis & Design 43: 555-565, 2007.
Hunkeler S, Duddeck F and Rayamajhi M: Robustness analysis and shape optimisation for crashworthiness of passenger cars with SFE CONCEPT. Proc 8th ASMO UK / ISSMO Conf on Engrg Design Optim, Queen Mary University of London, London, UK, 2010.
Kang Z: Robust design optimization of structures under uncertainties. PhD thesis, Universität Stuttgart, 2005.
Kim Y and Park G: Nonlinear dynamic response structural optimization using equivalent static loads. Comput Methods Appl Mech Engrg 199: 660-667, 2010.
Koch P, Yang R and Gu L: Design for six sigma through robust optimisation. Struct Multidisc Optim 26: 235-248, 2004.
Lee K and Bang I: Robust design of an automobile front bumper using design of experiments. Proc IMechE Part D: J Automobile Engineering 220(9): 1199-1207, 2006.
optiSLang: Ver. 3.2.1, Dynardo GmbH, Weimar, Germany, 2009, Documentation.
Park G: Technical overview of the equivalent static loads method for non-linear static response structural optimization. Struct Multidisc Optim 43: 319-337, 2011.
Rayamajhi M: A sub-structure approach for shape optimisation for crash. Pres 4th GACM Colloquium, Technische Universität Dresden, Dresden, Germany, 2011a.
Rayamajhi M: A new methodology for shape optimisation and its application for crashworthiness. Transfer thesis, Queen Mary University of London, London, UK, 2011b.
SFE CONCEPT: Ver. 4.2.5.2, SFE GmbH, Berlin, Germany, 2009, Reference Manual.
Sousa L, Veríssimo P and Ambrósio J: Development of generic multibody road vehicle models for crashworthiness. Multibody Syst Dyn 19: 133-158, 2008.
Stigliano G, Mundo D, Donders S and Tamarozzi T: Advanced vehicle body concept modelling approach using reduced models of beams and joints. Proc ISMA Conf, Belgium, 2010.
Stocki R, Rutjes N and Delcroix F: Methodologies and algorithms for robust optimisation. Tech Rep AP-SP72-0011-D, APROSYS, 2007.
Sun G, Li G, Zhou S, Li G, Hou S and Li Q: Crashworthiness design of vehicle by using multi-objective robust optimization. Struct Multidisc Optim 44: 99-110, 2011.
Volz K, Frodl B, Dirschmid F, Stryczek R and Zimmer H: Optimizing topology and shape for crashworthiness in vehicle product development. International Automotive Body Congress, Berlin, Germany, 2007.
Will J, Baldauf H and Bucher C: Robustness evaluations in virtual dimensioning of passenger safety and crashworthiness. Weimar Optimisation and Stochastic Days 3, Weimar, Germany, 2006.
Zimmer H, Prabhuwaingankar M and Duddeck F: Topology and geometry-based structure optimization using implicit parametric models and LS-OPT. Proc 7th European LS-DYNA Conf, Salzburg, Austria, 2009.
VI.
Appendix
The tables below show the results from the modified double loop RDO approach.

Run no.     ROI (y) LB  ROI (y) UB  Start Point  Optimum x  Design Number  Stagnation Reached‡‡
Normal RDO  NA          -1          0.61         0.39       2087           /
1           0           -1          0.39         0.39       1              /
2           0           -1          0.94         0.39       964            /
3           0           -1          0.53         0.39       483            /
4           0           -1          0.22         0.39       1763           /
5           0           -1          0.63         0.39       808            /
6           -0.5        -1          0.13         0.39       802            /
7           -0.5        -1          0.33         0.39       641            /
8           -0.5        -1          0            0.41       488            Y
9           -0.5        -1          0.73         0.38       1606           Y
10          -0.5        -1          0.46         0.39       969            /
11          -0.7        -1          0.79         0.38       10             Y
12          -0.7        -1          0.77         0.39       3              Y
13          -0.7        -1          0.35         0.72       10             Y
14          -0.7        -1          0.13         0.38       5              Y
15          -0.7        -1          0.8          0.39       3842           /

Table 2: Results of test function one with different ROI and start points.
Run no.  ROI (y) LB  ROI (y) UB  Start Point  Optimum x  Design Number  Stagnation Reached
1        -3.6        -1          4.4          2.2        803            No
2        -3.6        -1          4.2          2.2        490            No
3        -3.6        -1          2            2.2        969            No
4        -3.6        -1          0            2.1        2              Yes
5        -3.6        -1          0.5          2.2        164            No

Table 3: Results of test function two with varying start points.
Design          T (mm)  TB (mm)  TT (mm)  P (mm)  Mass (kg)  Deflection (mm)
Initial         -       -        -        -       1.2        37.4
Robust optimum  0.5     1.16     0.5      -15     0.94       26.7

Table 4: Comparison of initial design and robust design using modified RDO.
‡‡
Stagnation: No improvement in the objective for 10 generations.