Computer-Aided Tool Path Optimization for Single Point Incremental Sheet Forming

M. Bambach1, M. Cannamela1, M. Azaouzi2, G. Hirt1 and J.L. Batoz2

1 Institut für Bildsame Formgebung, Intzestr. 10, 52056 Aachen, Germany, [email protected], [email protected], [email protected]
2 GIP-InSIC, 27, Rue d'Hellieule, 88100 Saint-Dié des Vosges, France, [email protected], [email protected]
Summary. Asymmetric Incremental Sheet Forming (AISF) is a new sheet metal forming process for small batch production and prototyping. In AISF, a blank is shaped by the CNC movements of a simple tool. The standard forming strategies in AISF lead to severe thinning and an inhomogeneous wall thickness distribution. In this paper, several new types of forming strategies are presented that aim at a more homogeneous distribution of material. A forming strategy suitable for computer-aided optimization was identified by finite element analyses. A “metamodel” was constructed from 162 finite element calculations in order to test different optimization algorithms off-line for their performance: a Genetic Algorithm (GA), a Particle Swarm Optimization (PSO) algorithm and the simplex search method. The GA was found to be better at detecting the global optimum but lagged behind the PSO in terms of speed.
Key words: incremental sheet forming, forming strategies, finite element analysis, computer-aided optimization.
1 Introduction

Asymmetric Incremental Sheet Forming (AISF) is a sheet metal forming process that uses CNC technology to produce complex sheet metal parts. The conventional forming strategies in AISF are an adaptation of z-level surface machining: the part is split into a series of two-dimensional layers and the plastic deformation is accomplished layer-by-layer through the movements of a simple CNC-controlled forming tool. A layer at constant z-position is formed by an in-plane movement of the tool. On completion of each layer, the tool moves down a small increment along the z-axis and continues to process the subsequent layer until all layers are formed. In order to achieve a good surface quality, the step-down is usually very small, e.g. in the range of 0.2 mm.
Fig. 1. Process variants in AISF (SPIF: part supported by a rig; TPIF: full positive die; components shown: blank holder, blank, forming tool)
Generally, a distinction can be made between “Single Point Forming” (SPIF), where the bottom contour of the part is supported by a rig, and “Two Point Forming” (TPIF), where a full or partial positive die supports critical regions of the part (Fig. 1). (Amino et al., 2002) have realized a dedicated forming machine based on TPIF. The conventional forming strategy described above leads to the so-called “sine law” relation between the initial (t0) and final (t1) sheet thickness for a given wall angle α:

t1 = t0 sin(90° − α)   (1)

For example, for t0 = 1.5 mm and α = 60°, the wall thins to t1 = 1.5 mm · sin(30°) = 0.75 mm. As a consequence, the forming kinematics inherent in AISF entail the following drawbacks:
– The maximum wall angle is limited to 60–70 degrees for the most commonly used sheets of 1.0–1.5 mm thickness. This limitation restricts the potential scope of shapes and applications.
– The strong dependence on the feature angle can lead to an inhomogeneous thickness distribution in the final part.

1.1 State of the Art for Non-Conventional Forming Strategies in AISF

Attempts to mitigate the thinning limit were presented by several authors: (Kitazawa, 1993) introduced four kinds of multistage strategies for the multistage forming of a dome and compared the resulting sheet thickness distributions. (Junk et al., 2003) described a multistage strategy to produce a rectangular pyramidal frustum with vertical walls using TPIF. An optimization of the parameter settings for this strategy was performed experimentally by trial-and-error, yielding a forming strategy with an initial wall angle of 45 degrees and 15 intermediate stages with a 3 degree increase in wall angle per stage. Since a drawback of this strategy is the long forming time, it was further developed by (Hirt et al., 2005) into a strategy combining bending and stretch-deformation. With this strategy, complex industrial parts were realized, again by experimental trial-and-error. (Jeswiet et al., 2001) manufactured a car headlight
using scaled versions of the original CAD model as intermediate shapes. (Kim et al., 2000) presented an attempt to calculate the optimal intermediate shape in a two-stage forming process. (Giardini et al., 2005) put forward the idea of a two-stage forming process: using a large tool, a preform was created by punching straight down into the flat sheet; a smaller tool diameter was then used for the finishing stage. (Dai et al., 2000) show that the strain distribution of an axisymmetric cup can be improved by carefully balancing the action of the tool between areas with a high stiffness (edge of the cup) and regions with a low stiffness (centre of the cup). An early attempt to use computer-aided optimization in incremental hammering to optimize the part geometry was presented by (Mori et al., 1996). So far, no efforts have been made to parameterize forming strategies for AISF and to apply optimization algorithms to the problem of tool path optimization, neither experimentally nor using process models. In this paper, attempts are made to

– define a new class of fully parameterized strategies for SPIF,
– assess their potential to allow for a homogeneous thickness distribution,
– single out suitable strategies for computer-aided tool path optimization, and
– compare the performance of different optimization algorithms for the optimization of the sheet thickness distribution in SPIF.
2 New Forming Strategies for SPIF

2.1 Definition of Strategies

In this section, several types of non-conventional forming strategies are presented. The strategies aim at a more homogeneous distribution of wall thickness. They are motivated by the fact that the conventional z-level tool paths in SPIF only cover the side walls while leaving, e.g., the bottom of the workpiece undeformed. This type of strategy is shown in the upper part of Fig. 2. In order to also deform the flat bottom of the depicted part, a new “conical” strategy is proposed, where the tool movement starts at the centre of the part and opens up with increasing depth until the desired diameter at maximum depth is reached.

Fig. 2. Conventional and conical z-movement (labels: r0, α, d)

With this strategy, the shape of the side walls is determined indirectly by means of the final contour traced by the forming tool. For the in-plane movements, two new strategies are proposed (Fig. 3):
– A “contour” strategy: starting at the centre of the part, the tool moves along self-similar contours of increasing diameter until the edge of the part at the present depth is reached. The diameter at a given depth is prescribed by using either the conventional or the conical z-movement. This type of strategy uses a radial pitch “dr” to connect subsequent contours.
– A “radial” strategy: radial forming consists of radial, star-shaped movements. Each stroke starts at the centre of the part, i.e. after each radial movement, the tool moves back to the centre. The strategy is defined by means of a circumferential pitch “dϕ”.

Fig. 3. Contour and radial in-plane strategy (pitches dr and dϕ)

As indicated before, each of the two strategies for the z-movement can be coupled with each of the strategies for the in-plane movement, yielding four types of parameterized strategies. Restricting our attention to conical parts, the forming of a specific part can be specified by geometric features (radius of the part at z = 0 mm, cone angle and depth) and the combination of a z-movement (parameter dz) with an in-plane movement (parameter dr or dϕ). The conventional z-level strategy can be recovered by combining the “conventional” z-movement with a contour-type in-plane movement using a single radial pitch per z-level.

2.2 Experimental Results and Discussion

The feasibility of the proposed strategies has been tested using a single benchmark geometry and one generic parameter combination for each of the four strategy types. The selected geometry is an axisymmetric cup with a radius r0 of 60 mm, a wall angle of 80 degrees and a depth of 25 mm. The latter value was identified through test series with increasing forming depth as the depth at which the part starts to break with the conventional forming strategy. As
sheet material, DC04 blanks of 1.5 mm thickness were used. The tool diameter was 30 mm. A relatively large pitch of 5 mm per cycle was employed. For the two strategies involving radial punch strokes, a circumferential (angular) pitch of 4° was used. The combination of conventional z-movement and in-plane “contour” strategy used a radial pitch “dr” of 20 mm. Figure 4 shows a top view of the parts after forming. For evaluation of the thickness distribution, the parts were cut along a radial section, and the thickness was measured every 2 mm using a micrometer gauge (Fig. 5), starting from the centre of the part. The following conclusions can be drawn from the conducted experiments:

– All of the considered strategies make it possible to form the part, in contrast to preliminary tests using the conventional z-level strategy, which led to rupture.
– The strategies involving radial strokes allow for a considerable reduction of sheet thickness in the centre of the part. However, the forming time is quite high (∼9 min for the “cone/radial” and ∼40 min for the “conventional/radial” strategy).
– The “conventional/contour” strategy provides a thickness distribution comparable to those obtained with the radial movements at a reduced forming time (∼2 min).
– Although the “cone/contour” strategy yields a very short manufacturing time of less than 1 min, the sheet thickness could not be reduced in the centre of the part, probably because this strategy lacks the repetitive loading of this region. Furthermore, the part shows considerable springback and hence the largest deviation from the target geometry among the given strategies.
– While the large pitch values are possibly beneficial for the thickness distribution, a smooth surface would require a dedicated finishing stage.
Fig. 4. Top view of the parts manufactured using the four new strategies (cone/contour, cone/radial, conv./contour, conv./radial)

Fig. 5. Sheet thickness distributions for the four strategies (sheet thickness [mm] over distance along the section [mm])
3 Finite Element Analysis for the New Strategies

The experimental results show that the proposed strategies offer possibilities to influence the sheet thickness distribution of a given part. However, a homogeneous thickness distribution requires a rigorous optimization of the parameter settings, and a parameterization that encompasses the optimal strategy. Given the size of the parameter space, experimental trial-and-error optimization of the tool path should be avoided. In an attempt to assess the feasibility of computer-aided optimization with the new strategies, finite element analyses were performed for the “cone/contour” strategy, which has the shortest tool path among the investigated strategies. Based on earlier finite element analyses of the process (Bambach et al., 2005), the model was realized both in ABAQUS/Explicit and ABAQUS/Standard. To ensure that the FE calculations are comparable with the experiments, the tool paths that were used for the CNC machine were translated into the ABAQUS input file format using a dedicated MATLAB routine. A fictitious forming time of 5.31 s was calculated, which corresponds to the duration of the process in reality at full tool speed, i.e. if acceleration and deceleration are neglected. The DC04 sheet was modeled as an elasto-plastic material with isotropic hardening using material data obtained from tensile tests. The results of the FE analyses were tested against experimentally determined values for both sheet thickness and geometry. For the evaluation of the finite element calculations, the sheet thickness and the geometry are compared to experimental data along a radial section in positive x-direction for both the implicit and explicit FE models (Fig. 6).
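The translation routine itself is not given in the paper; the following is a minimal sketch of such a path-to-input conversion for a conical spiral path, assuming the tool position is imposed via amplitude tables. The function name, sampling density and file layout are illustrative assumptions, not the authors' original routine.

```matlab
% Minimal sketch: generate a conical spiral tool path and write it as
% ABAQUS *AMPLITUDE tables (time-position pairs at constant tool feed).
% All names and the sampling density are illustrative assumptions.
function spiral_path_to_abaqus(d, h, dz, feed, fname)
% d: final diameter [mm], h: depth [mm], dz: pitch per revolution [mm]
% feed: tool feed rate [mm/s], fname: ABAQUS include file to write
nrev = h/dz;                                    % number of revolutions
phi  = linspace(0, 2*pi*nrev, ceil(200*nrev))'; % angular parameter
z    = -h*phi/(2*pi*nrev);                      % linear step-down
r    = (d/2)*(-z/h);                            % radius grows with depth
x    = r.*cos(phi);   y = r.*sin(phi);
s    = [0; cumsum(sqrt(diff(x).^2 + diff(y).^2 + diff(z).^2))];
t    = s/feed;                                  % process time at constant feed
fid  = fopen(fname, 'w');
for c = {{'XAMP', x}, {'YAMP', y}, {'ZAMP', z}}
    fprintf(fid, '*AMPLITUDE, NAME=%s\n', c{1}{1});
    fprintf(fid, '%12.5f, %12.5f\n', [t.'; c{1}{2}.']);
end
fclose(fid);
end
```

The resulting tables could then drive the tool reference node through displacement boundary conditions that reference the amplitudes.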
Fig. 6. Comparison of calculated and measured part geometry (z [mm] over x [mm]; experiment, explicit FEA, implicit FEA)
As a measure of the quality of the calculated geometry, we determine the maximum normal distance dmax and the average normal distance dav between the experimental data and the FE results. To this end, the normal distance between the two sections (experimental data and FE calculation) is evaluated at 141 positions by constructing the normals on the experimental curve and computing their intersections with the FE data curve. The prediction of sheet thickness is judged by means of the maximum deviation dth,max between the experimental data and the FEA.

Mesh convergence studies were performed using the experimental data as reference. The blank was meshed with a uniform mesh of 2304 S4R shell elements, which turned out to be a good compromise between accuracy and calculation time. Similar tests were performed for the admissible amount of mass scaling for the explicit FEA. Using the “variable mass scaling” option, scaling to a time step of 10^-4 s was found to be a suitable choice. When compared to the more conservative mass scaling to an average time step of 10^-5 s, the results did not deteriorate considerably, but the calculation time increased from 30 minutes to more than three hours (see Table 1).

Since no data are available for friction coefficients in AISF, a sensitivity analysis was performed to assess the role of friction. In five analyses with friction coefficients of 0.0, 0.05, 0.1, 0.2 and 0.5, no considerable influence on the prediction of geometry and thickness could be found. Based on these results, and given that the sheet was well lubricated in the experiments, the friction coefficient was set to zero in the FEA.
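A simplified sketch of the geometric error measure defined above is given below. It approximates the normal distance by the distance from each experimental point to the nearest segment of the FE polyline, which is a simplification of the normal-construction procedure described in the text; variable names are assumptions.

```matlab
% Sketch: approximate d_max and d_av between an experimental section
% (xe, ze) and an FE section (xf, zf) by point-to-polyline distances.
function [dmax, dav] = section_distance(xe, ze, xf, zf)
n = numel(xe);  d = zeros(n, 1);
for i = 1:n
    p = [xe(i) ze(i)];  dseg = inf;
    for j = 1:numel(xf)-1                 % loop over FE polyline segments
        a  = [xf(j) zf(j)];  b = [xf(j+1) zf(j+1)];
        tt = max(0, min(1, dot(p-a, b-a)/dot(b-a, b-a)));  % clamped projection
        dseg = min(dseg, norm(p - (a + tt*(b-a))));
    end
    d(i) = dseg;
end
dmax = max(d);  dav = mean(d);            % maximum and average distance
end
```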
Table 1. Accuracy of explicit and implicit FEA

analysis type | average time step [s] | geometry dmax [mm] | geometry dav [mm] | thickness dth,max [%] | calculation time [h]
explicit      | 10^-4                 | 1.82               | 1.19              | 15.62                 | 0.48
explicit      | 10^-5                 | 1.67               | 1.13              | 15.10                 | 3.1
implicit      | –                     | 1.09               | 0.59              | 11.7                  | 7.32
In the implicit simulation, the mesh size and input data were chosen in accordance with the explicit simulation runs. The results are summarized in Table 1. Although providing very good results compared to the experimental data, the implicit analysis requires a relatively large number of increments. The increment size was adapted automatically by ABAQUS depending on the success of the iterations. An average of 4.5 iterations had to be performed for each time increment, two of which are so-called “severe discontinuity iterations”, which are used to achieve well-defined contact conditions. As a consequence, the analysis time is large even for the small benchmark part and the short tool path, so that an implicit analysis cannot presently be used for tool path optimization. A direct comparison between the thicknesses provided by the explicit and implicit FEA shows that the maximum difference occurs at the locus where the vertical pitch is performed. This is probably due to the fact that the tool transmits a high kinetic energy during the sudden change from in-plane movement to z-pitch. Plots of the reaction forces obtained by the implicit and explicit calculations corroborate this assumption (Fig. 7): large deviations between the obtained forces occur at those points in time where the vertical pitch is performed (see arrows in Fig. 7), while the results are in good agreement throughout the remainder of the calculations.
Fig. 7. Comparison between reaction forces Fx and Fz obtained by explicit and implicit FEA (z-level tool path; plotted over normalized process time)
Fig. 8. Comparison between reaction forces Fx and Fz obtained by explicit and implicit FEA (spiral tool path; plotted over normalized process time)
To avoid the influence of the step-down on the accuracy, additional explicit and implicit analyses were performed using a spiral tool path. For these analyses, the tool forces obtained by the implicit and explicit FEA are in good agreement (see Fig. 8).
4 Optimization

Based on the findings in the foregoing section it can be concluded that the success of computer-aided optimization depends largely on the proper design and parameterization of the tool path. The finite element analyses show that both accuracy and a short runtime can be provided by the “cone/contour” strategy if it is re-designed to a spiral tool path. In order to increase the number of parameters, it was decided to focus on a strategy consisting of three iterated applications of the spiral “cone/contour” strategy presented above. The strategy was parameterized by the dimensions of its intermediate forms; one depth hi and one diameter di describe one spiral “cone/contour” tool path that produces a conical intermediate form (Fig. 9). The third tool path was fixed in order to produce the same part geometry for all parameter combinations (D = 120 mm, H = 18 mm), resulting in four parameters per potential solution.
Fig. 9. Tool paths for the three-stage forming process to be optimized (stage 1: d1, h1; stage 2: d2, h2; final stage fixed: D, H)
The number of possible parameter combinations was decreased by allowing the di to vary only in steps of 10 mm and restricting the hi to multiples of 3 mm. In addition, the following constraints were imposed on the parameters:

d1 < d2 < D   (2a)
h1 < h2 < H   (2b)
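The admissible ranges for di and hi are not stated explicitly; the sketch below enumerates the feasible set under assumed ranges, so the resulting count need not reproduce the 162 combinations reported next.

```matlab
% Sketch: enumerate parameter combinations satisfying (2a) and (2b).
% The ranges below are assumptions (steps of 10 mm and 3 mm below D, H).
D = 120;  H = 18;
dvals = 10:10:D-10;   hvals = 3:3:H-3;
P = zeros(0, 4);                          % rows: [d1 d2 h1 h2]
for d1 = dvals
    for d2 = dvals
        for h1 = hvals
            for h2 = hvals
                if d1 < d2 && h1 < h2     % constraints (2a), (2b)
                    P(end+1, :) = [d1 d2 h1 h2]; %#ok<AGROW>
                end
            end
        end
    end
end
fprintf('%d feasible parameter combinations\n', size(P, 1));
```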
With these constraints, 162 parameter combinations were identified. FEM simulations were run for these combinations to span a solution space. A “metamodel” was built through linear interpolation between the resulting points, yielding a hypersurface in five dimensions. Thus, for an arbitrary point in the solution space, the value of the objective function, or “fitness”, can be quickly approximated. This allows for the rapid and repeated evaluation of the different methods needed to compare their performance.

The solution space is visualized in Fig. 10 and Fig. 11. In these figures, the first parameter was held constant and slices of the resulting volumetric space were taken at even intervals. The gray value is tied to the value of the function: black is the maximum of approximately 1.035 mm and white is a value of 0 mm. The initial sheet thickness is 1.5 mm, as in the analyses before. Two views are presented, showing the slices from two opposing sides. It is important to note how much black appears and how rapidly the transition to white occurs. The function varies quickly over a small area, but otherwise does not change much. This could mean that 162 values do not adequately describe the topology of the solution space, or the solution space itself could simply be flat. Summary tests indicate that the surface does approximate the actual solution space. A surface based on cubic interpolation of the data was also tested, but resulted in unacceptable overestimation of the sheet thickness.
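The metamodel can be reproduced along the following lines; the data file and variable names are assumptions, with P holding the 162 sampled parameter vectors and f the corresponding minimum thickness values from the FE runs.

```matlab
% Sketch: linear-interpolation metamodel over the scattered FE samples.
% P: 162x4 matrix of (d1, d2, h1, h2), f: 162x1 minimum thickness [mm].
load('fe_results.mat', 'P', 'f');          % assumed data file
fitness = @(p) griddatan(P, f, p);         % piecewise-linear interpolation in 4-D
disp(fitness([50 80 6 12]))                % query an arbitrary admissible point
```

Note that griddatan returns NaN for query points outside the convex hull of the samples, so queries should respect the constraints (2a) and (2b).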
Fig. 10. Visualization of the solution space, view 1 (slices at d1 = 40, 50, 60, 70 mm over d2, h1, h2; gray scale “fitness” = thickness [mm], black ≈ 1.04 mm, white = 0 mm)
The goal of the optimization was to find the tool path which distributes the material as evenly as possible throughout the part. As a consequence of volume conservation, the part with the largest possible minimum sheet thickness would have a perfectly homogeneous distribution of thickness. To make any point thicker would require taking material from another point, reducing the minimum sheet thickness. Thus, the fitness of a given tool path was taken to be the minimum value of thickness in the simulated part resulting from that tool path. Let

t(p) = {ti(p)}, i = 1, ..., n,  n = number of nodes,   (3)

be the set of nodal thicknesses for all nodes in the mesh, where

p = (d1, d2, h1, h2)   (4)

is the design variable. Then, in terms of optimization, a solution of the following problem is sought:

maximize min_i ti(p).   (5)
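In code, the objective of (5) reduces to taking the minimum over the nodal thicknesses returned by one FE run; the wrapper below is a hypothetical sketch, since the actual simulation driver is not part of the paper.

```matlab
% Sketch: fitness of a tool path p = (d1, d2, h1, h2) per eqs. (3)-(5).
% run_fe_simulation is a hypothetical wrapper around the ABAQUS model
% that returns the nodal thicknesses t_i of the finished part.
function f = toolpath_fitness(p)
t = run_fe_simulation(p(1), p(2), p(3), p(4));
f = min(t);            % eq. (5): the optimizer maximizes this value
end
```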
The objective function here is obtained through a simulation. It cannot be evaluated analytically, and so no information about its gradients is available without performing a huge number of simulations.
Fig. 11. Visualization of the solution space, view 2 (slices at d1 = 40, 50, 60, 70 mm over d2, h1, h2; gray scale “fitness” = thickness [mm])
The large solution space, the lack of gradient or other analytical information, and the use of simulation all suggest that stochastic or direct search optimization methods would be most appropriate for optimizing tool paths. The work that follows compares a few of these methods in an attempt to find a suitable algorithm for tool path optimization. Three different optimization methods were implemented, all in MATLAB: a Genetic Algorithm (GA), a Particle Swarm Optimizer (PSO), and the simplex method.

4.1 Optimization Algorithms

Genetic Algorithm

The genetic algorithm is a type of simulated evolution. The parameters of a problem are encoded into an array or “genome”, and from these parameters the corresponding fitness value is determined. An individual is a solution represented in this format and placed in a population of other individuals. Based on fitness, individuals are selected for genetic operations that produce new individuals. Two individuals might be combined in some way, called crossover, or one individual might have one or more genes altered in a mutation. This process of creating new solutions continues until the population has converged
on an optimum or some other termination criterion is met. The algorithm implemented here was written for MATLAB by (Houck et al., 1995).

During the course of calibrating the algorithm, it was concluded that maintaining diversity in the gene pool is important to prevent the algorithm from converging prematurely on a local optimum. Although there are many parameters to be considered, the selection mechanism seems to be most important for maintaining diversity. The default selection operator is normal geometric selection, which often picks just two or three individuals for the crossover and mutation processes. This breeds the poor solutions out in as few as one generation but can result in severe stagnation of the gene pool. A considerable amount of mutation is then required to balance this effect, which is “unnatural” in the sense that natural evolution relies more on the crossover of sufficiently disparate individuals to maintain diversity. In an effort to maintain diversity while using mutation in moderation, the selection mechanism of the original GA was modified. Under the new system, the strongest individuals first attempt to mate with each other with a certain probability of success; failing that, attempts are made to mate elite individuals with random ones from the rest of the population. If this, too, fails, two random individuals from the population are crossed over. After all the crossovers have been performed, members of the previous generation are replaced by the offspring at random; “fitness” represents the fitness to reproduce, not to survive. The motivation for this approach is to allow the elite individuals to persist through their offspring, while still allowing for the mixing of elite gene pools with weaker ones to ensure a more thorough exploration of the solution space. As an additional boon to exploration and diversity, a “migration” of randomly generated individuals is brought in to replace a percentage of the population every few generations. Some work has been done in applying GAs to similar problems, see (Mori et al., 1996) and (Schenk et al., 2004).

Particle Swarm Optimization

The particle swarm optimizer is another biologically inspired optimization algorithm. It mimics the behavior of flocks of birds or schools of fish searching for food. A group of particles, the swarm, is inserted into the solution space. Each particle then “flies” through the hyperspace to a new position. Each member of the swarm knows the best solution it has personally found so far and the best solution found by its nearest neighbors or, in some cases, the best solution visited by any member of the swarm. At each iteration, new velocities and positions for each particle are determined from equations of the form

v′ = c1 v + a c2 (pbest − x) + b c3 (gbest − x)   (6)
x′ = x + v′   (7)
where x and v are the current position and velocity, the primed terms are the new values, c1, c2 and c3 are positive constants, a and b are random numbers from zero to one, pbest is the particle's own best solution and gbest is the global best solution found so far.
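A minimal sketch of the resulting update loop for the four-dimensional tool path problem is given below; bound handling, initialization and the fitness handle (e.g. the metamodel query from Sect. 4) are assumptions.

```matlab
% Sketch: particle swarm optimizer implementing eqs. (6) and (7).
% fitness: handle returning the value to be maximized; lb, ub: 1x4 bounds.
function [gbest, gval] = pso_sketch(fitness, lb, ub, npop, niter)
c1 = 0.729;  c2 = 1.494;  c3 = 1.494;          % constants used in this study
dim = numel(lb);
x = repmat(lb, npop, 1) + rand(npop, dim).*repmat(ub - lb, npop, 1);
v = zeros(npop, dim);
pbest = x;
pval  = zeros(npop, 1);
for i = 1:npop, pval(i) = fitness(x(i, :)); end
[gval, ig] = max(pval);  gbest = pbest(ig, :);
for it = 1:niter
    for i = 1:npop
        a = rand(1, dim);  b = rand(1, dim);   % random weights in [0, 1]
        v(i, :) = c1*v(i, :) + a.*c2.*(pbest(i, :) - x(i, :)) ...
                             + b.*c3.*(gbest - x(i, :));        % eq. (6)
        x(i, :) = min(max(x(i, :) + v(i, :), lb), ub);          % eq. (7), clamped
        f = fitness(x(i, :));
        if f > pval(i), pval(i) = f;  pbest(i, :) = x(i, :); end
        if f > gval,    gval = f;     gbest = x(i, :);        end
    end
end
end
```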
The optimal values of the constants are not clear, but c1 is typically near 1 and c2 and c3 are close to 2. For this test, the parameters were set to c1 = 0.729 and c2 = c3 = 1.494, as recommended by (Clerc et al., 2002) and tested by (Trelea, 2003). The iterative process continues until convergence or termination.

Simplex Method

The simplex method differs from the PSO and GA in its approach. Rather than evolving a population of solutions in some way, a simplex is created around an initial guess in the solution space. The simplex is maintained and manipulated iteratively, always seeking to move the weakest vertex of the simplex to a better location. This continues until the simplex has converged to within some specified tolerance. The MATLAB function “fminsearch” was used to test the performance of the simplex method.

4.2 Comparison of Methods

When optimizing the hypersurface, the simplex method consistently ran into the nearest local optimum and stayed there, leveling off after an average of 43 function calls with a standard deviation of 16. In 30 trials, it did not find the global optimum once. Only if the initial guess was artificially set very close to the global optimum did it manage to find it. A t-test showed that the average best value found with the simplex method was significantly different from that of the worst-performing GA, albeit by only 0.01 mm. As a control, the simplex method was then tested against a random search of 43 function calls. The random search outperformed the simplex in this instance, with a t-test supplying weak evidence that this difference was significant.

To compare the PSO and the GA, the hypersurface approximating the objective function was optimized many times with each algorithm. One trial consists of a random initial population and 600 function calls. The algorithms were tested with population sizes of 10, 20, and 30. For each trial, the best value found was recorded. Additionally, the number of function calls made before the fitness reached 99.95 % of its steady-state value was noted; this will be termed the “settling time” in this discussion. The threshold was set so high because the difference from one point in the solution space to the next is very small. As a further consequence of the “flatness” of the solution space, the average value found was the same for both algorithms at all population sizes. The histogram in Fig. 12 shows the difference in speed between the algorithms at a population size of 20. The x-axis represents the “settling time”, the y-axis gives the frequency, i.e. how many times out of 30 trials the algorithm settled after a given number of function calls. The trend apparent here persists throughout the data for other population sizes, and t-tests in Table 2 show that the difference in speed between the PSO and the GA is significant at all population sizes.
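The settling-time measure can be made precise with a small helper; the 99.95 % threshold follows the text, while the history format (running best fitness after each function call) is an assumption.

```matlab
% Sketch: settling time as the first function call at which the running
% best fitness reaches 99.95 % of its steady-state (final) value. The
% running best is monotone, so the first crossing is also the last.
function n = settling_time(best_hist)
target = 0.9995 * best_hist(end);      % 99.95 % threshold from the text
n = find(best_hist >= target, 1, 'first');
end
```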
Fig. 12. Histogram of algorithm settling time in function calls (PSO vs. GA, population size 20; frequency out of 30 trials)
In addition to the best value found, the “success rate” was defined as the probability that the algorithm finds the global optimum. Chi-square tests suggest that for the PSO the success rate is rather insensitive to the ratio between population size and number of iterations, while for the GA smaller populations that are allowed more iterations are more successful. In a test where the algorithms with population size 30 were not limited in function calls, the GA and PSO were dead even in terms of success rate and best value found, but the PSO was still nearly twice as fast.

Table 2. Statistics summary

algorithm          | PSO     | GA      | PSO     | GA      | PSO     | GA      | simplex
population size    | 10      | 10      | 20      | 20      | 30      | 30      | –
mean settling time | 185.7   | 251.3   | 242.0   | 404.7   | 293.0   | 416.0   | 43.1
  std. dev.        | 121.8   | 120.4   | 130.3   | 130.2   | 120.2   | 141.31  | 16.4
mean best found    | 1.0343  | 1.0343  | 1.0341  | 1.0341  | 1.0341  | 1.0341  | 1.0192
  std. dev.        | 2.7E-04 | 3.6E-04 | 3.7E-04 | 5.0E-04 | 4.3E-04 | 5.0E-04 | 2.6E-02
success rate       | 0.23    | 0.40    | 0.10    | 0.33    | 0.17    | 0.13    | 0

4.3 Conclusions on Optimization

For this solution space, the GA was better at finding the global optimum but lagged behind the PSO in terms of speed. Furthermore, the PSO seemed to be insensitive to the ratio between population size and number of iterations, while the GA preferred smaller populations with more iterations. The simplex method was unable to prove itself better than a random search, so it is of little use for this problem. A good portion of the solution space turned out to be fairly “flat”; the five-dimensional analog of a three-dimensional plateau. Once
the algorithm gets on top of the plateau, there is nowhere else for it to go. This made it difficult to compare the algorithms in terms of the average best value found, but as already noted, a comparison of speed and success rate did produce statistically significant results. It must be noted, however, that other than population size no other parameters were varied for the PSO and the GA. Given the proper tuning, it is not inconceivable that either of these algorithms could outperform the other. Tuning these parameters is not trivial, however, and is not the only problem in applying these algorithms to an engineering problem of this nature. The PSO is often praised for its sparing use of parameters, and in fact (Clerc et al., 2002) have written a “parameterless” implementation where the user has only to define the problem; the PSO then adapts parameters like swarm size and topology as the algorithm progresses. This is in contrast to most GAs, which have a host of parameters that must be tuned by the user. So far, there is no a priori method for determining what might be a good parameter set for a given problem, and the optimal parameter set varies heavily from problem to problem.

Furthermore, engineering applications are bound by practicality. Population sizes must be small and convergence rates high if the objective function is an FEA calculation, as in this case. The average computation time for one such calculation on a 3 GHz machine is roughly 1 hour. With the algorithms settling in 200–400 function calls, one optimization would result in 200–400 hours of computation, and even that assumes it could be immediately recognized that the algorithm had settled. This is unacceptable, and immediate recognition of settling is itself a tall order; tests of PSOs on common benchmark functions are often allowed to run for hundreds or thousands of iterations. The advantage of a parameterless approach then becomes even more clear, as such computationally expensive procedures cannot be repeated just to tune the parameters. Furthermore, the fastest setting of the PSO took on average 185 function calls to settle, which is more than the 162 calls used to approximate the solution space in the first place. Perhaps a better alternative would be a synthesis of methods, using a PSO or GA to obtain a starting guess for a more localized search method like the simplex. Still, given the highly parallel nature of the PSO and GA, the bottleneck is not processor speed but the number of processors. Having even two processors would halve the computation time per iteration, and a larger cluster could make the computation time manageable. Another way to circumvent the computation barrier would be to depart from FEA altogether and develop a computationally cheaper model, such as the inverse approach (Batoz et al., 1998). Although limited compared to FEM, such models can give accurate results for strains, and that is all that is needed for an optimization of the thickness distribution.
5 Summary and Outlook

In the present paper, parameterized strategies for the optimization of sheet thickness distributions in single point incremental sheet forming are developed. The strategies were found to be a promising alternative to the
conventional z-level type tool path strategies. Finite element calculations of the strategies were used to identify a strategy that is suitable for computer-aided tool path optimization. A “metamodel” was constructed from 162 finite element calculations. With this representation of the solution space, different optimization algorithms were compared: a genetic algorithm (GA), a particle swarm optimizer (PSO), and the simplex search method. Statistically significant evidence indicates that the GA was better at finding the global optimum but was generally slower than the PSO. Due to the flatness of the solution space near the global optimum, many potentially optimal solutions seem to coexist, which caused the simplex method to always get stuck in a local optimum. Future work will largely focus on the parameterization of the tool path, as it determines the topology of the solution space and is therefore crucial to solving the optimization problem. Since full-scale finite element calculations of the process are time-consuming, simplified process models are currently being developed in order to allow for a faster evaluation of the outcome of a given forming strategy.
Acknowledgements

The authors would like to thank the German Research Foundation (DFG) for the funding received for the finite element analyses through project HI 790/5-1.
References

Amino H., Lu Y., Ozawa S., Fukuda K., Maki T., “Dieless NC Forming of Automotive Service Parts”, Proceedings of the 7th ICTP, Yokohama, 2002, p. 1015–1020.

Bambach M., Ames J., Azaouzi M., Campagne L., Hirt G., Batoz J.L., “New forming strategies for single point incremental sheet forming: experimental evaluation and numerical simulation”, Proceedings of the 8th ESAFORM Conference on Material Forming, Cluj-Napoca, 2005, p. 671–674.

Batoz J.L., Guo Y.Q., Mercier F., “The Inverse Approach with Simple Triangular Shell Elements for Large Strain Predictions of Sheet Metal Forming Parts”, Engineering Computations, vol. 15, no. 7, 1998, p. 864–892.

Clerc M., Kennedy J., “The particle swarm: explosion, stability and convergence in a multi-dimensional complex space”, IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, 2002, p. 58–73.

Giardini C., Ceretti E., Attanasio A., “Further Experimental Investigations and FEM Model Development in Sheet Incremental Forming”, Advanced Materials Research, vols. 6–8, 2005, p. 501–508.

Hirt G., Ames J., Bambach M., “A new forming strategy to realise parts designed for deep-drawing by incremental CNC sheet forming”, Steel Research, vol. 76, no. 2/3, 2005, p. 160–166.
Houck C.R., Joines J.A., Kay M.G., “A genetic algorithm for function optimisation: a MATLAB implementation”, Technical Report NCSU-IE TR 95-09, North Carolina State University, 1995.

Jeswiet J., Hagan E., “Rapid Prototyping of a Headlight with Sheet Metal”, Proceedings of the 9th International Conference on Sheet Metal, 2001, p. 165–170.

Junk S., Hirt G., Chouvalova I., “Forming Strategies and Tools in Incremental Sheet Forming”, Proceedings of the 10th International Conference on Sheet Metal, 2003, p. 57–64.

Kim T.J., Yang D.Y., “Improvement of formability for the incremental sheet metal forming process”, Int. J. Mech. Sci., vol. 42, 2000, p. 1271–1281.

Kitazawa K., “Incremental Sheet Metal Stretching-Expanding with CNC Machine Tools”, Proceedings of the 4th ICTP, 1993, p. 1899–1904.

Dai K., Wang Z.R., Fang Y., “CNC incremental sheet forming of an axially symmetric specimen and the locus of optimization”, Journal of Materials Processing Technology, vol. 102, 2000, p. 164–167.

Mori K., Yamamoto M., Osakada K., “Determination of hammering sequence in incremental sheet metal forming using a genetic algorithm”, Journal of Materials Processing Technology, vol. 60, no. 1–4, 1996, p. 463–468.

Schenk O., Hillmann M., “Optimal design of metal forming die surfaces with evolution strategies”, Computers and Structures, vol. 82, no. 20–21, 2004, p. 1695–1705.

Trelea I.C., “The particle swarm optimisation algorithm: convergence analysis and parameter selection”, Information Processing Letters, vol. 85, 2003, p. 317–325.