Comput Optim Appl (2012) 53:903–931 DOI 10.1007/s10589-012-9463-1
Computational optimization strategies for the simulation of random media and components Edoardo Patelli · Gerhart I. Schuëller
Received: 5 April 2011 / Published online: 9 February 2012 © Springer Science+Business Media, LLC 2012
Abstract  In this paper efficient computational strategies are presented to speed up the analysis of random media and components. In particular, a Hybrid Stochastic Optimization (HSO) tool, based on the synergy between various algorithms, i.e. Genetic Algorithms, Simulated Annealing as well as a Tabu-list, is suggested to reconstruct a set of microstructures starting from probabilistic descriptors. A subsequent analysis (e.g. Finite Element analysis) can then be performed to obtain the desired macroscopic quantity of interest, thus providing a link between the micro- and the macro-scale. Different computational speed-up strategies are also presented. The proposed simulation approach is highly parallelizable, flexible and scalable. It can also be adopted in other fields where an optimization analysis is required and a set of different solutions has to be identified in order to perform computational experiments. Numerical examples demonstrate the applicability of the proposed strategies to realistic problems.

Keywords  Simulation · Optimization techniques · Parallel computing · Soft computing · Super-elements · Random heterogeneous media

E. Patelli · G.I. Schuëller
Institute of Engineering Mechanics, University of Innsbruck, Technikerstraße 13, 6020 Innsbruck, Austria

Present address: E. Patelli, School of Engineering, University of Liverpool, Brodie Tower, L69 3GQ Liverpool, UK

1 Introduction

Nowadays computational approaches are used to characterize, predict, and simulate new components, media, etc. and to reduce development cycle times and costs.
The problems addressed by numerical analysis become increasingly demanding, both in terms of efficiency and speed. For instance, material scientists aim to adopt the so-called bottom-up approach, i.e. controlling the configurations at the microscopic level, for designing, prototyping, and for accelerating the development cycle of e.g. advanced materials technology through inverse optimization techniques [5, 19]. Predicting the performance of these components can be involved because the material configurations often contain thousands of input parameters which are not known precisely, and the configuration at the microscopic level can be characterized only statistically (e.g. by means of small angle X-ray scattering and neutron scattering techniques, etc.). These materials are often named random heterogeneous materials or random media.

Despite the continuous improvements in the field of the reconstruction of random materials (see e.g. [15, 16]), computational analysis is still a necessity because a theory which would unambiguously define the class of systems that can be uniquely reconstructed, i.e. based on limited microstructure information, does not yet exist (see e.g. [28, 32]). This means that the uncertainty in the knowledge of the microstructure limits the analytical study of random media. Nowadays, the relationships between the microstructure information and some macroscopic properties of the components or media are often established by pattern recognition and classification techniques (see e.g. [30]). These methods may prove particularly useful within the context of large scale computational approaches by providing tools to identify the critical microscopic configurations. However, the training phase of these tools requires large data-sets rather than only a few samples of the configuration at the microscopic level.

The scope of this work is to present an efficient and highly parallelizable tool which combines Genetic Algorithms (GA), Simulated Annealing (SA) and a Tabu-list with different numerical strategies to speed up the analysis of random media and components whose configurations at the microscale can be characterized only statistically. The proposed approach is used to generate a database of samples, as shown schematically in Fig. 1. Then, computational experiments can be performed on the reconstructed samples to obtain the desired macroscopic properties (effective properties), providing a link between the micro- and the macro-scale. Moreover, the generated database of samples can be used, for instance, in the training phase of the classification tools. The simulation strategies presented here are very flexible and can be applied to solve different optimization problems in various other fields as well.
2 Modelling random media

2.1 General remarks

In the most general sense, random media or random heterogeneous materials are composed of different materials and the exact distributions of the constituents are unknown. The uncertainty in the microstructures derives from the inability to deterministically characterize the materials at the micro-scale, e.g. due to fluctuations in process conditions during manufacture. In this paper, the microscale is defined as the
Fig. 1 Schematic representation of the analysis of random media adopting the Hybrid Stochastic Optimization (HSO) tool. The configurations of the media at the microscopic level are represented by the statistical descriptors that are used to define the optimization problem. The HSO tool and the speed-up strategies are adopted to generate a database of configuration samples, and subsequent analyses can be performed to compute the quantity of interest (effective properties)
length scale of a single heterogeneity, within which the material appears to be uniform. Obviously, the material may appear uniform at different length scales according to the application. As an example, in the case of elastic wave propagation in heterogeneous media, the length scale of a single heterogeneity depends on the wavelength. Random materials are adopted by engineers to “design” composite materials with macroscopic properties that are better than the properties of each individual constituent material. The performance of these materials depends, obviously, on the properties of the constituent materials but also on the distribution of the components, in other words on their microstructure configuration. Therefore, by controlling the microstructure it is possible to produce high-performing composite materials, the so-called “material-by-design” [20]. In order to perform a realistic analysis of these materials, the models should accurately capture the uncertainty associated with the microstructure. Then, the analysis proceeds with the digital reconstruction of the material microstructure. This is an intriguing inverse problem which can be cast as an optimization problem, as shown in Sect. 3. Finally, the macroscopic quantities of interest of the reconstructed material are computed.
2.2 Microstructure descriptors

To fully understand and characterize random media, an accurate representation of their microstructures is necessary. The most common approach to characterize the underlying morphology of random media is based on the knowledge of the statistical descriptors (e.g. correlation functions) of the digitized image of the configuration at the microscopic level [6, 21, 24, 32].

The microstructure can be characterized by means of the generalized indicator function, $I^{(j)}(\mathbf{x})$, of $V^{(j)}$, where each phase $j$ ($j = 1, \ldots, n_j$) occupies a disjoint subdomain $V^{(j)} \subset \mathbb{R}^3$. The indicator function, adopted to fully describe the configuration of the phases within the medium, is defined as:

$$I^{(j)}(\mathbf{x}) = \begin{cases} 1 & \text{if } \mathbf{x} \in V^{(j)} \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

and represents a random field which satisfies $\sum_{j=1}^{n_j} I^{(j)}(\mathbf{x}) = 1$ for all $\mathbf{x} \in V$.

Various types of correlation functions can be adopted to characterize the morphology. Some examples are: the n-point correlation functions, $S_n^{(i)}$ (see Sect. A.1); the lineal path, $L^{(i)}$ (see Sect. A.2); the radial function, $R^{(i)}$ (see Sect. A.3); and the cluster correlation function, $C_n^{(i)}$. It is clear that a full characterization of the random media requires an infinite amount of information (i.e. the knowledge of all the finite-order distributions). Therefore in practice, only limited information on $I(\mathbf{x})$ is available, in the sense that it is limited to certain moments or marginals.

It has been recently shown [14] that, in general, the two-point cluster correlation function, $C_2^{(i)}$, contains a much greater level of information than either $S_2^{(i)}$ or $L^{(i)}$. Hence, it allows a better representation of the microstructure. Yet, the necessity to identify the clusters for all the candidate solutions significantly increases the computational effort required to evaluate this kind of correlation function. Since the main focus of the paper is on efficient optimization procedures, only correlation functions that require less computational effort have been used.

2.3 Macroscopic analysis

For the computation of the effective or equivalent (mechanical, thermal and/or electrical) properties of materials with complex microstructures, numerous homogenization techniques have been developed (see e.g. [1]). Since analytical solutions are limited to relatively simple geometries and constitutive relations, the Finite Element (FE) method is the most frequently used technique to solve this class of problems. These simulations are carried out by starting from the digital image of the reconstructed medium so that the entire digital lattice is treated as the FE mesh (see Sect. 5.3). In some cases dedicated models to evaluate particular quantities of interest can also be used. For instance, a Monte Carlo model can be used to simulate the diffusion of particles through a porous medium as shown in Sect. 5.2.
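To make the indicator-function notation of (1) concrete, the following minimal Python sketch (an illustration written for this text, not the authors' code; the array name `img` and the helper `indicator` are hypothetical) builds the phase indicator arrays of a small digitized two-phase medium and checks that they partition the domain:

```python
import numpy as np

# Hypothetical digitized two-phase medium: 0 = void phase, 1 = matrix phase.
rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.75).astype(int)   # roughly 75% matrix, 25% voids

def indicator(img, phase):
    """Generalized indicator function I^(j)(x) of Eq. (1) for a digitized image."""
    return (img == phase).astype(int)

I0 = indicator(img, 0)           # voids
I1 = indicator(img, 1)           # matrix

# The indicator functions partition the domain: sum_j I^(j)(x) = 1 for every pixel.
assert np.all(I0 + I1 == 1)

# The volume fraction of phase j is the spatial average of I^(j).
print("phi_0 =", I0.mean(), " phi_1 =", I1.mean())
```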
3 Hybrid stochastic optimization tool

3.1 General remarks

Stochastic optimization techniques are widely used because they are global search techniques; they can be applied to discrete problems; they parallelize well; and, most important, they have been shown to work well in a number of different applications. In general, it is difficult to say that one heuristic method is better than another. It may well be that the best strategy is to import a number of search ideas from different types of heuristic search methods and to implement them in a new hybrid method, as suggested here. The suggested approach combines three stochastic procedures as shown in Fig. 2: Genetic Algorithms (GA), Simulated Annealing (SA) and a Tabu-list. GA are used to identify the most promising solutions; then the identified solutions are refined by the SA. The Tabu-list is used in the selection procedure of the candidate solutions by GA and by SA. Furthermore, the Tabu-list allows the inclusion not only of statistical constraints in the reconstruction process but of physical constraints as well.

3.2 Objective function

Any optimization problem requires the definition of an objective function. For the reconstruction of the microscopic configuration of random media, the quantity that has to be minimized is the difference between the probabilistic descriptors of the reconstructed configuration and the probabilistic descriptors of the target material (see e.g. [35, 36]).

More specifically, let $f_s^k$ and $f_0^k$ be the k-th probabilistic descriptor for the reconstructed digitized system and the target system, respectively. The k-th probabilistic descriptor can represent any type of correlation function (see the Appendix). For example, $f_s^k \equiv S_n^{(j)}$ represents the n-point correlation function for the j-th phase. In the reconstruction process, the system configuration evolves towards $f_0^k$ from an initial guess. The objective function, indicated with the symbol $E$, adopted to reconstruct the material microstructure is:

$$E = \sum_k \alpha_k \left( f_s^k - f_0^k \right)^2 \qquad (2)$$
where for simplicity the dependence of the probabilistic descriptors on the position has been omitted. The parameter $\alpha_k$ is an arbitrary weight assigning the relative importance to each individual k-th correlation function that contributes to the total energy, $E$.

3.3 Genetic Algorithms

GA are global search tools that operate on a group of solutions (a population) rather than on a single solution. This characteristic implies that they are much more likely to locate a global peak than traditional techniques, because they are much less likely to
Fig. 2 Scheme of the HSO approach. Stage I: Genetic Algorithms (GA) and Tabu-list are used to identify the most promising solutions. Stage II: Simulated Annealing (SA) and Tabu-list are used to refine the solutions identified during stage I. The parallelization of the optimization tools adopting a master/workers paradigm is also shown
get trapped in local minima. Also, due to the parallel nature of the stochastic search, the performance is much less sensitive to the initial conditions, and hence the convergence time is quite predictable. GA are adopted in the first stage of the optimization to identify the most promising areas in the phase space using a limited number of generations. In fact, it has been shown that they are very efficient in identifying good solutions but slower than traditional techniques in identifying the global optimum [25]. The GA contain two main parts: the generation of the initial population and the generation of successive populations by crossover and mutation of individuals selected from the current population. The generation of candidate individuals is the sequential part of the GA and is performed by the so-called GA master node. However, the most computationally intensive aspect of the optimization is the calculation of the objective function (fitness) for each candidate solution. Therefore, the GA are well suited to implementation on parallel architectures because the computation of the fitness of the candidate solutions can easily be distributed across the processors of the system, as detailed in Sect. 4.2.
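As an illustration of how the fitness evaluation can be farmed out by a master to a pool of workers (a sketch under the assumption that the objective function of (2) is available as a plain Python function; the names `objective` and `evaluate_population` are introduced here for illustration and are not part of the authors' implementation):

```python
import numpy as np
from multiprocessing import Pool

def objective(candidate):
    # Placeholder for Eq. (2): weighted squared difference between the correlation
    # functions of the candidate and those of the target. A dummy quadratic
    # function stands in for the real descriptor mismatch.
    return float(np.sum(candidate ** 2))

def evaluate_population(population, n_workers=4):
    """Master node: distribute the fitness evaluation of each individual to workers."""
    with Pool(processes=n_workers) as pool:
        fitness = pool.map(objective, population)     # one task per individual
    return np.asarray(fitness)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    population = [rng.normal(size=10) for _ in range(20)]   # 20 candidate solutions
    print(evaluate_population(population))
```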
3.4 Simulated Annealing

The SA is adopted in the second stage of the reconstruction procedure to refine the solutions already identified by the GA, i.e. each individual of the final population of the GA represents an initial configuration for the SA optimization. SA is a quite accurate procedure for finding a state of minimum “energy” among a set of many local minima by appropriately manipulating the phases of pixels in the digitized system, for instance interchanging the states of two arbitrarily selected pixels of different phases. Here, an implementation similar to the original approach shown in Refs. [35, 36] has been adopted. After each pixel manipulation the energy $E'$ and the energy difference $\Delta E = E' - E$ between two successive states of the system are calculated. Then, the probability, $p(\Delta E, T)$, of making the transition from the current configuration to the candidate new configuration is computed as follows:

$$p(\Delta E, T) = \begin{cases} 1 & \text{if } \Delta E \le 0 \\ \exp(-\Delta E / T) & \text{otherwise} \end{cases} \qquad (3)$$

This probability depends on the energy associated with each configuration and on a global time-varying parameter $T$ called the temperature. In this work small values of the initial $T$ and a limited number of iterations have been used (as shown in Tables 1, 2 and 3). Hence, the SA has not been used as a global optimization tool but as a semi-local optimization method. In fact, the optimization is performed starting from good initial solutions (identified by the GA) and hence the exploration of the whole search space is not necessary.

3.5 Tabu-list

Another important component is the Tabu-list (also spelled Taboo-list) technique adopted during stage I (combined with GA) and stage II (combined with SA). It is used to avoid (a) undesirable/infeasible microstructure configurations in the reconstruction process due to physical constraints and (b) equivalent microstructure configurations, and (c) to prevent certain moves. If a candidate solution is “Tabu”, e.g. it does not satisfy the physical constraints or an equivalent solution has already been tested, it is immediately discarded without evaluating the corresponding objective function. Although keeping track of the previously visited solutions can be quite time consuming, the Tabu-list technique generally produces a faster convergence of the GA due to an increase of the genetic diversity of the population. In fact, the lack of such diversity would lead to a reduction of the search space spanned by the GA and, consequently, to a degradation of its optimization performance, with the selection of mediocre individuals resulting in premature convergence to a local minimum.

Theoretically, a Tabu-list can be generated taking into account the values of each pixel/voxel of the microstructure. However, storing a complete description of the last visited solutions and testing for each candidate solution whether it is recorded in the Tabu-list is rather time intensive. For this reason, the Tabu-list is constructed from equivalent solutions. Equivalent solutions are defined as microstructure configurations that are a space translation or rotation of another solution. A very efficient
Fig. 3 Examples of two equivalent microstructures identified by means of the Fast Fourier Transformation (FFT). The units of the vertical and horizontal axes are pixel numbers. For the figures on the left hand side the pixels represent the spatial domain, while for the figures on the right hand side the pixels represent the frequency domain. The images on the left hand side show two digitized (equivalent) microstructure samples and those on the right hand side the corresponding moduli of the FFT
way to identify these equivalent microstructure configurations is to analyse the modulus of the Fast Fourier Transformation (FFT) of the microstructure. In fact, thanks to the Fourier shift theorem, a space transformation (i.e. translation or rotation) of a microstructure corresponds to a linear phase shift in the frequency domain and does not affect the modulus. Figure 3 shows an example of two equivalent microstructure configurations (left hand side) and the corresponding moduli of the FFTs (right hand side). Since the two microstructure configurations are equivalent, the resulting moduli of the FFTs are identical. The color levels of the FFT are proportional to its intensity; briefly, the intensity indicates how strongly a certain frequency component is present. The magnitude of the FFT is used to identify equivalent solutions. A reconstructed configuration
is marked as Tabu when the difference between its FFT modulus and the FFT modulus of the other configurations is lower than a predefined threshold.
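A minimal Python sketch of such an FFT-modulus comparison, under the assumptions of this section (the tolerance value and the function name `is_equivalent` are illustrative choices, not the authors' implementation), is:

```python
import numpy as np

def fft_modulus(img):
    """Modulus of the 2D FFT; invariant under a circular translation of the image."""
    return np.abs(np.fft.fft2(img))

def is_equivalent(candidate, visited_moduli, tol=1e-3):
    """Mark a candidate as Tabu if its FFT modulus matches a stored one."""
    m = fft_modulus(candidate)
    for m_old in visited_moduli:
        if np.linalg.norm(m - m_old) / np.linalg.norm(m_old) < tol:
            return True          # equivalent (e.g. translated) configuration
    return False

rng = np.random.default_rng(2)
img = (rng.random((32, 32)) < 0.5).astype(float)
shifted = np.roll(img, shift=(5, 7), axis=(0, 1))    # translated copy of the sample

visited = [fft_modulus(img)]
print(is_equivalent(shifted, visited))   # True: the shift leaves the modulus unchanged
```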
4 Speed-up of the modelling analysis

4.1 General remarks

The computer industry has been rapidly transitioning from CPUs with a single processing core to multi-core configurations. Furthermore, in a matter of just a few years, general-purpose computation on Graphics Processing Unit (GPU) systems is already outperforming CPU-only clusters in many fields (see e.g. [29]). The multi-core CPUs and the GPUs can be linked, forming a heterogeneous computing network, i.e. a computing grid. Generally speaking, high performance computing is widely available. Hence, efficient algorithms must take advantage of the large number of available cores and of the heterogeneity of the computing grid, i.e. submit a specific task to a specific hardware, in order to reduce the wall-clock time significantly.

4.2 Master/workers paradigm

The HSO tool implements a master/workers paradigm: the master node generates tasks to be processed by the workers, and each worker receives a job from the master, computes a result and sends it back to the master. The master/workers paradigm is well suited to adaptive dynamic environments: when a new machine (i.e. node) becomes available, a worker can be started there; when a node becomes busy, the master gets back the pending work which was being computed on this node, to be computed on the next available node. In this paper, an open source job manager (http://sourceforge.net/projects/gridscheduler/) is used to create a dynamic adaptive scheduling of tasks, monitoring the availability of nodes and automatically distributing the tasks on the available machines. The node availability is identified adopting several load indicators such as the CPU utilization, the load average, the number of users logged in, etc.

Different parallelization strategies are implemented for the identification of the most promising solutions (stage I) and for their refinement (stage II), as shown in Fig. 2. In the first stage, the parallelization is obtained by creating n different jobs, where n represents the number of individuals in the GA population. The created jobs are then distributed on the available machines by the job manager. The master implements a central memory through which all communication to the workers passes. It captures the global knowledge acquired during the search. In other words, the master has access to the entire population and has the main task of proposing the individuals for the successive generation. This is the serial component of the GA, but the computational effort is generally negligible compared to the evaluation of the objective function (see (2)). In the second stage, each solution is refined independently by means of SA-workers on the different nodes of the computing grid, as shown in Fig. 2. The SA procedure is a typical sequential algorithm, although it can also be parallelized using, for instance, the concurrency technique of “speculative computation”
(see e.g. [23]). Unfortunately, this kind of parallelization is problem dependent and, hence, it is not possible to guarantee its efficiency. The evaluation of the objective function can be processed by one or more sub-workers. Thus, each sub-worker evaluates only a single correlation function or a portion of it (i.e. the correlation function of a sub-domain), as detailed in Sect. 4.3. The Tabu-list is parallelized by distributing the list over different processors. The Tabu workers communicate via the Message Passing Interface (MPI). The MPI communication implements send and receive operations that can fully overlap with computations. If a solution is marked as “Tabu”, the worker sends a stop message to the other workers and frees the resources. This implementation proves particularly helpful in the later generations, where the list can become quite long, since each processor has to handle only a small portion of the entire list.

4.3 Domain decomposition approach

A further speed-up of the evaluation of the correlation functions is obtained by splitting the original digital image into a number of separate sub-domains that are processed independently on different CPUs/GPUs. This approach is also known as the “divide and conquer” strategy. Figure 4 shows the computational effort required to evaluate the radial distribution function, g(r), for different domain sizes (numbers of particles) and different numbers of sub-domains. The ideal number of sub-domains depends on the domain size, on the number of available processors/cores and on the latency time of the grid computing system, i.e. the time required by the job manager to start parallel
Fig. 4 Computational time of the evaluation of the radial distribution function adopting the domain decomposition strategy. The bars represent the minimum and maximum computational time required to evaluate the correlation function due to the latency time of the grid computing
Fig. 5 Computational time and the corresponding speed-up obtained evaluating the Fast Fourier Transformation on the Nvidia 9500GT GPU and on the Intel Pentium D CPU, respectively
jobs. For instance, in the second numerical example (see Sect. 5.3) with 14287 particles (fibers) and 10 sub-domains, a speed-up of 12.8 has been obtained in the computation of the radial distribution function.

4.4 FFT and general purpose GPU

The wall-clock time can be further reduced if efficient algorithms are used to estimate the correlation functions. To date, the most efficient way to compute the 2-point correlation functions is to adopt the FFT. It is a well known and widely used tool in many scientific and engineering fields and is essential for many image processing techniques, including filtering, manipulation, correction, and compression. For periodic ergodic media, it requires only $O(N \ln N)$ instead of $O(N^2)$ operations, adopting the following formula:

$$S_2^{(i,j)}(\mathbf{r}) = \mathrm{IFFT}\{\mathrm{FFT}[I^{(i)}(\mathbf{r})]\,\overline{\mathrm{FFT}[I^{(j)}(\mathbf{r})]}\} \qquad (4)$$
where IFFT is the inverse of the FFT and the bar represents the complex conjugate. The FFT and the IFFT can be efficiently evaluated on GPUs. In fact, GPUs are, in general, much more efficient than CPUs, as shown in Fig. 5. The figure shows the computational speed-up that can be obtained by evaluating the FFT on an entry-level GPU. It is important to note that compiled codes are not portable across GPUs from different vendors since they require different libraries. For instance, NVIDIA GPUs adopt the CUDA (Compute Unified Device Architecture) drivers [22] (freely available at http://www.nvidia.com/object/cuda_get.html), while computation on AMD/ATI GPUs requires different libraries (see http://developer.amd.com/gpu/).
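As an illustration of (4), the following NumPy sketch estimates the two-point correlation function of a periodic digitized medium on the CPU (the normalization by the number of pixels and the function name `s2_fft` are choices made here for illustration and are not taken from the paper):

```python
import numpy as np

def s2_fft(img, phase_i, phase_j):
    """Two-point correlation S2^(i,j)(r) of a periodic digitized medium via Eq. (4)."""
    Ii = (img == phase_i).astype(float)
    Ij = (img == phase_j).astype(float)
    # Cross-correlation theorem: multiply one transform by the conjugate of the other.
    s2 = np.fft.ifft2(np.fft.fft2(Ii) * np.conj(np.fft.fft2(Ij))).real
    return s2 / img.size          # normalize so that S2^(i,i)(0) equals phi_i

rng = np.random.default_rng(3)
img = (rng.random((128, 128)) < 0.75).astype(int)

s2 = s2_fft(img, 1, 1)
print("phi_1        =", (img == 1).mean())
print("S2^(1,1)(0)  =", s2[0, 0])           # equals the volume fraction
print("S2 at large r ~", s2[64, 64])        # tends to phi_1^2 without long-range order
```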
With the availability of such libraries, which provide the essential high-level programming tools such as compilers, debuggers, math libraries, and application platforms, GPUs effectively become powerful, programmable open architectures like today’s CPUs, allowing a programmer to use the C programming language to code algorithms for execution on the GPU. The execution on the GPU is strictly controlled from the host side (CPU). In fact, the host CPU allocates GPU memory for input and output data, and copies input data from the host system memory onto the GPU external memory; then it launches a computation on the GPU and finally copies the results from the GPU memory back onto the host system memory for output or further processing.

The main drawback of the GPU used for this work is that the operations were available in single precision only (i.e. each value occupies 4 bytes, 32 bits on modern computers), and the precision of division/square root is slightly lower than the single precision available on the CPU. However, in these applications very accurate results of the FFT are not required since the hybrid optimization tool can handle noisy objective functions. It is also important to point out that double precision operations have been added to the most recent GPU architectures (e.g. GPUs supporting CUDA compute capability 1.3 and above).

4.5 Superelement technique

The evaluation of the quantities of interest of the reconstructed media (e.g. effective properties) is generally performed by means of FE analysis, which can be accelerated by adopting the superelement technique. Superelements (or substructures) are groupings of finite elements that, upon assembly, may be considered as an individual element for computational purposes (see e.g. [8]). They offer a greater level of analysis efficiency by dividing larger structures into equivalent sets of smaller substructures called superelements. Instead of solving an entire FE model each time, superelements offer the advantage of incremental processing, one superelement at a time. This technique has already been efficiently applied in different fields, see e.g. [11, 13].

Usually, the samples obtained from the HSO tool share significant parts with each other. These parts can be treated as superelements that do not need to be re-evaluated for each sample. In terms of performance, there are two major issues affecting the efficiency of the superelements: the number of superelements defined and the size of the shared part of the microstructure. Figures 6 and 7 show the computational effort required by the FE solver (ANSYS) adopted in the second numerical example to evaluate the von Mises stresses of the reconstructed media (see Sect. 5.3). The computational costs of the analysis are shown as a function of the shared part, i.e. the superelement size, and as a function of the number of samples, respectively. As shown in Fig. 7, the computational effort required by the full FE analysis increases linearly with the number of samples, while the superelement technique allows a considerable reduction of the computational effort. This approach is very efficient, particularly when the number of samples to be analysed is in the order of hundreds. The performance degrades rapidly, though, if the number of samples becomes excessive, i.e. due to problems with the memory allocation required by the FE analysis and with the increasing number of pixels not shared among the microstructure samples. In order to overcome this limitation, batches of
Fig. 6 Computational costs required by the FE solver (ANSYS) adopted in Sect. 5.3: the abscissa represents the fraction of elements (pixels) not shared on average among the samples with respect to the total number of elements (pixels)
samples can be analysed efficiently adopting the superelement strategy (e.g. analyzing blocks of 100 samples at a time).
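The saving exploited by the superelement technique can be illustrated with a static-condensation sketch: the stiffness contribution of the region shared by all samples is condensed once onto its interface degrees of freedom and then reused for every sample. The following linear-algebra fragment (with hypothetical matrix names and sizes, not tied to the ANSYS models of Sect. 5.3, and assuming for simplicity that the sample-specific part has already been reduced to the same interface) is only a sketch of this idea:

```python
import numpy as np

rng = np.random.default_rng(4)

def spd(n):
    """Random symmetric positive definite matrix standing in for a stiffness block."""
    a = rng.normal(size=(n, n))
    return a @ a.T + n * np.eye(n)

# Partition the DOFs of the shared region into interior (i) and interface/boundary (b).
n_i, n_b = 200, 20
K_ii, K_bb = spd(n_i), spd(n_b)
K_ib = rng.normal(size=(n_i, n_b))

# Condense the shared region ONCE: Schur complement seen by the interface DOFs.
K_ii_inv_K_ib = np.linalg.solve(K_ii, K_ib)
K_super = K_bb - K_ib.T @ K_ii_inv_K_ib        # superelement stiffness, reused below

# For every reconstructed sample only the non-shared part changes:
for sample in range(3):
    K_free = spd(n_b)                          # stiffness of the sample-specific part
    f_b = rng.normal(size=n_b)                 # load acting on the interface DOFs
    u_b = np.linalg.solve(K_super + K_free, f_b)
    # Interior displacements of the shared region (no interior loads assumed):
    u_i = -K_ii_inv_K_ib @ u_b
    print("sample", sample, "max |u_b| =", np.abs(u_b).max())
```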
5 Numerical examples

The HSO tool (see Sect. 3) and the speed-up strategies (Sect. 4) are applied here to generate databases of microstructure samples based on limited microstructural information. In this section, the procedure is first verified by optimizing a mathematical function (Sect. 5.1). Then, two different examples are analysed: a bi-continuous medium (Sect. 5.2) and a fibre reinforced composite material (Sect. 5.3), respectively.

5.1 Verification: Rastrigin function

The HSO tool is here adopted to minimize the Rastrigin function shown in Fig. 8 [31]. It is a typical example of a non-linear multimodal function, defined as follows:

$$R(\mathbf{x}) = 20 + x_1^2 + x_2^2 - 10(\cos 2\pi x_1 + \cos 2\pi x_2) \qquad (5)$$
The global minimum of the function is the vector $\mathbf{x}^* = (0, 0)$ with $R(\mathbf{x}^*) = 0$. Our aim here is to demonstrate that the proposed hybrid approach is more efficient and more accurate than using the different optimization tools separately. Moreover, the
Fig. 7 Computational costs required by the FE solver (ANSYS) adopted in Sect. 5.3 as a function of the number of FE analyses required
proposed approach is not only able to identify the global minimum of (5) but also to identify as many minima as possible that are close to the global one in terms of the value of $R(\mathbf{x})$. The results of the identification of the minimum of the Rastrigin function are shown in Fig. 9. The results show that, by combining GA and SA, it is possible to obtain more accurate results than by using only a single optimization tool. In fact, GA are a very powerful tool to generate good initial solutions for SA. Theoretically, the SA can start the optimization from any initial point and identify the global minimum of a function if the annealing schedule is extended sufficiently. However, in practical cases, when the objective function contains many local minima, the time required to ensure a significant probability of success will usually exceed the time required for a complete search of the solution space, as shown in [10]. The results of the identification of the best solutions adopting the proposed approach are shown in Figs. 10 and 11. During the first stage of the optimization, the GA and the Tabu-list are used to identify the promising solutions. Then, in the second stage of the optimization, the SA and the Tabu-list are used to refine the solutions identified during the first stage. The Tabu rule adopted here is a minimum distance between solutions equal to $\pi/4$, i.e. if the distance of a proposed solution, $i$, to another solution, $j$, already present in the population is $d \equiv \|\mathbf{x}_i - \mathbf{x}_j\| < \pi/4$, $\forall j \neq i$, then the solution is marked as “tabu” and hence it is immediately discarded.
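A minimal sketch of such a two-stage GA/SA optimization of (5) with a simple distance-based Tabu rule is given below; the population size, operators and cooling schedule are illustrative choices and do not reproduce the settings of Table 1:

```python
import numpy as np

rng = np.random.default_rng(5)

def rastrigin(x):
    return 20 + x[0]**2 + x[1]**2 - 10 * (np.cos(2*np.pi*x[0]) + np.cos(2*np.pi*x[1]))

def not_tabu(x, archive, d_min=np.pi/4):
    """Tabu rule: reject candidates closer than d_min to an archived solution."""
    return all(np.linalg.norm(x - y) >= d_min for y in archive)

# Stage I: a very small GA (truncation selection, arithmetic crossover, Gaussian mutation).
pop = rng.uniform(-5.12, 5.12, size=(20, 2))
for _ in range(10):                                    # a few generations only
    fit = np.array([rastrigin(x) for x in pop])
    parents = pop[np.argsort(fit)[:10]]                # keep the better half
    a = parents[rng.integers(10, size=20)]
    b = parents[rng.integers(10, size=20)]
    pop = 0.5 * (a + b) + rng.normal(scale=0.2, size=(20, 2))   # crossover + mutation

# Tabu rule: keep only promising solutions that are at least pi/4 apart from each other.
promising = []
for x in pop[np.argsort([rastrigin(x) for x in pop])]:
    if not_tabu(x, promising):
        promising.append(x)

# Stage II: refine each promising solution with a short simulated annealing run.
refined = []
for x in promising:
    T = 1.0
    for _ in range(200):
        y = x + rng.normal(scale=0.1, size=2)
        dE = rastrigin(y) - rastrigin(x)
        if dE <= 0 or rng.random() < np.exp(-dE / T):  # acceptance rule of Eq. (3)
            x = y
        T *= 0.98                                      # geometric cooling
    refined.append((rastrigin(x), x))

for E, x in sorted(refined):
    print(f"E = {E:.3f} at x = {x.round(3)}")
```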
Fig. 8 Rastrigin’s function
Fig. 9 Identification of the minimum of the Rastrigin function combining Genetic Algorithms (GA) and Simulated Annealing (SA)
5.2 Bi-continuous media

Bi-continuous systems represent media in which all the phases are connected. Typical examples are porous media characterized by a solid phase and a void (pore)
Table 1  Main parameters adopted in the identification of the best solutions of the Rastrigin function by means of the proposed optimization strategy

Parameter (GA)                        Value
Maximum number of generations         10
Number of individuals                 20
Elite count                           5
Crossover fraction                    0.8
Crossover function                    Single Point
Mutation rate                         0.2
Mutation function                     uniform
Selection function                    Roulette

Parameter (SA)                        Value
Maximum number of iterations          200
Initial temperature                   10

Parameter (Tabu-list)                 Value
Minimum distance between solutions    π/4
Fig. 10 Identification of the best solutions of the Rastrigin function adopting the proposed HSO tool
phase, respectively. Here, the target medium has been generated numerically. Spatially uncorrelated spheres, with a constant radius and centers determined by a Poisson process, are sampled until the desired volume fraction of the pore phase of the system, φ, is reached [26]. A realization of this bi-continuous material is shown in Fig. 12, and the main characteristics of the target system are reported in Table 2. The basic idea is to use only statistical information extracted from a number of 2D cross sections of the target material. This situation occurs in many practical cases where the statistical correlation functions extracted from the cross sections of a real
Fig. 11 (Color online) Fitness of the different solutions in the initial population, in the population at the end of the first stage, and in the final population (refined solutions), respectively. The continuous (blue) line with circles represents the fitness of the initial solutions, the dashed (red) line with diamond symbols represents the fitness of the solutions in the final population of the GA, and the dotted (green) line with triangle symbols represents the fitness of the refined solutions (after the SA optimization)
Fig. 12 Realization of a bi-continuous porous medium. The dimensions of the axes are expressed in pixels
material are the only information available. In addition, the correlation functions available to characterize the microstructure of a 2D image are generally not sufficient to uniquely reconstruct the 3D microstructure [26].
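For reference, the following sketch generates a digitized bi-continuous target of this kind by carving spheres with Poisson-distributed (uniform) centers out of a solid block until a prescribed pore volume fraction is reached; the grid size, sphere radius, target fraction and the name `generate_target` are illustrative assumptions, not the parameters of Table 2:

```python
import numpy as np

def generate_target(n=96, radius=6, phi_pores=0.25, seed=6):
    """Digitized bi-continuous medium: 0 = pore phase, 1 = solid phase."""
    rng = np.random.default_rng(seed)
    img = np.ones((n, n, n), dtype=int)                  # start from a solid block
    zi, yi, xi = np.indices(img.shape)
    while (img == 0).mean() < phi_pores:                 # carve spheres until phi reached
        c = rng.integers(0, n, size=3)                   # Poisson process: uniform centers
        mask = (xi - c[0])**2 + (yi - c[1])**2 + (zi - c[2])**2 <= radius**2
        img[mask] = 0
    return img

target = generate_target()
print("pore volume fraction:", (target == 0).mean())
```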
Table 2  Main parameters adopted for the reconstruction of the bi-continuous material and the speed-up performance obtained adopting the different strategies

Descriptors                        $S_2^{(0,0)}$, $S_2^{(1,1)}$, $S_2^{(0,1)}$, $L^{(0)}$, $L^{(1)}$
Domain discretization              96 × 96 × 96 voxels
Volume fraction                    φ0 = 0.75
Maximum GA generations             50
Maximum SA iterations              50
Number of reconstructed samples    100
Number of CPU/GPU                  32/0
Strategies (speed-up)              Master/workers (≈30)
Total CPU time                     ≈16 hours
Fig. 13 Comparison of the two-point correlation functions of the target domain and the reconstructed bi-continuous material for the voids $S_2^{(0,0)}$, the matrix $S_2^{(1,1)}$, and the voids-matrix $S_2^{(0,1)}$, respectively
The following microstructure descriptors are used here: the 2-point correlation functions, $S_2^{(0,0)}$, $S_2^{(1,1)}$, $S_2^{(0,1)}$ (see Sect. A.1); and the lineal path correlation functions, $L^{(0)}$ and $L^{(1)}$ (see Sect. A.2). In this numerical example, the assumption of spatial statistical homogeneity of the bi-continuous material has been adopted. Statistical homogeneity means that the microstructure of the specimen may be assumed to be sampled from the same probabilistic descriptors throughout the domain. Thus, the target correlation functions have been obtained by averaging the correlation functions computed at 4 different vertical cross sections of the target material. Figures 13 and 14 show the comparisons between the correlation functions of the target medium and the correlation functions of the reconstructed samples for the two-point correlation functions and the lineal path functions, respectively, computed at an arbitrary cross section. The reconstructed microstructures are associated with a relatively large value of the final energy level, in the order of $E \approx 10^{-1}$–$10^{-2}$. It is important to note that the two-point correlation functions show a significant variability over the different cross sections of the material. Nevertheless, the two-point correlation
Fig. 14 Comparison of the lineal path correlation functions of the target domain and the reconstructed bi-continuous material for the voids $L^{(0)}$ and the matrix $L^{(1)}$, respectively
Fig. 15 Comparison of the two-point correlation functions $S_2^{(0,0)}$ and $S_2^{(1,1)}$ averaged over all the cross sections of the target domain and the reconstructed bi-continuous material, respectively. The error bars represent ± one standard deviation of the two-point correlation functions
functions of the target material and the reconstructed material averaged over all the cross sections are in very good agreement, as shown in Fig. 15. Although the correlation functions are reconstructed with a satisfactory agreement, the cross sections of the reconstructed medium and of the target medium might be significantly different, as shown in Figs. 16 and 17, due to the incomplete information about the microstructure embedded in the correlation functions adopted in the reconstruction process. The macroscopic quantity of interest of the bi-continuous material is the effective diffusion coefficient. A Monte Carlo simulation model [9] has been adopted to simulate
Fig. 16 Cross sections of the target bi-continuous material
the diffusion of particles in the reconstructed samples. More specifically, the particles are injected into the medium from the xy surface at z = 0. Then, a random walk is generated via a Monte Carlo procedure for each particle. At each time step the new position of the particle is sampled from a uniform distribution that represents all possible movements. If the sampled direction is allowed (i.e. the particle remains in the same phase) the particle is moved to its new position, otherwise the particle is kept at its old position. The procedure is repeated for all the particles, and the number of time steps required by the particles to diffuse through the sample (i.e. the time necessary to reach the opposite surface) is collected by appropriate counters. Periodic boundaries are applied to the other surfaces of the samples. The distributions of the arrival times of 5000 particles, expressed in arbitrary time units, are shown in Fig. 18. Despite the low level of information adopted to reconstruct the microstructure, the distributions of the diffusion times of the particles in the target domain and in the reconstructed samples are very close, thanks to the connectivity information embedded in the lineal path function. The small differences between the distributions of the arrival times of the particles can be attributed to the presence of disconnected pore regions in the reconstructed media. In fact, although these regions contribute to the correlation functions, they do not contribute to the fluid flow. The computational time (wall-clock time) required to generate a database of 100 3D microstructure samples is approximately 16 hours using a Linux cluster composed of 32 processors (Intel Xeon E5345 and E5430). The job manager is used to distribute the jobs to the nodes of the cluster, obtaining a speed-up of ≈30
Fig. 17 Cross sections of the reconstructed bi-continuous material domain
(1 core was reserved to prepare and submit the jobs). GPU computing has not been used here since the Tabu-list is constructed directly from the FFT adopted to estimate the 2-point correlation functions. The quantity of interest is computed adopting a Monte Carlo simulation method that is easily parallelizable on the different nodes of the network.

5.3 Carbon reinforced fibres

Fibre reinforced composites are widely used in many structural applications, as they offer an attractive alternative to more conventional forms of construction due to e.g. their high strength-to-weight ratio and resistance to corrosion. The mechanical properties of the composites depend on the mechanical properties of the constituents, but also on the geometry of the microstructure (i.e. the distribution of the fibers in the matrix). The target material, discretized by 1000 × 1000 pixels, has been generated numerically. The fiber center positions are generated randomly and sequentially: the position of the i-th fiber center is accepted if its coordinates verify the condition that the distances between the i-th fiber center and all the previously accepted fiber centers, $j = 1, \ldots, i-1$, exceed a minimum value, $d_{ij} = \|\mathbf{r}_i - \mathbf{r}_j\| > 2r$. A total number of 14287 fibers are included in order to reach the volume fraction of φ = 0.3, as shown in Table 3. The reconstruction of a particulate composite, i.e. particles with an identical specific shape embedded in a matrix phase, is considerably simpler than the reconstruction of a generic medium. In fact, the problem is reduced to
Fig. 18 Distribution of the diffusion times (expressed in arbitrary unit) of 5000 particles through the bi-continuous medium
Table 3  Main parameters adopted in the reconstruction of the fiber reinforced composite and the speed-up performance obtained adopting the different strategies

Descriptors                  R (fibers, fibers)
Domain discretization        1000 × 1000 pixels—14287 fibers
Volume fraction              φ0 = 0.3
Number of samples            100
Maximum GA generations       50
Maximum SA iterations        50
Number of CPU/GPU            32/1
Strategies (speed-up)        Master/workers (≈3), FFT on the GPU (3.8), Domain decomposition (12.8), Superelements (33)
Total CPU time               ≈2 hours
the identification of the centers of the fibers only. The distribution of the fibers in the matrix is characterized by means of the radial distribution function (see Sect. A.3). The reconstruction process consists of the identification of the centers of the fibers in a 2D plane satisfying the prescribed target correlation function. Finally, the effective properties of the composite medium have been analysed using the FE software ANSYS Academic Research, v.11. Figure 19 shows the radial distribution functions of the target material and of the reconstructed samples. Figure 20 shows an
Fig. 19 Radial distribution functions of the target and reconstructed fiber reinforced material, respectively
Fig. 20 Distribution of the von Mises stress in the matrix and in the fibers
example of the distribution of the von Mises stress in the fibers and in the matrix of the reconstructed medium, respectively. The computational time required to generate a database of 100 samples of the fiber reinforced material is approximately 2 hours using a computer cluster with 32 CPU cores and 1 GPU. The GPU is used to calculate the FFT for the Tabu-list, achieving
a speed-up of 3.8. The radial distribution function is computed using the “divide and conquer” strategy, obtaining a speed-up of 12.8 on 10 CPUs (see Fig. 4). Since the evaluation of the objective function requires the availability of 10 cores, only 3 jobs are executed in parallel, distributing the jobs on the nodes of the computing grid and obtaining a speed-up of ≈3. Hence, the total speed-up achieved for the evaluation of the performance function is 38.4. Finally, the FE analysis of the reconstructed samples is performed adopting the superelement technique (see Fig. 6), with a speed-up of 33.

6 Conclusions

In this work, an efficient optimization method based on the synergy between various algorithms, i.e. Genetic Algorithms, Simulated Annealing as well as a Tabu-list, is presented. The proposed optimization procedure is both very efficient and flexible, and also generally applicable. For example, the proposed approach can be adopted in reliability analysis and reliability based optimization [3, 12, 18]. Furthermore, it has been shown that a number of different strategies can be adopted to speed up the simulation and to take advantage of multi-core processors, grid computing environments and the power of GPUs. It is important to mention that this tool is especially efficient when a set of different solutions (a database) must be identified to carry out computational experiments (i.e. “virtual testing”). As an application example, the design procedure for components made of heterogeneous materials is carried out by designing the component’s configuration according to the performance requirements, i.e. selecting optimal constituent compositions and morphology at the microscopic level of the component to satisfy the requirements and various constraints, supported by a related database of possible configurations [4].

The success of computational design depends on the availability of efficient, flexible and scalable numerical simulation methods. Here, the proposed optimization tool and the speed-up strategies have been applied to reconstruct microstructures of random media and components and thus to generate a database of configurations (i.e. samples) that share the same microscopic descriptors. The subsequent analysis (e.g. Finite Element analysis) has been performed to obtain the desired macroscopic quantity of interest, thus providing a link between the micro- and the macro-scale. In such types of applications the solution database can be adopted for training meta-models such as artificial neural networks, which have been shown to be efficient tools for modelling complex relationships between inputs and outputs [33, 34].

Acknowledgements  This project was partially supported by the Austrian Science Foundation (FWF) under the contract P19781-N13, which is gratefully acknowledged by the authors.
Appendix: Microstructure descriptors

A.1 n-point correlation function

The n-point probability functions were introduced in the context of determining the effective transport properties of random media by [2]. In Ref. [7], Debye and Bueche
showed that the two-point probability functions of an isotropic porous solid can also be obtained experimentally via scattering of radiation.

The autocorrelation function of a statistically inhomogeneous system is defined as:

$$S_2^{(j)}(\mathbf{r}_1, \mathbf{r}_2) = \langle I^{(j)}(\mathbf{r}_1)\, I^{(j)}(\mathbf{r}_2) \rangle \qquad (6)$$

where $\mathbf{r}_1$ and $\mathbf{r}_2$ are two arbitrary points in the system, the angular brackets denote an ensemble average, and the characteristic function $I^{(j)}(\mathbf{r})$ is defined by (1). The quantity $S_2^{(j)}(\mathbf{r}_1, \mathbf{r}_2)$ can be interpreted as the probability of finding two points at positions $\mathbf{r}_1$ and $\mathbf{r}_2$ both in phase $j$.

For statistically homogeneous isotropic media, $S_2^{(j)}(\mathbf{r}_1, \mathbf{r}_2)$ depends only on the distance $r = |\mathbf{r}_1 - \mathbf{r}_2|$ between the two points, and therefore can be expressed simply as $S_2^{(j)}(r)$. For all isotropic media without long-range order, $S_2^{(j)}(0) = \phi_j$ and $\lim_{r\to\infty} S_2^{(j)}(r) = \phi_j^2$, where $\phi_j$ is the volume fraction of phase $j$.

A series of significant bounds and necessary conditions on the autocorrelations ($\mathbf{r}_1 = \mathbf{r}_2$) were derived in e.g. [17]. These provide the necessary framework for the selection of physically realizable models for the two-phase composite materials encountered in practice. Among the significant implications of this assumption, perhaps the most interesting is the relation of the autocorrelation function with an important morphological characteristic of two-phase media called the specific surface $s$. In the simple one-dimensional case it expresses the expected number of points where a phase change occurs per unit length. It has been established that [17]:

$$\lim_{r \to 0^+} \frac{S^{(j)}(r) - S^{(j)}(0)}{r} = -\frac{s}{2} \qquad (7)$$

Since $s$ must be strictly positive and finite, significant restrictions arise on the possible functional form of the autocorrelation. It is worth mentioning that similar relations can be found for two- and three-dimensional isotropic media as well.

The discrete nature of the digitized representation of the material allows the distance $r$ to be measured in terms of pixels or voxels; it acquires integer values, with the end points of $r$ located at the pixel (voxel) centers, as shown in Fig. 21. Also, it can be shown that, when sampled along the direction of rows (or columns) of pixels, $S_2(r)$ is a linear function between adjacent pixels:

$$S_2(r) = (1 - f)\,S_2(i) + f\,S_2(i + 1) \qquad \text{for } i \le r < i + 1 \qquad (8)$$

where $i$ is an integer and $f = r \bmod 1$. Because of this linear property, the evaluation of $S_2(r)$ at integer values of $r$ is sufficient to characterize the structure, and determining it for non-integer values of $r$ is not necessary. Consequently, $S_2^{(j)}(r)$ can be evaluated simply by successively translating a line of $r (\equiv i)$ pixels in length at a distance of one pixel at a time, spanning the whole image, counting the number of successes of the two end points falling in phase $j$, and finally dividing the number of successes by the total number of trials, which is also the system size for a periodic medium. In 1D cases, this sampling is of course performed along the single row of pixels only.
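A direct pixel-translation estimate of $S_2^{(j)}(r)$ along the rows of a periodic digitized image, as just described, might be sketched as follows (the function name `s2_direct` and the parameters are introduced here for illustration):

```python
import numpy as np

def s2_direct(img, phase, r_max=20):
    """Estimate S2^(j)(r) along image rows by translating a segment of length r."""
    Ij = (img == phase).astype(float)
    s2 = np.empty(r_max + 1)
    for r in range(r_max + 1):
        # Periodic medium: np.roll shifts the image by r pixels along the rows.
        s2[r] = np.mean(Ij * np.roll(Ij, -r, axis=1))   # both end points in phase j
    return s2

rng = np.random.default_rng(7)
img = (rng.random((128, 128)) < 0.75).astype(int)

s2 = s2_direct(img, 1)
print("S2(0)  =", s2[0], "= phi_1 =", (img == 1).mean())
print("S2(20) ~ phi_1^2 =", (img == 1).mean() ** 2, "->", s2[20])
```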
Fig. 21 Example of a discretized medium and schematic representation of the two-point correlation function and the 2-point cluster correlation function. The points connected by the continuous lines fall in the same cluster and contribute to both correlation functions, while the points connected by the dashed lines contribute only to the 2-point correlation function
The evaluation of the n-point correlation function of a digitized image requires $O(N_p^n)$ operations, where $N_p$ represents the number of pixels of the system. To date, the most efficient version of the algorithm is based on the discrete fast Fourier transform (FFT), which requires $O(N_p \log N_p + N_p)$ operations. However, the evaluation of the correlation function after a “local” change of the microstructure (i.e. the new configuration differs from the previous one only in $n_d$ pixels) can be obtained in a computationally much less costly way if the correlation function “before” the change is known [27]. In practice, the locations of the $n_d$ pixels are identified by computing the difference between the old and the new image, and the contribution of these pixels to the correlation function, called the patch correlation function ($\delta S_n$), is computed. The correlation function is then calculated as the sum of the correlation function of the old image $S_n^{old}$ and the patch correlation function: $S_n^{new} = S_n^{old} + \delta S_n$. This method requires only $O(N_p n_d)$ operations.

A.2 Lineal-path

The lineal-path function $L^{(j)}(\mathbf{r}_1, \mathbf{r}_2)$ is defined as the probability of finding a line segment spanning from $\mathbf{r}_1$ to $\mathbf{r}_2$ that lies entirely in phase $j$ [32]. This function contains some connectedness information, at least along a lineal path, and hence contains certain long-range information about the system that is very important, especially for diffusion problems. In a statistically homogeneous isotropic medium the lineal-path function depends only on the distance $r$ between the two points and can be expressed simply as $L^{(j)}(r)$. Clearly, for all media having a volume fraction of $\phi_j$:

$$L^{(j)}(0) = S_2^{(j)}(0) = \phi_j \qquad (9)$$
The lineal-path function can distinguish between different phases of a medium, in the sense that the lineal-path function for a particular phase does not contain connectedness information of the complementary phase(s). Therefore, for an efficient reconstruction using lineal-path functions, it is important to identify which phase in the medium is the target phase to be reconstructed. To estimate $L^{(j)}(r)$ in a digitized system one can simply sample segments of length $r$, and then count the number of times $N_s^{(j)}(r)$ that such a segment falls completely
Fig. 22 Schematic representation of the lineal-path estimation method
within the prescribed phase $j$. The ratio between $N_s^{(j)}(r)$ and the total number of trials provides an estimate of the lineal-path function. However, a more efficient procedure can be adopted:

1. introduce an oriented line into the system (Line 1 in Fig. 22);
2. sample a random point (point A in Fig. 22) belonging to the phase of interest on this line;
3. move along the line from the sampled point until a different phase is encountered (point B in Fig. 22);
4. increment the counters associated with the distances $r < |A - B|$.

This procedure is repeated along the initial line and then repeated over many lines in a particular sample or realization. The counters are divided by the total number of random locations chosen.

A.3 Radial distribution function

The radial distribution function, $R^{(j)}(r)$, allows the dispersion of particles to be characterized in a very compact and efficient way. It is of fundamental importance in thermodynamics, since all the thermodynamic forces can be expressed in terms of the radial distribution function. Furthermore, the radial distribution function can be ascertained from scattering experiments, which makes it a likely candidate for the reconstruction of a real system. If a priori no information is available on the nature of the material, the assumption that the material is composed of a dispersion of particles (e.g. a colloidal system) might not be adequate to describe the real microstructure of the material under analysis.

The radial distribution function represents the probability associated with finding any particle at the radial distance $r$ from the center of another particle. To evaluate $R^{(j)}(r)$ in a digitized system, one has first to identify all the particles (or clusters) present in the medium and then to compute the center of each particle, as shown in Fig. 23. It is important to recall here that the identification of the particles in a digitized medium can require considerable computational effort. Once the centers of all the particles have been determined, the radial distribution function can be easily computed: $R^{(j)}(r) = N_{cp}(r)/N_{tp}$, where $N_{cp}(r)$ represents the number of pairs of particles
Fig. 23 Schematic representation of the radial distribution function: the lines represent the distance between different “particles”
of phase $j$ whose centers are at distance $r$ from each other, and $N_{tp}$ represents the total number of particles in the system. Although the computational cost of evaluating $R(r)$ can be quite high, in many cases this correlation function allows the complexity of the reconstruction procedure to be reduced, as shown in Sect. 5.3. For instance, if one is interested in reconstructing a dispersion of $N_{tp}$ particles adopting the radial distribution function, the corresponding optimization problem can be reduced to identifying the positions of the centers of these particles and not the state of all the pixels that compose the medium.
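Given the particle centers, a straightforward estimate of this pair-count form of $R^{(j)}(r)$ can be sketched as follows (the binning choices and the function name `radial_distribution` are illustrative assumptions):

```python
import numpy as np

def radial_distribution(centers, r_max=50.0, dr=1.0):
    """Count particle pairs per distance bin and divide by the number of particles."""
    n = len(centers)
    diffs = centers[:, None, :] - centers[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(n, k=1)                 # each pair counted once
    counts, edges = np.histogram(dist[iu], bins=np.arange(0.0, r_max + dr, dr))
    return counts / n, edges                      # N_cp(r) / N_tp per distance bin

rng = np.random.default_rng(8)
centers = rng.uniform(0.0, 1000.0, size=(500, 2))   # hypothetical fiber centers
g, edges = radial_distribution(centers)
print(g[:10])
```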
References

1. Arwade, S.R., Deodatis, G.: Variability response functions for effective material properties. Probab. Eng. Mech. 26, 174–181 (2010)
2. Brown, W.F.J.: Solid mixture permittivities. J. Chem. Phys. 23, 1514–1517 (1955)
3. Capiez-Lernout, E., Soize, C.: Robust design optimization in computational mechanics. J. Appl. Mech. 75(2), 021001 (2008). doi:10.1115/1.2775493
4. Chen, K.Z., Feng, X.A.: Computer-aided design method for the components made of heterogeneous materials. Comput. Aided Des. 35(5), 453–466 (2003). doi:10.1016/S0010-4485(02)00069-6
5. Cheng, J., Li, Q.: Application of the response surface methods to solve inverse reliability problems with implicit response functions. Comput. Mech. 43(4), 451–459 (2008). doi:10.1007/s00466-008-0320-0
6. Corson, P.: Correlation functions for predicting properties of heterogeneous materials. I. Experimental measurement of spatial correlation functions in multiphase solids. J. Appl. Phys. 45, 3159–3164 (1974)
7. Debye, P., Bueche, A.M.: Scattering by an inhomogeneous solid. J. Appl. Phys. 20, 518–525 (1949)
8. Egeland, O., Araldsen, H.: A general purpose finite element method program. Comput. Struct. 4, 41–68 (1974)
9. Giacobbo, F., Patelli, E.: Monte Carlo simulation of nonlinear reactive contaminant transport in unsaturated porous media. Ann. Nucl. Energy 34(1–2), 51–63 (2007)
10. Granville, V., Krivanek, M., Rasson, J.P.: Simulated annealing: a proof of convergence. IEEE Trans. Pattern Anal. Mach. Intell. 16(6), 652–656 (1994). doi:10.1109/34.295910
11. Herrmann, J., Maess, M., Gaul, L.: Substructuring including interface reduction for the efficient vibroacoustic simulation of fluid-filled piping systems. Mech. Syst. Signal Process. 24(1), 153–163 (2010). doi:10.1016/j.ymssp.2009.05.003
12. Jensen, H.A.: Structural optimization of linear dynamical systems under stochastic excitation: a moving reliability database approach. Comput. Methods Appl. Mech. Eng. 194(12–16), 1757–1778 (2005)
13. Jiang, J., Olson, M.D.: Applications of a super element model for non-linear analysis of stiffened box structures. Int. J. Numer. Methods Eng. 36(13), 2219–2243 (1993). doi:10.1002/nme.1620361306
14. Jiao, Y., Stillinger, F.H., Torquato, S.: A superior descriptor of random textures and its predictive capacity. Proc. Natl. Acad. Sci. 106(42), 17634–17639 (2009). doi:10.1073/pnas.0905919106, http://www.pnas.org/content/106/42/17634.full.pdf+html
15. Jiao, Y., Stillinger, F.H., Torquato, S.: Geometrical ambiguity of pair statistics. II. Heterogeneous media. Phys. Rev. E 82(1), 011106 (2010). doi:10.1103/PhysRevE.82.011106
16. Jiao, Y., Stillinger, F.H., Torquato, S.: Geometrical ambiguity of pair statistics: Point configurations. Phys. Rev. E 81(1), 011105 (2010). doi:10.1103/PhysRevE.81.011105
17. Koutsourelakis, P.: Simulation of random heterogeneous materials. In: 9th ASCE Joint Special Conference on Probabilistic Mechanics and Structural Reliability (CD-ROM), Albuquerque, New Mexico (2004)
18. Marseguerra, M., Zio, E., Martorell, S.: Basics of genetic algorithms optimization for RAMS applications. Reliab. Eng. Syst. Saf. 91(9), 977–991 (2006)
19. McDowell, D., Olson, G.: Concurrent design of hierarchical materials and structures. Sci. Model. Simul. (2008). doi:10.1007/s10820-008-9100-6
20. McDowell, D.L.: Special issue: Design of heterogeneous materials. J. Comput.-Aided Mater. Des. 11(2), 81–83 (2004). doi:10.1007/s10820-005-3159-0
21. Mishnaevsky, L.J., Schmauder, S.: Continuum mesomechanical finite element modeling in materials development: a state-of-the-art review. Appl. Mech. Rev. 54(1), 49–69 (2001)
22. NVIDIA: White paper – Accelerating MATLAB with CUDA using MEX files. Tech. Rep. WP-03495001_v01 (2007)
23. Osborne, R.B.: Speculative computation in Multilisp. Tech. Rep. CRL-90-1, Digital Equipment Corporation – Cambridge Research Lab (1990)
24. Panin, V., Egorushkin, V., Makarov, P.V., Grinyaev, Yu.V., et al.: Physical Mesomechanics of Heterogeneous Media and Computer-Aided Design of Materials. Cambridge International Science Publishing, Cambridge (1998)
25. Patelli, E., Schuëller, G.: On optimization techniques to reconstruct microstructures of random heterogeneous media. Comput. Mater. Sci. 45(2), 536–549 (2009). doi:10.1016/j.commatsci.2008.11.019
26. Roberts, A.P., Garboczi, E.J.: Elastic properties of a tungsten-silver composite by reconstruction and computation. J. Mech. Phys. Solids 47(10), 2029–2055 (1999)
27. Rozman, M.G., Utz, M.: Efficient reconstruction of multiphase morphologies from correlation functions. Phys. Rev. E 63(6), 066701 (2001). doi:10.1103/PhysRevE.63.066701
28. Rozman, M.G., Utz, M.: Uniqueness of reconstruction of multiphase morphologies from two-point correlation functions. Phys. Rev. Lett. 89(13), 135501 (2002). doi:10.1103/PhysRevLett.89.135501
29. Sanders, J., Kandrot, E.: CUDA by Example: An Introduction to General-Purpose GPU Programming, 1st edn. Addison-Wesley, Reading (2010)
30. Tan, L., Arwade, S.: Response classification of simple polycrystalline microstructures. Comput. Methods Appl. Mech. Eng. 197(13–16), 1397–1409 (2008)
31. Törn, A., Zilinskas, A.: Global Optimization. Lecture Notes in Computer Science, vol. 350. Springer, Berlin (1989)
32. Torquato, S.: Random Heterogeneous Materials: Microstructure and Macroscopic Properties. Interdisciplinary Applied Mathematics, 2nd edn. Springer, New York (2002)
33. Waszczyszyn, Z., Slonski, M.: Advances of Soft Computing in Engineering, pp. 237–316. Springer, Wien (2010)
34. Waszczyszyn, Z., Ziemianski, L.: Parameter Identification of Materials and Structures, pp. 265–340. Springer, Wien (2005)
35. Yeong, C.L.Y., Torquato, S.: Reconstructing random media. Phys. Rev. E 57(1), 495–506 (1998). doi:10.1103/PhysRevE.57.495
36. Yeong, C.L.Y., Torquato, S.: Reconstructing random media. II. Three-dimensional media from two-dimensional cuts. Phys. Rev. E 58(1), 224–233 (1998). doi:10.1103/PhysRevE.58.224