A Multiblock Multigrid AMR Algorithm in 2D

M. Borrel and J.C. Jouhaud

O.N.E.R.A., B.P. 72, 92322 Châtillon cedex, FRANCE. E-mail: borrel@onera.fr

Key Words: Error Indicator, Hierarchical Adaptive Mesh Refinement, Local Multigrid, Multiblock

1. Introduction

Multiblock structured grid techniques have become a standard for solving practical aerodynamic problems because of the ease with which meshes can be generated around complex geometries. The mesh strategy proposed here consists of representing the entire flow domain by a coarse multiblock grid and refining the grid, independently within each block, using the hierarchical patched adaptive mesh refinement (AMR) algorithm. The local adaptive mesh refinement method, originally developed by Berger and Oliger [1] and later extended by Quirk [2] and many other authors, uses a sequence of nested levels of refined structured grids (patches), on which any block flow solver can be applied. For steady-state problems, the hierarchical solution computation makes it possible to take advantage of multigrid convergence acceleration and to save memory, as pointed out by the authors [3] and by Dubek and Colella [4].

2. Numerical Procedure

From a practical point of view, the numerical processes described below correspond to embedded loops. For computer code efficiency, we have chosen to place the Runge-Kutta loop in the innermost position; this ordering has proved to be stable.
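
The loop nesting can be sketched as follows. This is a minimal outline in Python, where the routine names and arguments (adapt, relax, exchange) are our own illustrative assumptions, not the authors' code:

    def steady_state_solve(blocks, levels, n_cycles, adapt, relax,
                           exchange, n_freeze=50, n_rk=3):
        """Illustrative nesting of the embedded loops described above."""
        for cycle in range(n_cycles):            # outer convergence loop
            if cycle < n_freeze:                 # AMR frozen after n_freeze
                adapt(blocks)                    # regrid the AMR hierarchy
            for level in levels:                 # local multigrid traversal
                for block in blocks:             # multiblock loop
                    for stage in range(n_rk):    # Runge-Kutta: innermost loop
                        relax(block, level, stage)
            exchange(blocks)                     # inter-block communication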

2.1. Flow solver

The classical MUSCL finite volume approach is used, together with either the Van Leer flux vector splitting formulas or the Roe flux formulas. Since we are interested in steady-state solutions, a linear implicit phase is added at each time step and solved with an approximate ADI factorization. A local time step is also used. A 3-step Runge-Kutta scheme is applied to advance the solution in time.
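
As an illustration of the explicit part of this update, here is a minimal sketch of a 3-stage Runge-Kutta pseudo-time step with local time stepping; the stage coefficients and function signatures are our own assumptions, not taken from the paper:

    import numpy as np

    def rk3_advance(u, residual, local_dt, alphas=(0.6, 0.6, 1.0)):
        """One pseudo-time step for the (ncells, nvars) state array u.

        residual : function u -> (ncells, nvars) MUSCL residual
        local_dt : (ncells,) per-cell time step (local time stepping)
        alphas   : illustrative 3-stage coefficients (an assumption)
        """
        u0 = u.copy()
        for a in alphas:
            u = u0 - a * local_dt[:, None] * residual(u)
        return u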

2.2. Adaptive mesh re nement

Our starting point is the body-fitted curvilinear and non-recursive implementation of the AMR algorithm for steady-state computations presented in [3]. In that previous work, the adaptation was based on an error indicator built from the density gradient, supplemented by the stress tensor for viscous cases. In the present work, we have evaluated another error indicator, built from the entropy production within each cell, which provides a more physical sensor than the density gradient.
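
For a perfect gas, a common way to build such a sensor is to measure the deviation of the specific entropy from its free-stream value, since entropy is produced only across shocks and in under-resolved regions. The sketch below is our own reading under that assumption; the paper does not give the exact formula:

    import numpy as np

    def entropy_indicator(rho, p, rho_inf, p_inf, gamma=1.4):
        """Per-cell indicator based on spurious entropy production:
        s = ln(p / rho**gamma) stays at its free-stream value in a
        smooth inviscid flow, so deviations flag shocks and errors."""
        s = np.log(p / rho**gamma)
        s_inf = np.log(p_inf / rho_inf**gamma)
        return np.abs(s - s_inf)

    def cells_to_refine(indicator, fraction=0.1):
        # Refine cells above a fraction of the peak indicator value
        # (the thresholding strategy is an illustrative assumption).
        return indicator > fraction * indicator.max()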

2.3. Local multigrid

In order to enforce, or even to obtain, convergence we use a nonlinear multigrid method derived from the Jameson and Yoon method [5]. The forcing function, added to the coarse-level residuals, is expressed in terms of local composite residuals [6]. At a fine-coarse boundary, a refluxing technique is applied, so that the conservation of the composite solution is ensured at convergence. It should be noticed that, contrary to standard multigrid strategies, the present process always proceeds towards enrichment, the coarsening being performed completely while moving to a coarse level. As a result, no particular requirement is placed on the initial grid, and the enrichment ratio can be chosen arbitrarily.
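
Schematically, in a nonlinear (FAS-type) multigrid method of this kind, the forcing function is the difference between the coarse residual of the restricted solution and the restriction of the fine, composite residual. A minimal sketch, with the restriction operators assumed rather than taken from the paper:

    def coarse_forcing(u_fine, residual_fine, residual_coarse,
                       restrict_sol, restrict_res):
        """FAS-type forcing term for a coarse level.

        The coarse problem solved is R_coarse(u) = tau, so that at
        convergence the coarse level reproduces the composite fine
        solution (signs and operators are our own conventions)."""
        u_coarse = restrict_sol(u_fine)
        tau = (residual_coarse(u_coarse)
               - restrict_res(residual_fine(u_fine)))
        return u_coarse, tau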

2.4. Multiblock

The success of the method relies on the communication between blocks, which must be handled carefully. In our implementation, an inter-block data structure, corresponding to a complete refinement, makes it possible to transfer quantities such as the solution vector, the gradients, the error indicator or the metrics between the blocks.
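
A minimal sketch of what such an inter-block record might hold is given below; the field names and the single-orientation transfer are our own illustrative assumptions, since the paper does not detail the data structure:

    from dataclasses import dataclass

    @dataclass
    class BlockInterface:
        """Face-to-face connectivity stored at complete refinement,
        so any cell-based quantity can be exchanged directly."""
        donor_block: int
        receiver_block: int
        donor_range: tuple      # (i0, i1, j0, j1) on the donor face
        receiver_range: tuple   # matching range on the receiver face

        def transfer(self, donor_field, receiver_field):
            # Copy a quantity (solution, gradients, error indicator,
            # metrics) across the interface; only the identity
            # orientation is sketched here.
            i0, i1, j0, j1 = self.donor_range
            r0, r1, s0, s1 = self.receiver_range
            receiver_field[r0:r1, s0:s1] = donor_field[i0:i1, j0:j1]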


3. Numerical Results

To test the effectiveness of the present method, we have chosen the benchmark transonic flow past a NACA0012 airfoil at Mach number 0.85 and 1 degree of angle of attack. The objective is twofold: first, to validate the implementation; second, to estimate the performance of the method in terms of robustness, accuracy and flexibility. In Fig. 1 we plot the Mach number contours, a comparison of the wall pressure coefficient distributions between a local and a global computation, and a comparison of convergence histories. The accuracy of the present method is clearly demonstrated, and the local multigrid performs as well as the global one in terms of work units. Nevertheless, it is difficult to converge while adapting, so the AMR structure has to be frozen after a specified iteration (here 20 or 50). The flexibility in problem solving is demonstrated in further simulations of an inviscid flow past a bi-NACA0012 configuration and of a viscous flow past a NACA0012 airfoil.

4. Conclusion

We have evaluated a new computational strategy for simulating steady-state transonic flows on multiblock structured grids. This strategy is based on the local mesh refinement (AMR) method reformulated as a local multigrid method. The main result is that the computational gains achievable with the multigrid method and with the AMR method are cumulative. In addition to the efficiency of the local multigrid, the multiblock technique gives great flexibility in problem solving.

References

[1] Berger M.J. and Oliger J., Adaptive mesh refinement for hyperbolic partial differential equations. J. Comp. Phys., 53, 484-512, 1984.
[2] Quirk J.J., An adaptive grid refinement algorithm for computational shock hydrodynamics. PhD thesis, Cranfield Institute of Technology, 1991.
[3] Jouhaud J.C. and Borrel M., A hierarchical adaptive mesh refinement method: application to 2D flows. 3rd ECCOMAS CFD Conf., September 1996.
[4] Dubek S.A. and Colella P., Steady-state solution-adaptive Euler computations on structured grids. AIAA 98-0543, January 1998.
[5] Jameson A. and Yoon S., LU implicit scheme with multiple grids for the Euler equations. AIAA 86-0105, 1986.
[6] Jouhaud J.C., Méthode d'adaptation de maillages structurés par enrichissement : application à la résolution des équations d'Euler et de Navier-Stokes. PhD thesis, Université de Bordeaux I, 1997.

Figure 1: Transonic flow over a NACA0012 airfoil (Mach = 0.85, α = 1°), 5-block computation. [Three panels: Mach number contours; wall Cp distribution, local (cl = 0.378, cd = 0.061) versus global (cl = 0.359, cd = 0.059); convergence history, residuals versus work units, local (112 s CPU) versus global (852 s CPU).]