The Potential and the Limitations of Pore Scale Computation

The Computational Mathematics Aspects of Porous Media and Fluid Flow, 22-25 May 2018, Leiden, the Netherlands

Ulrich Rüde
LSS Erlangen and CERFACS Toulouse
[email protected]

Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique www.cerfacs.fr

Lehrstuhl für Simulation Universität Erlangen-Nürnberg www10.informatik.uni-erlangen.de


Outline

Motivation
Direct Simulation of Complex Flows
1. Supercomputing: scalable algorithms, efficient software
2. Solid phase: rigid body dynamics
3. Fluid phase: lattice Boltzmann method
4. Gas phase: free surface tracking, volume of fluids
5. Electrostatics: finite volumes
6. Fast implicit solvers: multigrid
Multi-physics applications
Perspectives

Disclaimer: parallel multigrid for Stokes with 10^13 DoF is not covered in this lecture.


Building Block I: Current and Future High Performance Supercomputers

SuperMUC: 3 PFlops


Multi-PetaFlops Supercomputers

Sunway TaihuLight
SW26010 processor, 10,649,600 cores, 260 cores (1.45 GHz) per node, 32 GiB RAM per node
125 PFlops peak, power consumption 15.37 MW, TOP 500: #1

JUQUEEN
Blue Gene/Q architecture, 458,752 PowerPC A2 cores, 16 cores (1.6 GHz) per node, 16 GiB RAM per node, 5D torus interconnect
5.8 PFlops peak, TOP 500: #13

SuperMUC (phase 1)
Intel Xeon architecture, 147,456 cores, 16 cores (2.7 GHz) per node, 32 GiB RAM per node, pruned tree interconnect
3.2 PFlops peak, TOP 500: #27

What is the problem?


Building Block II: Granular media simulations

1 250 000 spherical particles, 256 processors, 300 300 time steps, runtime: 48 h (including data output); texture mapping, ray tracing.
Simulated with the pe physics engine.

Pöschel, T., & Schwager, T. (2005). Computational Granular Dynamics: Models and Algorithms. Springer Science & Business Media.
Schruff, T., Liang, R., Rüde, U., Schüttrumpf, H., & Frings, R. M. (2018). Generation of dense granular deposits for porosity analysis: assessment and application of large-scale non-smooth granular dynamics. Computational Particle Mechanics, 5(1), 59-70.


Discretization Underlying the Time-Stepping

Nonlinear complementarity: measure differential inclusions. The contact conditions are stated both for continuous forces $\lambda$ and for discrete impulses $\Lambda$; $\xi$ denotes the signed contact gap and $v_{to}$ the relative tangential (slip) velocity.

Non-penetration (Signorini condition):
$$0 \le \xi \;\perp\; \lambda_n \ge 0$$

Impact law (at closed contacts, $\xi = 0$):
$$0 \le \dot\xi^+ \;\perp\; \Lambda_n \ge 0, \qquad \text{and for } \dot\xi^+ = 0:\quad 0 \le \ddot\xi^+ \;\perp\; \lambda_n \ge 0$$

Coulomb friction, friction cone condition:
$$\|\lambda_{to}\|_2 \le \mu\,\lambda_n, \qquad \|\Lambda_{to}\|_2 \le \mu\,\Lambda_n$$

Frictional reaction opposes slip:
$$\|v_{to}\|_2\,\lambda_{to} = -\mu\,\lambda_n\,v_{to}, \qquad 0 \le \big(\mu\,\lambda_n - \|\lambda_{to}\|_2\big) \;\perp\; \|v_{to}\|_2 \ge 0$$
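One standard way to handle these conditions numerically is to project trial contact reactions onto the Coulomb friction cone in each solver iteration. The following is a minimal sketch of such a Euclidean cone projection; it is illustrative only and not the pe solver's implementation, and all names are mine.

```cpp
#include <array>
#include <cmath>
#include <cstdio>

// Euclidean projection of a trial contact impulse (ln, lt) onto the
// Coulomb friction cone { (ln, lt) : ||lt|| <= mu * ln, ln >= 0 }.
// ln: normal component, lt: tangential component (2D in the contact frame).
// Illustrative sketch only, not the pe implementation.
struct ContactImpulse { double ln; std::array<double, 2> lt; };

ContactImpulse projectFrictionCone(ContactImpulse x, double mu) {
    const double nt = std::hypot(x.lt[0], x.lt[1]);
    if (nt <= mu * x.ln) return x;                     // already inside the cone
    if (mu * nt <= -x.ln) return {0.0, {0.0, 0.0}};    // inside the polar cone: project to origin
    // Otherwise project onto the cone surface.
    const double ln = (x.ln + mu * nt) / (1.0 + mu * mu);
    const double s  = (nt > 0.0) ? mu * ln / nt : 0.0;
    return {ln, {s * x.lt[0], s * x.lt[1]}};
}

int main() {
    ContactImpulse trial{0.5, {2.0, 0.0}};             // violates ||lt|| <= mu*ln for mu = 0.3
    const ContactImpulse p = projectFrictionCone(trial, 0.3);
    std::printf("projected: ln = %.4f, lt = (%.4f, %.4f)\n", p.ln, p.lt[0], p.lt[1]);
    return 0;
}
```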

Moreau, J. J., & Panagiotopoulos, P. D. (1988). Nonsmooth Mechanics and Applications, vol. 302. Springer, Wien-New York.
Popa, C., Preclik, T., & UR (2014). Regularized solution of LCP problems with application to rigid body dynamics. Numerical Algorithms, 1-12.
Preclik, T., & UR (2015). Ultrascale simulations of non-smooth granular dynamics. Computational Particle Mechanics, 2(2), 173-196.
Preclik, T., Eibl, S., & UR (2017). The Maximum Dissipation Principle in Rigid-Body Dynamics with Purely Inelastic Impacts. arXiv preprint:1706.00221.


Shaker scenario with sharp edged hard objects

864 000 sharp-edged particles with a diameter between 0.25 mm and 2 mm.


Setup of a random spherical structure for porous media with the pe physics engine


Scaling Results

The solver is algorithmically not optimal for dense systems and hence cannot scale unconditionally, but it is highly efficient in many cases of practical importance.

Strong and weak scaling results for a constant number of iterations, performed on SuperMUC and JUQUEEN.

Largest ensembles computed: 2.8 × 10^10 non-spherical particles and 1.1 × 10^10 contacts.

Breakup of compute times for a granular gas: weak-scaling graphs on the Emmy cluster (RRZE Erlangen) and on the JUQUEEN supercomputer, and time-step profiles with 25^3 particles per process (figures omitted).

Building Block III: Scalable Flow Simulations

Succi, S. (2001). The Lattice Boltzmann Equation: For Fluid Dynamics and Beyond. Oxford University Press.
Feichtinger, C., Donath, S., Köstler, H., Götz, J., & Rüde, U. (2011). WaLBerla: HPC software design for computational engineering simulations. Journal of Computational Science, 2(2), 105-112.
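As a reminder of the kernel structure at the heart of these flow simulations, the sketch below shows a single-cell BGK collision for a D2Q9 lattice. It is a minimal illustration only; waLBerla's production kernels use 3D stencils, other collision operators, and heavy optimization, and none of the names below are part of the framework's API.

```cpp
#include <array>
#include <cstdio>

// Minimal D2Q9 BGK collision for a single lattice cell (lattice units).
// Illustrative sketch only; not the waLBerla API.
constexpr int Q = 9;
constexpr std::array<int, Q> cx = {0, 1, 0, -1, 0, 1, -1, -1, 1};
constexpr std::array<int, Q> cy = {0, 0, 1, 0, -1, 1, 1, -1, -1};
constexpr std::array<double, Q> w = {4.0/9, 1.0/9, 1.0/9, 1.0/9, 1.0/9,
                                     1.0/36, 1.0/36, 1.0/36, 1.0/36};

void collideBGK(std::array<double, Q>& f, double omega) {
    // Compute macroscopic density and velocity (zeroth and first moments).
    double rho = 0.0, ux = 0.0, uy = 0.0;
    for (int q = 0; q < Q; ++q) { rho += f[q]; ux += cx[q] * f[q]; uy += cy[q] * f[q]; }
    ux /= rho; uy /= rho;
    const double usq = ux * ux + uy * uy;
    // Relax each population towards its local equilibrium.
    for (int q = 0; q < Q; ++q) {
        const double cu  = cx[q] * ux + cy[q] * uy;
        const double feq = w[q] * rho * (1.0 + 3.0 * cu + 4.5 * cu * cu - 1.5 * usq);
        f[q] -= omega * (f[q] - feq);
    }
}

int main() {
    std::array<double, Q> f;
    for (int q = 0; q < Q; ++q) f[q] = w[q];   // resting fluid at rho = 1
    collideBGK(f, 1.8);                        // one collision with relaxation rate omega
    std::printf("f0 after collision: %.6f\n", f[0]);
    return 0;
}
```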


Domain Partitioning and Parallelization

1. static block-level refinement (→ forest of octrees)
2. static load balancing (see the Morton-order sketch below)
3. compact (KiB/MiB) block structure written to and read back from DISK via binary MPI IO, which optionally separates domain partitioning from the simulation
4. allocation of block data (→ grids)
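A common way to order the blocks of such a forest of octrees for load balancing is a Morton (Z-order) space-filling curve, as used in the AMR benchmarks later in this talk. The following is a minimal sketch of the 3D Morton index computation; names are illustrative and this is not waLBerla code.

```cpp
#include <cstdint>
#include <cstdio>

// Interleave the lowest 21 bits of x into every third bit of a 64-bit word.
// Helper for the 3D Morton (Z-order) index used to order octree blocks.
static std::uint64_t spreadBits(std::uint64_t x) {
    x &= 0x1FFFFF;                                   // keep 21 bits
    x = (x | (x << 32)) & 0x001F00000000FFFFULL;
    x = (x | (x << 16)) & 0x001F0000FF0000FFULL;
    x = (x | (x <<  8)) & 0x100F00F00F00F00FULL;
    x = (x | (x <<  4)) & 0x10C30C30C30C30C3ULL;
    x = (x | (x <<  2)) & 0x1249249249249249ULL;
    return x;
}

// Morton index of a block at integer coordinates (x, y, z) on its refinement level.
// Sorting blocks by this index yields the Z-order traversal used for
// space-filling-curve load balancing (illustrative sketch only).
std::uint64_t mortonIndex(std::uint32_t x, std::uint32_t y, std::uint32_t z) {
    return spreadBits(x) | (spreadBits(y) << 1) | (spreadBits(z) << 2);
}

int main() {
    std::printf("morton(1,0,0) = %llu\n", (unsigned long long)mortonIndex(1, 0, 0)); // 1
    std::printf("morton(0,1,0) = %llu\n", (unsigned long long)mortonIndex(0, 1, 0)); // 2
    std::printf("morton(1,1,1) = %llu\n", (unsigned long long)mortonIndex(1, 1, 1)); // 7
    return 0;
}
```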


Adaptive Mesh Refinement and Load Balancing

Isaac, T., Burstedde, C., Wilcox, L. C., & Ghattas, O. (2015). Recursive algorithms for distributed forests of octrees. SIAM Journal on Scientific Computing, 37(5), C497-C531.
Meyerhenke, H., Monien, B., & Sauerwald, T. (2009). A new diffusion-based multilevel algorithm for computing graph partitions. Journal of Parallel and Distributed Computing, 69(9), 750-761.
Schornbaum, F., & Rüde, U. (2016). Massively parallel algorithms for the lattice Boltzmann method on nonuniform grids. SIAM Journal on Scientific Computing, 38(2), C96-C126.
Schornbaum, F., & Rüde, U. (2017). Extreme-scale block-structured adaptive mesh refinement. arXiv preprint:1704.06829.
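The diffusion-based load balancing cited above (and used in the AMR benchmarks below) can be summarized by a simple local rule: in each iteration every process compares its load with its neighbors and shifts a fraction of the difference towards the less loaded side. The sketch below shows one such first-order diffusion iteration on a 1D chain of processes; it is a toy model of the idea, not the waLBerla or Meyerhenke et al. algorithm.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// One first-order diffusion iteration on a 1D chain of "processes":
// each process exchanges alpha * (load difference) with each neighbor.
// Toy illustration of diffusion-based load balancing.
void diffuseLoads(std::vector<double>& load, double alpha) {
    std::vector<double> flux(load.size(), 0.0);
    for (std::size_t p = 0; p + 1 < load.size(); ++p) {
        const double f = alpha * (load[p] - load[p + 1]);  // flux from p to p+1
        flux[p]     -= f;
        flux[p + 1] += f;
    }
    for (std::size_t p = 0; p < load.size(); ++p) load[p] += flux[p];
}

int main() {
    std::vector<double> load = {16, 4, 2, 2};              // e.g. number of blocks per process
    for (int it = 0; it < 50; ++it) diffuseLoads(load, 0.25);
    for (double l : load) std::printf("%.3f ", l);          // approaches the average 6.0
    std::printf("\n");
    return 0;
}
```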


Performance on Coronary Arteries Geometry

Weak scaling: 458,752 cores of JUQUEEN, over a trillion (10^12) fluid lattice cells; color-coded process assignment.

Godenschwager, C., Schornbaum, F., Bauer, M., Köstler, H., & UR (2013). A framework for hybrid parallel flow simulations with a trillion cells in complex geometries. In Proceedings of SC13: International Conference for High Performance Computing, Networking, Storage and Analysis (p. 35). ACM.

Strong scaling: 32,768 cores of SuperMUC, cell sizes of 0.1 mm, 2.1 million fluid cells, 6,000 time steps per second.


LBM AMR Performance

JUQUEEN, space-filling curve: Morton; hybrid MPI+OpenMP version with SMP, 1 process ⇔ 2 cores ⇔ 8 threads.
Runs with 14, 58, and 197 billion cells (31,062 / 127,232 / 429,408 cells per core) on 256 up to 458,752 cores; runtime in seconds (figure omitted).

From: Peta-Scale Simulations with the HPC Framework waLBerla: Massively Parallel AMR for the LBM, Florian Schornbaum, FAU Erlangen-Nürnberg, April 15, 2016.

LBM AMR Performance

JUQUEEN, diffusion load balancing.
Runs with 14, 58, and 197 billion cells (31,062 / 127,232 / 429,408 cells per core) on 256 up to 458,752 cores; runtime in seconds, almost independent of the number of processes (figure omitted).

From: Peta-Scale Simulations with the HPC Framework waLBerla: Massively Parallel AMR for the LBM, Florian Schornbaum, FAU Erlangen-Nürnberg, April 15, 2016.

Deriving macroscopic closure laws

Lattice Boltzmann Study of the Drag Correlation in Fluid-Particle Systems

Beetstra, R., Van der Hoef, M. A., & Kuipers, J. A. M. (2007). Drag force of intermediate Reynolds number flow past mono- and bidisperse arrays of spheres. AIChE Journal, 53(2), 489-501.
Tenneti, S., Garg, R., & Subramaniam, S. (2011). Drag law for monodisperse gas-solid systems using particle-resolved direct numerical simulation of flow past fixed assemblies of spheres. International Journal of Multiphase Flow, 37(9), 1072-1092.
Bogner, S., Mohanty, S., & UR (2015). Drag correlation for dilute and moderately dense fluid-particle systems using the lattice Boltzmann method. International Journal of Multiphase Flow, 68, 71-79.


Flow field and vorticity

- 2D slice visualized
- Domain size:
- Re = 300
- Volume fraction:


Macroscopic drag correlation

Final drag correlation:

Average absolute percentage error: 9.7%

Bogner, S., Mohanty, S., & UR (2015). Drag correlation for dilute and moderately dense fluid-particle systems using the lattice Boltzmann method. International Journal of Multiphase Flow, 68, 71-79.
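Drag correlations in this line of work are typically reported in dimensionless form, with the drag force normalized by the Stokes drag of an isolated sphere and parametrized by the particle Reynolds number and the solid volume fraction. The snippet below only illustrates that normalization convention; the actual correlation and its coefficients are given in Bogner et al. (2015), and all names and numbers here are illustrative.

```cpp
#include <cstdio>

// Dimensionless drag: measured drag force on a sphere normalized by the
// Stokes drag 3*pi*mu*d*u of an isolated sphere (a common convention in the
// drag-correlation literature). Illustrative only; see Bogner et al. (2015)
// for the actual correlation.
double normalizedDrag(double dragForce, double mu, double d, double u) {
    const double pi = 3.14159265358979323846;
    return dragForce / (3.0 * pi * mu * d * u);
}

// Particle Reynolds number based on sphere diameter and superficial velocity.
double particleReynolds(double rho, double u, double d, double mu) {
    return rho * u * d / mu;
}

int main() {
    const double rho = 1000.0, mu = 1.0e-3;   // water-like fluid [kg/m^3], [Pa s]
    const double d = 1.0e-3, u = 0.1;         // sphere diameter [m], flow speed [m/s]
    const double Fd = 2.5e-6;                 // hypothetical measured drag force [N]
    std::printf("Re = %.1f, F* = %.2f\n",
                particleReynolds(rho, u, d, mu), normalizedDrag(Fd, mu, d, u));
    return 0;
}
```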


Pore scale resolved flow in porous media

Direct numerical simulation of flow through sphere packings

Beetstra, R., Van der Hoef, M. A., & Kuipers, J. A. M. (2007). Drag force of intermediate Reynolds number flow past mono- and bidisperse arrays of spheres. AIChE Journal, 53(2), 489-501.
Tenneti, S., Garg, R., & Subramaniam, S. (2011). Drag law for monodisperse gas-solid systems using particle-resolved direct numerical simulation of flow past fixed assemblies of spheres. International Journal of Multiphase Flow, 37(9), 1072-1092.


Flow over porous structure

To make the comparison independent of the setup, all of the flow properties are non-dimensionalized.

Setup: schematic of the simulation domain and averaged velocity profile in the open and porous regions, i.e. porosity and streamwise velocity plotted over the height of the channel (figure omitted).
Pore geometry and streamlines, and planar average of the streamwise velocity (figure omitted).

The value of the interface velocity U_int can be obtained directly from the averaged velocity profile of the DNS, as can the velocity gradient at the interface. In the OTW model, the jump coefficient and the effective viscosity µ_eff are then determined from these quantities.

Fattahi, E., Waluga, C., Wohlmuth, B., & Rüde, U. (2016). Large Scale Lattice Boltzmann Simulation for the Coupling of Free and Porous Media Flow. In: Kozubek, T., Blaheta, R., Šístek, J., Rozložník, M., & Čermák, M. (eds) High Performance Computing in Science and Engineering. HPCSE 2015. Lecture Notes in Computer Science, vol 9611. Springer, Cham.
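Extracting U_int and the interface velocity gradient from the planar-averaged DNS profile is a simple post-processing step: sample the averaged streamwise velocity at the interface height and take a finite-difference derivative there. A minimal sketch under these assumptions follows; the profile values and the interface index are made up, and this is not the code used in the cited study.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Planar-averaged streamwise velocity profile u(z) on a uniform grid with
// spacing dz. The interface between the porous bed and the free flow is
// located at index iInt. Illustrative post-processing sketch only.
struct InterfaceData { double uInt; double dudzInt; };

InterfaceData interfaceQuantities(const std::vector<double>& u, double dz, std::size_t iInt) {
    InterfaceData d{};
    d.uInt    = u[iInt];                                       // interface velocity U_int
    d.dudzInt = (u[iInt + 1] - u[iInt - 1]) / (2.0 * dz);      // central-difference gradient
    return d;
}

int main() {
    // Made-up averaged profile: slow Darcy-like flow below, shear layer above.
    std::vector<double> u = {0.01, 0.01, 0.02, 0.05, 0.15, 0.35, 0.60, 0.85, 1.0};
    const double dz = 0.1;
    const std::size_t iInt = 3;                                // assumed interface index
    const InterfaceData d = interfaceQuantities(u, dz, iInt);
    std::printf("U_int = %.3f, du/dz at interface = %.3f\n", d.uInt, d.dudzInt);
    return 0;
}
```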


Turbulent flow over a permeable region

- Re ≈ 3000, direct numerical simulation
- volume rendering of velocity magnitude
- periodic in X and Y direction
- I10 cluster, 7 × 32 × 19 = 4256 core hours
- 1,300,000 time steps
- 8 times more time steps on the finest level


Flow through structure of thin crystals (filter)

Gil, A., Galache, J. P. G., Godenschwager, C., & Rüde, U. (2017). Optimum configuration for accurate simulations of chaotic porous media with Lattice Boltzmann Methods considering boundary conditions, lattice spacing and domain size. Computers & Mathematics with Applications, 73(12), 2515-2528.


Multi-Physics Simulations for Particulate Flows
Parallel coupling with waLBerla and pe

Ladd, A. J. (1994). Numerical simulations of particulate suspensions via a discretized Boltzmann equation. Part 1. Theoretical foundation. Journal of Fluid Mechanics, 271(1), 285-309.
Tenneti, S., & Subramaniam, S. (2014). Particle-resolved direct numerical simulation for gas-solid flow model development. Annual Review of Fluid Mechanics, 46, 199-230.
Bartuschat, D., Fischermeier, E., Gustavsson, K., & UR (2016). Two computational models for simulating the tumbling motion of elongated particles in fluids. Computers & Fluids, 127, 17-35.


Fluid-Structure Interaction

Direct simulation of particle-laden flows (4-way coupling)

Götz, J., Iglberger, K., Stürmer, M., & UR (2010). Direct numerical simulation of particulate flows on 294912 processor cores. In Proceedings of Supercomputing 2010, IEEE Computer Society.
Götz, J., Iglberger, K., Feichtinger, C., Donath, S., & UR (2010). Coupling multibody dynamics and computational fluid dynamics on 8192 processor cores. Parallel Computing, 36(2), 142-151.


Comparison between coupling methods

Example: single moving particle; evaluation of the oscillating oblique regime at Re = 263, Ga = 190, which is correctly represented by the momentum exchange method (less well by the Noble and Torczynski method).
Different coupling variants: first-order bounce back, second-order central linear interpolation.
Cross validation with the spectral method of Uhlmann & Dušek.

Visualization of the recirculation length in the particle wake: contours of the projected relative velocity u_r∥ for case B-CLI-48 (Ga = 178.46); the red line outlines the recirculation area with u_r∥ = 0, used to calculate the recirculation length L_r (figure omitted).

Uhlmann, M., & Dušek, J. (2014). The motion of a single heavy sphere in ambient fluid: A benchmark for interface-resolved particulate flow simulations with significant relative velocities. International Journal of Multiphase Flow, 59.
Noble, D. R., & Torczynski, J. R. (1998). A lattice-Boltzmann method for partially saturated computational cells. International Journal of Modern Physics C.
Rettinger, C., & Rüde, U. (2017). A comparative study of fluid-particle coupling methods for fully resolved lattice Boltzmann simulations. Computers & Fluids.
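The momentum exchange approach evaluated here computes the hydrodynamic force on a particle by summing, over all boundary links that cross the particle surface, the momentum carried by the populations that are bounced back at the moving wall. A minimal per-link sketch is given below; this is the generic moving-wall bounce-back momentum exchange formula, not necessarily the exact variant implemented in the cited study, and all names are illustrative.

```cpp
#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;

// Momentum (lattice units, per time step) transferred to a particle by one
// boundary link q: the post-collision population fq streams towards the wall
// and is bounced back with a correction for the local wall velocity uWall.
// Generic moving-wall bounce-back momentum exchange; illustrative sketch only.
Vec3 linkMomentumExchange(double fq,          // post-collision population on the link
                          const Vec3& cq,     // lattice direction pointing into the body
                          double wq,          // lattice weight of direction q
                          double rho,         // local fluid density
                          const Vec3& uWall)  // particle surface velocity at the link
{
    const double cs2   = 1.0 / 3.0;                                // lattice speed of sound squared
    const double cu    = cq[0]*uWall[0] + cq[1]*uWall[1] + cq[2]*uWall[2];
    const double coeff = 2.0 * fq - 2.0 * wq * rho * cu / cs2;     // f_q plus bounced-back f_qbar
    return { coeff * cq[0], coeff * cq[1], coeff * cq[2] };
}

int main() {
    // Example: one axis-aligned link of a D3Q19 lattice hitting a wall moving with u = (0.01, 0, 0).
    const Vec3 c  = {1.0, 0.0, 0.0};
    const Vec3 dp = linkMomentumExchange(0.055, c, 1.0 / 18.0, 1.0, {0.01, 0.0, 0.0});
    std::printf("momentum transfer of this link: (%.5f, %.5f, %.5f)\n", dp[0], dp[1], dp[2]);
    // The total hydrodynamic force is the sum of such contributions over all boundary links.
    return 0;
}
```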


Simulation of suspended particle transport

Preclik, T., Schruff, T., Frings, R., & Rüde, U. (2017). Fully resolved simulations of dune formation in riverbeds. In High Performance Computing: 32nd International Conference, ISC High Performance 2017, Frankfurt, Germany, June 18-22, 2017, Proceedings (Vol. 10266, p. 3). Springer.

0.864 × 10^9 LBM cells, 350 000 spherical particles


Sedimentation and fluidized beds

- 3 levels of mesh refinement
- 3800 spherical particles
- Galileo number 50
- 128 processes
- 1024-4000 blocks
- block size 32^3

Building Block IV (electrostatics)

Positively and negatively charged particles in a flow subjected to a transversal electric field

Direct numerical simulation of charged particles in flow

Masilamani, K., Ganguly, S., Feichtinger, C., & UR (2011). Hybrid lattice-Boltzmann and finite-difference simulation of electroosmotic flow in a microchannel. Fluid Dynamics Research, 43(2), 025501.
Bartuschat, D., Ritter, D., & UR (2012). Parallel multigrid for electrokinetic simulation in particle-fluid flows. In High Performance Computing and Simulation (HPCS), 2012 International Conference on (pp. 374-380). IEEE.
Bartuschat, D., & UR (2015). Parallel multiphysics simulations of charged particles in microfluidic flows. Journal of Computational Science, 8, 1-19.
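The electrostatic potential is obtained from a finite-volume discretization of a Poisson-type problem and solved with geometric multigrid V-cycles (the "fast implicit solvers" building block of the outline). The sketch below is a 1D model V-cycle for a Poisson problem, meant only to show the recursive structure (smooth, restrict, coarse-grid correction, prolongate, smooth); it is not the solver used in the cited work.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;

// Damped Gauss-Seidel smoother for -u'' = f on a uniform grid with Dirichlet BCs.
void smooth(Vec& u, const Vec& f, double h, int sweeps) {
    const double omega = 2.0 / 3.0;
    for (int s = 0; s < sweeps; ++s)
        for (std::size_t i = 1; i + 1 < u.size(); ++i)
            u[i] += omega * (0.5 * (u[i-1] + u[i+1] + h * h * f[i]) - u[i]);
}

Vec residual(const Vec& u, const Vec& f, double h) {
    Vec r(u.size(), 0.0);
    for (std::size_t i = 1; i + 1 < u.size(); ++i)
        r[i] = f[i] - (2.0 * u[i] - u[i-1] - u[i+1]) / (h * h);
    return r;
}

Vec restrictFW(const Vec& r) {                       // full weighting to the coarse grid
    Vec rc(r.size() / 2 + 1, 0.0);
    for (std::size_t I = 1; I + 1 < rc.size(); ++I)
        rc[I] = 0.25 * r[2*I - 1] + 0.5 * r[2*I] + 0.25 * r[2*I + 1];
    return rc;
}

void prolongateAdd(Vec& u, const Vec& ec) {          // linear interpolation, add correction
    for (std::size_t I = 0; I < ec.size(); ++I) u[2*I] += ec[I];
    for (std::size_t I = 0; I + 1 < ec.size(); ++I) u[2*I + 1] += 0.5 * (ec[I] + ec[I + 1]);
}

void vcycle(Vec& u, const Vec& f, double h) {
    if (u.size() <= 3) { smooth(u, f, h, 50); return; }   // coarsest grid: just relax
    smooth(u, f, h, 2);                                    // pre-smoothing
    Vec rc = restrictFW(residual(u, f, h));
    Vec ec(rc.size(), 0.0);
    vcycle(ec, rc, 2.0 * h);                               // coarse-grid correction
    prolongateAdd(u, ec);
    smooth(u, f, h, 2);                                    // post-smoothing
}

int main() {
    const int n = 129; const double h = 1.0 / (n - 1);
    Vec u(n, 0.0), f(n, 1.0);                              // -u'' = 1, u(0) = u(1) = 0
    for (int k = 0; k < 10; ++k) vcycle(u, f, h);
    std::printf("u(0.5) = %.6f (exact 0.125)\n", u[n / 2]);
    return 0;
}
```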


6-way coupling

Per time step, the coupled algorithm cycles through:
- LBM: apply velocity BCs, stream-collide step; yields the hydrodynamic force on each object
- Finite volumes / MG: set the charge distribution, treat BCs, V-cycle iterations; yields the electrostatic force on each object
- Lubrication correction: correction force based on object distances
- Newtonian mechanics: collision response and object motion, which feed back into the flow and potential solvers

A code-level sketch of this cycle follows the references below.

Bartuschat, D., & UR (2015). Parallel multiphysics simulations of charged particles in microfluidic flows. Journal of Computational Science, 8, 1-19.
Bartuschat, D., & UR (2018). Coupled multiphysics simulations of charged particle electrophoresis for massively parallel supercomputers. To appear in Journal of Computational Science, doi: 10.1016/j.jocs.2018.05.011.
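The sketch below spells out the order of the sweeps in one time step of the 6-way coupled algorithm described above. All function bodies are empty stubs and all names are illustrative; this mirrors the coupling diagram, not the actual waLBerla/pe interfaces.

```cpp
#include <cstdio>

// One time step of the 6-way coupled algorithm (illustrative stubs only).
struct CoupledTimeStep {
    // Lattice Boltzmann sweep for the fluid phase.
    void applyVelocityBCs()         { /* map object velocities onto boundary cells */ }
    void streamCollide()            { /* LBM stream-collide step */ }
    void computeHydrodynamicForce() { /* momentum exchange over boundary links */ }

    // Finite-volume multigrid sweep for the electrostatic potential.
    void setChargeDistribution()    { /* map particle charges onto the right-hand side */ }
    void applyPotentialBCs()        { /* boundary conditions for the potential */ }
    void multigridVCycles()         { /* iterate V-cycles until converged */ }
    void computeElectrostaticForce(){ /* evaluate the field at the particles */ }

    // Short-range corrections and rigid-body update.
    void lubricationCorrection()    { /* correction force from object distances */ }
    void rigidBodyUpdate()          { /* collision response and object motion (pe) */ }

    void run() {
        applyVelocityBCs();
        streamCollide();
        computeHydrodynamicForce();
        setChargeDistribution();
        applyPotentialBCs();
        multigridVCycles();
        computeElectrostaticForce();
        lubricationCorrection();
        rigidBodyUpdate();           // new object positions feed the next time step
    }
};

int main() {
    CoupledTimeStep step;
    for (int t = 0; t < 3; ++t) step.run();   // three (empty) coupled time steps
    std::printf("done\n");
    return 0;
}
```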


Separation experiment

- 240 time steps of the fully 6-way coupled simulation
- 400 s runtime on SuperMUC
- weak scaling up to 32,768 cores
- 7.1 million particles

For longer simulation times the particles attracted by the bottom wall are no longer evenly distributed, possibly causing load imbalances; however, these hardly affect the overall performance.

The required number of CG iterations on the coarsest grid scales with the diameter of the problem (Gmeiner et al. 2014), following the growth of the condition number (Shewchuk 1994): doubling the domain in all three dimensions approximately doubles the number of CG iterations. When doubling the problem size, the iteration count sometimes stays constant and sometimes has to be increased; this results from the different shares of Neumann and Dirichlet BCs on the boundary. Whenever the relative proportion of Neumann BCs increases, convergence deteriorates and more CG iterations are necessary.

The runtimes of all parts of the algorithm for different problem sizes, and their shares of the total runtime, are shown in Figure 13 (runtimes of the charged-particle algorithm sweeps for 240 time steps on an increasing number of nodes), based on the maximal (for MG, LBM, pe) or average (others) runtimes of the individual sweeps over all processes. The sweeps whose runtimes grow with problem size, mainly due to increasing MPI communication, are LBM, MG, pe, and PtCm; together, LBM and MG take up more than 75% of the total time. The sweeps that scale perfectly (HydrF, LubrC, Map, SetRHS, and ElectF) are summarized as 'Oth'; for the animation run, the relative share of the lubrication correction is below 0.1% and each other sweep of 'Oth' is well below 4% of the total runtime.

Overall, the coupled multiphysics algorithm achieves 83% parallel efficiency on 2048 nodes. Since most time is spent executing LBM and MG, Figure 14 (weak-scaling performance of the MG and LBM sweeps for 240 time steps) analyses them in more detail: on 2048 nodes, MG executes 121,083 MLUPS, corresponding to a parallel efficiency of 64%, and the LBM performs 95,372 MFLUPS with 91% parallel efficiency.
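The performance figures above are reported in (M)LUPS, million lattice (fluid) cell updates per second, and in weak-scaling parallel efficiency relative to a small baseline run. The helper below merely shows how such numbers are computed from measured runtimes; the values in main are placeholders, not the measurements from this experiment.

```cpp
#include <cstdio>

// Million lattice-cell updates per second for one run.
double mlups(double cellsPerProcess, double processes, double timeSteps, double runtimeSeconds) {
    return cellsPerProcess * processes * timeSteps / runtimeSeconds / 1.0e6;
}

// Weak-scaling parallel efficiency: per-node throughput relative to the baseline run.
double weakScalingEfficiency(double mlupsBase, double nodesBase, double mlupsLarge, double nodesLarge) {
    return (mlupsLarge / nodesLarge) / (mlupsBase / nodesBase);
}

int main() {
    // Placeholder numbers for illustration only.
    const double perfBase  = mlups(1.0e6, 1.0,    240.0, 4.0);   // single-node baseline
    const double perfLarge = mlups(1.0e6, 2048.0, 240.0, 4.8);   // large run, slightly slower per step
    std::printf("baseline: %.1f MLUPS, large run: %.1f MLUPS, efficiency: %.0f%%\n",
                perfBase, perfLarge, 100.0 * weakScalingEfficiency(perfBase, 1.0, perfLarge, 2048.0));
    return 0;
}
```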

Conclusions


Computational Science is done in Teams

Dr.-Ing. Dominik Bartuschat, Martin Bauer, M.Sc. (hons), Dr. Regina Degenhardt, Sebastian Eibl, M.Sc., Dipl.-Inf. Christian Godenschwager, Marco Heisig, M.Sc. (hons), PD Dr.-Ing. Harald Köstler, Nils Kohl, M.Sc., Sebastian Kuckuk, M.Sc., Christoph Rettinger, M.Sc. (hons), Jonas Schmitt, M.Sc., Dipl.-Inf. Florian Schornbaum, Dominik Schuster, M.Sc., Christoph Schwarzmeier, M.Sc. (hons), Dominik Thönnes, M.Sc.

Dr.-Ing. Benjamin Bergen, Dr.-Ing. Simon Bogner, Dr.-Ing. Stefan Donath, Dr.-Ing. Jan Eitzinger, Dr.-Ing. Uwe Fabricius, Dr. rer. nat. Ehsan Fattahi, Dr.-Ing. Christian Feichtinger, Dr.-Ing. Björn Gmeiner, Dr.-Ing. Jan Götz, Dr.-Ing. Tobias Gradl, Dr.-Ing. Klaus Iglberger, Dr.-Ing. Markus Kowarschik, Dr.-Ing. Christian Kuschel, Dr.-Ing. Marcus Mohr, Dr.-Ing. Kristina Pickl, Dr.-Ing. Tobias Preclik, Dr.-Ing. Thomas Pohl, Dr.-Ing. Daniel Ritter, Dr.-Ing. Markus Stürmer, Dr.-Ing. Nils Thürey


Thank you for your attention!

Bogner, S., & UR (2013). Simulation of floating bodies with the lattice Boltzmann method. Computers & Mathematics with Applications, 65(6), 901-913.
Anderl, D., Bogner, S., Rauh, C., UR, & Delgado, A. (2014). Free surface lattice Boltzmann with enhanced bubble model. Computers & Mathematics with Applications, 67(2), 331-339.
Bogner, S., Harting, J., & UR (2017). Simulation of liquid-gas-solid flow with a free surface lattice Boltzmann method. Submitted.
