Compressible Multiphase Flow: Cavitation and Collapse Phenomena

Cloud Cavitation

Cavitation is encountered in liquid flows subject to large pressure variations. It refers to the rapid growth of vapor nuclei in low-pressure regions, followed by the intense collapse of the resulting vapor pockets in zones of high pressure. This process is known for its detrimental impact on engineering devices: its effects include material erosion, noise, and vibrations, which limit the lifetime of the affected device. Conversely, the power of cavitation can also be harnessed, for instance for biomedical applications, cleaning purposes, or vegetable oil extraction.

On the one hand, harnessing the destructive power of cavitation requires precise control of the behavior of only a few gas bubbles. On the other hand, the design of engineering devices requires accurate models able to predict the collective behavior of cavitating and potentially turbulent flows, which typically involve thousands of vapor bubbles and span a multitude of spatio-temporal scales. A detailed understanding of these complex processes is particularly challenging and cannot be achieved by experimental measurements alone; numerical investigations are also required.

We perform large-scale simulations of clouds of tens of thousands of gas bubbles undergoing collapse. Simulations with such large bubble counts are enabled by Cubism-MPCF, an open-source compressible multicomponent flow solver for high-performance computing, developed and maintained by members of the CSElab. The software is freely available for download.
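
The Cubism library underlying the solver partitions the grid into small cubic blocks that fit into cache and are dispatched as independent units of work. The sketch below illustrates this layout in C++; the names and the block size are hypothetical and do not reproduce the actual Cubism API.

#include <array>
#include <cstddef>
#include <vector>

// Illustrative block-structured grid layout (hypothetical, not the Cubism
// API): the domain is split into small cubic blocks, each an independent
// work item for threads or accelerators.
constexpr int BS = 32; // cells per block edge (an assumed value)

struct Block {
    std::array<double, BS * BS * BS> rho{}; // one conserved field, e.g. density

    // Linear index into the block-local cell array.
    double& at(int ix, int iy, int iz) {
        return rho[(static_cast<std::size_t>(iz) * BS + iy) * BS + ix];
    }
};

int main() {
    std::vector<Block> grid(2 * 2 * 2); // a 2 x 2 x 2 arrangement of blocks
    for (Block& b : grid)               // blocks are processed independently
        b.at(0, 0, 0) = 1000.0;         // e.g. liquid density in kg/m^3
}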

Figure 1 shows a close-up of the outer surface of a cloud comprising 50 000 bubbles. The bubbles collapse first at the outer surface of the cloud, a process accompanied by the formation of microjets due to bubble-bubble interactions. The combined effect of the individual collapsing bubbles produces an inward-propagating pressure wave and velocity front, which continuously gain strength as they move toward the core of the cloud. Eventually, significant pressure amplifications are observed in the core region.
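
For scale, the classical Rayleigh estimate gives the collapse time of a single spherical bubble of initial radius $R_0$ in a liquid of density $\rho_\ell$ under a driving pressure difference $\Delta p$:

\tau_c = 0.915\, R_0 \sqrt{\rho_\ell / \Delta p}.

The collective dynamics described above modulate this single-bubble time scale: bubbles closer to the core experience an amplified driving pressure and therefore collapse more violently.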

Shock-Induced Bubble Collapse

Further simulations consider the shock-induced collapse of arrays of bubbles. Figure 2 depicts a rendering of the pressure field and the air/liquid interface for a linear array of bubbles subject to an incoming shock from the left-hand side. Dense orange regions correspond to very high pressures, while thin green regions correspond to moderately high pressures.

People: Fabian Wermelinger, Petr Karnakov, Panos Hadjidoukas
Funding: ETH Zürich

Publications

2019

  • U. Rasthofer, F. Wermelinger, P. Karnakov, J. Šukys, and P. Koumoutsakos, “Computational study of the collapse of a cloud with 12500 gas bubbles in a liquid,” Phys. Rev. Fluids, vol. 4, p. 063602, 2019.

BibTeX

@article{rasthofer2019a,
author = {Rasthofer, U. and Wermelinger, F. and Karnakov, P. and {\v{S}}ukys, J. and Koumoutsakos, P.},
doi = {10.1103/PhysRevFluids.4.063602},
issue = {6},
journal = {{Phys. Rev. Fluids}},
month = {Jun},
numpages = {30},
pages = {063602},
publisher = {American Physical Society},
title = {Computational study of the collapse of a cloud with 12500 gas bubbles in a liquid},
url = {https://cse-lab.seas.harvard.edu/files/cse-lab/files/rasthofer2019a.pdf},
volume = {4},
year = {2019}
}

Abstract

We investigate the collapse of a cloud composed of 12500 gas bubbles in a liquid through large-scale simulations. The gas bubbles are discretized by a diffuse interface method, and a finite volume scheme is used to solve the Euler equations on a structured Cartesian grid. We investigate the propagation of the collapse wave front through the cloud and provide comparisons to existing models such as Mørch’s ordinary differential equations and a homogeneous mixture approach. We analyze the flow field to examine the evolution of individual gas bubbles and in particular their associated microjets. We find that the velocity magnitude of the microjets depends on the local strength of the collapse wave and hence on the radial position of the bubbles in the cloud. At the same time, the direction of the microjets is influenced by the distribution of the bubbles in their vicinity. We envision that the present state-of-the-art, large-scale simulations will serve the further development of low-order models for bubble collapse.
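
For orientation, the Euler equations referred to in the abstract read, in conservation form,

\partial_t \rho + \nabla \cdot (\rho \mathbf{u}) = 0,
\partial_t (\rho \mathbf{u}) + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u} + p \mathbf{I}) = 0,
\partial_t E + \nabla \cdot \left[ (E + p)\, \mathbf{u} \right] = 0,

with total energy $E = \rho e + \tfrac{1}{2} \rho |\mathbf{u}|^2$. A diffuse interface method closes this system with additional advection equations for the material composition; a stiffened-gas equation of state such as $p = (\gamma - 1)\rho e - \gamma \pi_\infty$ is a common choice for the liquid phase. The precise closure used in the paper should be taken from the reference itself; the above is only a sketch.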

2018

  • F. Wermelinger, U. Rasthofer, P. E. Hadjidoukas, and P. Koumoutsakos, “Petascale simulations of compressible flows with interfaces,” J. Comput. Sci., vol. 26, pp. 217–225, 2018.

BibTeX

@article{wermelinger2018a,
author = {F. Wermelinger and U. Rasthofer and P.E. Hadjidoukas and P. Koumoutsakos},
doi = {10.1016/j.jocs.2018.01.008},
journal = {{J. Comput. Sci.}},
month = {May},
pages = {217--225},
publisher = {Elsevier {BV}},
title = {Petascale simulations of compressible flows with interfaces},
url = {https://cse-lab.seas.harvard.edu/files/cse-lab/files/wermelinger2018a.pdf},
volume = {26},
year = {2018}
}

Abstract

We demonstrate high-throughput software for the efficient simulation of compressible multicomponent flow on high-performance computing platforms. The discrete problem is represented on structured three-dimensional grids with non-uniform resolution. Discontinuous flow features are captured using a diffuse interface method. A distinguishing characteristic of the method is the proper treatment of the interface zone as a mixing region of liquid and gas. The governing equations are discretized by a Godunov-type finite volume method with explicit time stepping using a low-storage Runge-Kutta scheme. The presented flow solver Cubism-MPCF is based on our Cubism library, which provides a highly optimized framework for the efficient treatment of stencil-based problems on multicore architectures. The framework is general and not limited to applications in fluid dynamics. We validate our solver against classical benchmark examples. Furthermore, we examine a highly resolved shock-induced bubble collapse and a cloud of O(10^3) collapsing bubbles, which demonstrate the high potential of the proposed framework and solver.
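
The abstract names a low-storage Runge-Kutta scheme for time stepping. Below is a minimal C++ sketch of such a step, using Williamson's classic 3-stage, 3rd-order coefficient set; whether Cubism-MPCF uses exactly this variant is an assumption on our part.

#include <cstddef>
#include <vector>

// One low-storage ("2N") Runge-Kutta time step: only the solution u and a
// single persistent update register du need to survive between stages. For
// clarity a separate residual buffer is used here; a strict 2N variant
// accumulates the residual directly into du.
template <typename Rhs>
void lsrk3_step(std::vector<double>& u,  // conserved variables
                std::vector<double>& du, // persistent update register
                double dt, Rhs rhs)
{
    // Williamson (1980) 3-stage, 3rd-order coefficients.
    const double A[3] = {0.0, -5.0 / 9.0, -153.0 / 128.0};
    const double B[3] = {1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0};

    std::vector<double> r(u.size());
    for (int s = 0; s < 3; ++s) {
        rhs(u, r); // r <- right-hand side (e.g. negative flux divergence)
        for (std::size_t i = 0; i < u.size(); ++i) {
            du[i] = A[s] * du[i] + dt * r[i]; // recycle the update register
            u[i] += B[s] * du[i];             // advance the solution in place
        }
    }
}

Since A[0] = 0, the content of du on entry is irrelevant, and the scheme needs no extra stage storage regardless of the grid size.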

2016

  • P. E. Hadjidoukas, D. Rossinelli, F. Wermelinger, J. Sukys, U. Rasthofer, C. Conti, B. Hejazialhosseini, and P. Koumoutsakos, “High throughput simulations of two-phase flows on Blue Gene/Q,” in Parallel Computing: On the Road to Exascale – ParCo 2015, 2016, pp. 767–776.

BibTeX

@inproceedings{hadjidoukas2015c,
author = {Panagiotis E. Hadjidoukas and Diego Rossinelli and Fabian Wermelinger and Jonas Sukys and Ursula Rasthofer and Christian Conti and Babak Hejazialhosseini and Petros Koumoutsakos},
booktitle = {Parallel Computing: On the Road to Exascale – {ParCo} 2015},
doi = {10.3233/978-1-61499-621-7-767},
note = {Proceedings of the International Conference on Parallel Computing – {ParCo} 2015},
pages = {767--776},
publisher = {{IOS} Press},
series = {Advances in Parallel Computing},
title = {High throughput simulations of two-phase flows on {Blue Gene/Q}},
url = {https://cse-lab.seas.harvard.edu/files/cse-lab/files/hadjidoukas2015c.pdf},
volume = {27},
year = {2016}
}

Abstract

CUBISM-MPCF is a high throughput software for two-phase flow simulations that has demonstrated unprecedented performance in terms of floating point operations, memory traffic and storage. The software has been optimized to take advantage of the features of the IBM Blue Gene/Q (BGQ) platform to simulate cavitation collapse dynamics using up to 13 trillion computational elements. The performance of the software has been shown to reach an unprecedented 14.4 PFLOP/s on 1.6 million cores corresponding to 72% of the peak on the 20 PFLOP/s Sequoia supercomputer. It is important to note that, to the best of our knowledge, no flow simulations have ever been reported exceeding 1 trillion elements and reaching more than 1 PFLOP/s or more than 15% of peak. In this work, we first extend CUBISM-MPCF with a more accurate numerical flux and then summarize and evaluate the most important software optimization techniques that allowed us to reach 72% of the theoretical peak performance on BGQ systems. Finally, we show recent simulation results from cloud cavitation comprising 50000 vapor bubbles.
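
The quoted peak fraction is consistent with the stated figures:

\frac{14.4\ \text{PFLOP/s}}{20\ \text{PFLOP/s}} = 0.72,

i.e. 72% of Sequoia's nominal peak performance.
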
  • F. Wermelinger, B. Hejazialhosseini, P. Hadjidoukas, D. Rossinelli, and P. Koumoutsakos, “An efficient compressible multicomponent flow solver for heterogeneous CPU/GPU architectures,” in Proceedings of the Platform for Advanced Scientific Computing – PASC ’16, 2016.

BibTeX

@inproceedings{wermelinger2016a,
author = {Fabian Wermelinger and Babak Hejazialhosseini and Panagiotis Hadjidoukas and Diego Rossinelli and Petros Koumoutsakos},
booktitle = {Proceedings of the Platform for Advanced Scientific Computing - {PASC} {\textquotesingle}16},
doi = {10.1145/2929908.2929914},
publisher = {{ACM} Press},
title = {An Efficient Compressible Multicomponent Flow Solver for Heterogeneous {CPU}/{GPU} Architectures},
url = {https://cse-lab.seas.harvard.edu/files/cse-lab/files/wermelinger2016a.pdf},
year = {2016}
}

Abstract

We present a solver for three-dimensional compressible multicomponent flow based on the compressible Euler equations. The solver is based on a finite volume scheme for structured grids and advances the solution using an explicit Runge-Kutta time stepper. The numerical scheme requires the computation of the flux divergence based on an approximate Riemann problem. The computation of the divergence quantity is the most expensive task in the algorithm. Our implementation takes advantage of the compute capabilities of heterogeneous CPU/GPU architectures. The computational problem is organized in subdomains small enough to be placed into the GPU memory. The compute intensive stencil scheme is offloaded to the GPU accelerator while advancing the solution in time on the CPU. Our method to implement the stencil scheme on the GPU is not limited to applications in fluid dynamics. The performance of our solver was assessed on Piz Daint, an XC30 supercomputer at CSCS. The GPU code is memory-bound and achieves a per-node performance of 462 Gflop/s, outperforming by 3.2× the multicore-based Gordon Bell winning Cubism-MPCF solver [16] for the offloaded computation on the same platform. The focus of this work is on the per-node performance of the heterogeneous solver. In addition, we examine the performance of the solver across 4096 compute nodes. We present simulations for the shock-induced collapse of an aligned row of air bubbles submerged in water using 4 billion cells. Results show a final pressure amplification that is 100× stronger than the strength of the initial shock.
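
Since the GPU code is reported as memory-bound, the roofline model bounds its attainable performance by $P \le I \cdot B$, with arithmetic intensity $I$ and memory bandwidth $B$. Assuming the XC30 nodes carried NVIDIA Tesla K20X GPUs with a nominal bandwidth of about 250 GB/s (an assumption on our part, not stated in the abstract), the reported per-node throughput corresponds to an intensity of roughly

I \approx \frac{462\ \text{Gflop/s}}{250\ \text{GB/s}} \approx 1.8\ \text{flop/byte}.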

2015

  • P. E. Hadjidoukas, D. Rossinelli, B. Hejazialhosseini, and P. Koumoutsakos, “From 11 to 14.4 PFLOPs: performance optimization for finite volume flow solver,” in Proceedings of the 3rd International Conference on Exascale Applications and Software – EASC ’15, 2015, pp. 7–12.

BibTeX

@inproceedings{hadjidoukas2015d,
author = {Panagiotis E. Hadjidoukas and Diego Rossinelli and Babak Hejazialhosseini and Petros Koumoutsakos},
booktitle = {Proceedings of the 3rd International Conference on Exascale Applications and Software – EASC '15},
doi = {10.5555/2820083.2820085},
note = {Proceedings of the 3rd International Conference on Exascale Applications and Software – {EASC} '15},
pages = {7--12},
publisher = {University of Edinburgh},
title = {From 11 to 14.4 {PFLOPs}: Performance Optimization for Finite Volume Flow Solver},
url = {https://cse-lab.seas.harvard.edu/files/cse-lab/files/hadjidoukas2015d.pdf},
year = {2015}
}

Abstract

CUBISM-MPCF is a compressible, two-phase flow solver that has performed unprecedented flow simulations, employing 13 trillion computational elements to study cavitation collapse of a cloud composed of 15’000 bubbles. The code was deployed on 1.6 million cores of the Sequoia IBM BlueGene/Q supercomputer, initially reaching 11 PFLOPs, corresponding to 55% of its nominal peak performance. This paper reports, for the first time, the techniques used to extend the performance of the code by 30%, reaching 14.4 PFLOPs on BlueGene/Q systems. The achieved 72% of the peak performance constitutes to date the best performance for flow simulations on supercomputer architectures. Our techniques take advantage of the underlying hardware capabilities and were applied at all levels of the software abstraction, aiming at full exploitation of the inherent instruction/data-, thread- and cluster-level parallelism. The software advances by two to three orders of magnitude the state-of-the-art both in terms of time to solution and geometric complexity of the flow. We believe that the present methods are relevant to all grid-based solvers and as such they may serve to enhance the capabilities across different areas of simulation based science.

2013

  • D. Rossinelli, P. Koumoutsakos, B. Hejazialhosseini, P. Hadjidoukas, C. Bekas, A. Curioni, A. Bertsch, S. Futral, S. J. Schmidt, and N. A. Adams, “11 PFLOP/s simulations of cloud cavitation collapse,” in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis – SC ’13, 2013.

BibTeX

@inproceedings{rossinelli2013a,
author = {Diego Rossinelli and Petros Koumoutsakos and Babak Hejazialhosseini and Panagiotis Hadjidoukas and Costas Bekas and Alessandro Curioni and Adam Bertsch and Scott Futral and Steffen J. Schmidt and Nikolaus A. Adams},
booktitle = {Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis on - {SC} {\textquotesingle}13},
doi = {10.1145/2503210.2504565},
publisher = {{ACM} Press},
title = {11 {PFLOP}/s simulations of cloud cavitation collapse},
url = {https://cse-lab.seas.harvard.edu/files/cse-lab/files/rossinelli2013a.pdf},
year = {2013}
}

Abstract

We present unprecedented, high throughput simulations of cloud cavitation collapse on 1.6 million cores of Sequoia reaching 55% of its nominal peak performance, corresponding to 11 PFLOP/s. The destructive power of cavitation reduces the lifetime of energy critical systems such as internal combustion engines and hydraulic turbines, yet it has been harnessed for water purification and kidney lithotripsy. The present two-phase flow simulations enable the quantitative prediction of cavitation using 13 trillion grid points to resolve the collapse of 15’000 bubbles. We advance by one order of magnitude the current state-of-the-art in terms of time to solution, and by two orders the geometrical complexity of the flow. The software successfully addresses the challenges that hinder the effective solution of complex flows on contemporary supercomputers, such as limited memory bandwidth, I/O bandwidth and storage capacity. The present work redefines the frontier of high performance computing for fluid dynamics simulations.