
Scalability of a Large Eddy Simulation solver on Vega and Karolina clusters

Antonio Posa;Riccardo Broglia
2022

Abstract

The present project aims to demonstrate the scalability of an in-house, academic Large-Eddy Simulation solver, written in Fortran, on both the Vega and Karolina CPU partitions, in order to provide evidence of its portability and suitability for future production runs on those systems. In particular, we aim to identify the best option for our code, to be considered in our future proposals within the framework of regular EuroHPC calls. Finite differences are used to discretize the filtered Navier-Stokes equations. An immersed-boundary methodology enables the use of regular grids, such as Cartesian or cylindrical ones, making the decomposition of the overall flow problem into subdomains straightforward, efficient, and well suited to parallel computing. Communications across subdomains are handled via calls to MPI libraries. I/O operations are performed using calls to the parallel HDF5 libraries. The solver is not I/O intensive: I/O operations account for only about 5% of the overall computational cost of a typical simulation. The computational grid considered in this project is cylindrical and composed of about 5 billion points. Although the scalability of the present solver has already been tested on several architectures, including some within the PRACE infrastructure (Marconi KNL, Joliot-Curie KNL, Joliot-Curie SKL, Joliot-Curie Rome), the test case considered here was specifically designed to be representative of the computational effort of the problem we aim to tackle in the upcoming EuroHPC calls for regular projects. The results of these tests will be included in the proposal we will submit to EuroHPC in the near future, requesting the allocation of computational resources on the Vega and/or Karolina clusters.
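The abstract notes that regular (Cartesian or cylindrical) grids make decomposition into subdomains straightforward, with MPI handling communications across subdomain boundaries. As a purely illustrative sketch (not the authors' Fortran solver), the following Python snippet emulates serially the two core ideas: splitting a 1-D index range into near-equal contiguous slabs, one per rank, and filling one ghost cell on each side of every slab from its neighbour's boundary value, with periodic wrap-around as in the azimuthal direction of a cylindrical grid. All function names here are hypothetical.

```python
# Illustrative sketch (assumption: not the actual solver): serial emulation of
# slab decomposition and one-cell periodic halo exchange along one direction.

def decompose(n_points, n_ranks):
    """Split n_points into n_ranks contiguous slabs of near-equal size.

    Returns a list of (start, end) half-open index ranges, one per rank.
    """
    base, rem = divmod(n_points, n_ranks)
    sizes = [base + (1 if r < rem else 0) for r in range(n_ranks)]
    starts = [sum(sizes[:r]) for r in range(n_ranks)]
    return [(s, s + sz) for s, sz in zip(starts, sizes)]

def halo_exchange(subdomains):
    """Fill one ghost cell on each side of every subdomain from its
    neighbours' boundary values (periodic, as in the azimuthal direction
    of a cylindrical grid). In the real solver this is an MPI exchange."""
    n = len(subdomains)
    for r, sub in enumerate(subdomains):
        left = subdomains[(r - 1) % n]   # periodic left neighbour
        right = subdomains[(r + 1) % n]  # periodic right neighbour
        sub["ghost_lo"] = left["data"][-1]   # last interior value of left rank
        sub["ghost_hi"] = right["data"][0]   # first interior value of right rank

if __name__ == "__main__":
    slabs = decompose(10, 4)
    subs = [{"data": list(range(lo, hi))} for lo, hi in slabs]
    halo_exchange(subs)
    print(slabs)                                     # [(0, 3), (3, 6), (6, 8), (8, 10)]
    print(subs[0]["ghost_lo"], subs[0]["ghost_hi"])  # 9 3
```

In a distributed-memory run each rank would hold only its own slab and obtain the ghost values through point-to-point MPI calls (e.g. `MPI_Sendrecv`) rather than by indexing into a shared list; the decomposition logic, however, is the same.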
Istituto di iNgegneria del Mare - INM (ex INSEAN)
Final project report
CFD
LES
Immersed boundaries
turbulent flows
Files in this product:

File: prod_463697-doc_181727.pdf (open access)
Description: Final report PRACE Preparatory Access Type A #2010PA6132
Size: 556.77 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14243/443799