ESS Project FY95 Annual Report: Applications Software

Convective Turbulence and Mixing in Astrophysics

Objective: To perform state-of-the-art, large-scale simulations of astrophysical turbulence using next-generation scalable parallel machines.

Approach: We have developed different families of portable and scalable codes for multidimensional hydrodynamics and magnetohydrodynamics (MHD). We have used monitoring tools from Argonne National Laboratory (ANL) and the University of Pavia (Italy) to assess and optimize the parallel scalability and performance of the various codes on different architectures. We have used these codes to conduct studies of turbulent processes and magnetic field generation (dynamo action) in astrophysics.

Accomplishments:

APPLICATIONS
We developed new application codes for Multiple Instruction Multiple Data (MIMD) architectures, including: (1) 2D and 3D codes for compressible hydrodynamics based on explicit higher-order Godunov methods and on hybrid schemes (i.e., mixed finite-difference and pseudospectral methods); (2) 2D and 3D general-purpose elliptic solvers based on parallel multigrid methods, which have been used for the implicit treatment of nonlinear thermal conduction in plasmas and for the treatment of the self-gravity of plasmas; and (3) 2D and 3D fully parallel Fast Fourier Transforms (FFTs) for spectral and pseudospectral hydrodynamics and MHD codes. The codes have been written using special-purpose libraries, chiefly the PETSc libraries developed at ANL (W. Gropp and B. Smith), to ensure maximal portability across architectures. At the same time, experience gained during code development has helped to assess the effectiveness of the PETSc libraries as tools for rapid and efficient scientific programming.
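As a toy illustration of the explicit higher-order Godunov methods in family (1), the C sketch below advances the scalar advection equation using minmod-limited linear reconstruction, an upwind interface (Riemann) solution, and a conservative flux difference. It is our own minimal example under those assumptions, not the project's production hydrodynamics code; all names and parameters are illustrative.

/* Toy second-order (MUSCL-type) Godunov scheme for scalar advection,
 * u_t + a u_x = 0, with periodic boundaries and a minmod limiter.
 * Illustrative only: not the project's production code.            */
#include <stdio.h>

#define N 200

static double minmod(double p, double q)
{
    if (p * q <= 0.0) return 0.0;              /* opposite signs: flatten */
    return (p > 0.0) ? (p < q ? p : q) : (p > q ? p : q);
}

int main(void)
{
    double u[N], unew[N], flux[N + 1];
    double a = 1.0, dx = 1.0 / N, dt = 0.4 * dx;   /* CFL number 0.4 */
    double nu = a * dt / dx;
    int i, step;

    for (i = 0; i < N; i++)                    /* square-pulse initial data */
        u[i] = (i > N / 4 && i < N / 2) ? 1.0 : 0.0;

    for (step = 0; step < 100; step++) {
        for (i = 0; i <= N; i++) {             /* flux[i] lives at x_{i-1/2} */
            int im  = (i - 1 + N) % N, ip  = i % N;
            int imm = (i - 2 + N) % N, ipp = (i + 1) % N;
            /* minmod-limited slopes in the two cells flanking the face */
            double sL = minmod(u[im] - u[imm], u[ip] - u[im]);
            double sR = minmod(u[ip] - u[im], u[ipp] - u[ip]);
            /* upwind interface value with time-centered reconstruction */
            double uface = (a > 0.0) ? u[im] + 0.5 * (1.0 - nu) * sL
                                     : u[ip] - 0.5 * (1.0 + nu) * sR;
            flux[i] = a * uface;
        }
        for (i = 0; i < N; i++)                /* conservative update */
            unew[i] = u[i] - dt / dx * (flux[i + 1] - flux[i]);
        for (i = 0; i < N; i++) u[i] = unew[i];
    }
    for (i = 0; i < N; i++) printf("%g %g\n", i * dx, u[i]);
    return 0;
}

The same reconstruct/solve/difference pattern carries over to the compressible Euler equations, where the reconstructed interface states feed a nonlinear Riemann solver.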

Special programming techniques have been developed to ensure code modularity and rapid reuse of software components across applications. While conventional languages (C and F77) and libraries have been used for portability and computational efficiency, simple concepts from object-oriented programming have been introduced to develop code interfaces and functionality (a sketch of this style follows below). Considerable effort has been devoted to equipping the application codes with efficient I/O capabilities, since I/O is currently the major obstacle to performing large-scale simulations on parallel machines, owing to present hardware limitations. We have developed parallel I/O routines for the IBM SP systems at ANL that allow direct access to the UniTree mass storage system. We have also tested more general-purpose (but less efficient) portable I/O routines from the Chameleon library. Machines used in the development of the above codes include the IBM SP systems, the Intel Touchstone Delta and Paragon, Kendall Square Research systems, and the CRAY Y-MP and CRAY T3D.
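As one common way to realize such object-oriented concepts in C, the hypothetical sketch below packages a solver behind a struct of data and function pointers, so that driver code can swap backends (say, a multigrid versus an FFT-based elliptic solver) without modification. All names and the stub backend are our own illustration, not taken from the project codes.

/* Hypothetical "object-oriented in C" component interface: data plus
 * function pointers.  Drivers see only the interface, never a backend. */
#include <stdio.h>

typedef struct EllipticSolver {
    const char *name;
    void *data;                            /* implementation state      */
    int  (*setup)(struct EllipticSolver *s, int nx, int ny);
    int  (*solve)(struct EllipticSolver *s, const double *rhs, double *u);
    void (*destroy)(struct EllipticSolver *s);
} EllipticSolver;

/* A stub "multigrid" backend; a real one would hold grid hierarchies. */
static int mg_setup(EllipticSolver *s, int nx, int ny)
{
    printf("%s: setup on a %d x %d grid\n", s->name, nx, ny);
    return 0;
}
static int mg_solve(EllipticSolver *s, const double *rhs, double *u)
{
    (void)rhs; (void)u;
    printf("%s: V-cycles would run here\n", s->name);
    return 0;
}
static void mg_destroy(EllipticSolver *s) { (void)s; }

static EllipticSolver multigrid_solver = {
    "multigrid", NULL, mg_setup, mg_solve, mg_destroy
};

int main(void)
{
    EllipticSolver *s = &multigrid_solver; /* could equally be an FFT backend */
    double rhs[16] = {0.0}, u[16] = {0.0};

    s->setup(s, 4, 4);
    s->solve(s, rhs, u);
    s->destroy(s);
    return 0;
}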

PERFORMANCE ANALYSIS
We have collected experimental performance measurements of our new codes in order to: (1) evaluate parallel scalability and single-processor computational efficiency; (2) identify and understand the potential for code improvement on specific architectures; and (3) evaluate machines. We have used monitoring tools from ANL and from the University of Pavia to instrument the codes and collect timing information for their various components (subroutines, communications, etc.); a sketch of this style of instrumentation follows below. This process has proven very valuable for understanding how details of machine architecture (e.g., the topology of the interconnection network) affect the efficiency of parallel algorithms. Such effects are usually not captured by simple theoretical performance models and can lead to performance considerably lower than expected.
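The sketch below conveys the basic pattern behind such instrumentation, using MPI wall-clock timers to attribute time to compute and communication phases. The actual ANL and Pavia monitoring tools are far more capable; the section names and tic/toc helpers here are purely our own illustration.

/* Minimal timing instrumentation: accumulate wall-clock seconds per
 * code section, then report.  Requires an MPI installation.          */
#include <mpi.h>
#include <stdio.h>

enum { T_COMPUTE, T_COMM, T_NSECTIONS };
static double t_acc[T_NSECTIONS];      /* accumulated seconds per section */
static double t_mark;

static void tic(void)        { t_mark = MPI_Wtime(); }
static void toc(int section) { t_acc[section] += MPI_Wtime() - t_mark; }

int main(int argc, char **argv)
{
    int rank, i;
    double local = 0.0, global = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    tic();                               /* time a compute phase        */
    for (i = 0; i < 1000000; i++) local += 1e-6;
    toc(T_COMPUTE);

    tic();                               /* time a communication phase  */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    toc(T_COMM);

    if (rank == 0)
        printf("compute %.6f s, comm %.6f s\n",
               t_acc[T_COMPUTE], t_acc[T_COMM]);
    MPI_Finalize();
    return 0;
}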

SCIENTIFIC RESEARCH
We have used our new codes for novel scientific calculations, including: (1) compressible penetrative convection in Sun-like stars; (2) core convection within A stars; (3) compressible turbulent convection constrained by rotation; (4) evolution of magnetic fields in turbulent conducting fluids (the "dynamo" effect); and (5) gravitational collapse of star-forming clouds. Typical calculations have been performed on grids of 1024 x 512 zones in 2D and 128^3 to 256^3 zones in 3D. While the codes demonstrably scale to larger sizes, "production" runs at those sizes remain difficult because of the I/O performance limitations noted above.

Significance: We have developed the first generation of portable application codes for turbulent mixing problems on massively parallel architectures, representing a broad range of techniques for solving hydrodynamic and MHD problems. These codes achieve the high performance required to study new frontier problems in astrophysical fluid dynamics. The basic parallel algorithms developed (e.g., the FFTs and the multigrid methods) are of very general use, and they will facilitate the migration of a wide variety of scientific applications to scalable parallel machines. Additionally, we have accumulated considerable expertise in code performance measurement and machine evaluation. Such expertise will be crucial for the development of next-generation high-performance architectures and applications.

Status/Plans: As part of the final phase of the project we plan to: (1) complete an extensive study and comparison of different parallel FFT implementations; (2) continue work on parallel I/O; (3) complete reports on parallel performance experiments; and (4) continue our series of scientific calculations.


Jeans Instability
Simulation of a self-gravitating gas undergoing gravitational collapse due to the Jeans instability. The domain is triply periodic with an initially uniform background. The figure shows a rendering of the gas density at a given instant during the collapse; yellow regions represent high-density condensations of mass. Grid size: 128 x 128 x 128. Machine: IBM SP-1 at Argonne National Laboratory. Code: PPM + linear multigrid. Author: Andrea Malagoli (University of Chicago)
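For reference, the relevant scale here is the standard Jeans length: in a uniform medium of density \rho_0 with sound speed c_s, perturbations larger than

\lambda_J = c_s \sqrt{\pi / (G \rho_0)}

cannot be supported by pressure and collapse under self-gravity, producing condensations such as the yellow regions in the figure.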

Turbulent Diffusion of Magnetic Fields
Simulation of magnetic field diffusion in a triply periodic magnetized fluid. This study is aimed at understanding the process of magnetic field generation in stars, known as the "stellar dynamo" effect. The figure shows a rendering of the flow enstrophy (the squared vorticity) at a given instant in time. Grid size: 128 x 128 x 128. Machine: IBM SP-1 at Argonne National Laboratory. Code: Spectral MHD using 3D real-to-complex parallel FFTs. Authors: Fausto Cattaneo and Anshu Dubey (University of Chicago)
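In both this and the following figure, the quantity rendered is the enstrophy, taken here as the squared magnitude of the vorticity (conventions differ by a factor of 1/2, which we omit to match the captions):

\varepsilon = |\boldsymbol{\omega}|^{2}, \qquad \boldsymbol{\omega} = \nabla \times \mathbf{u}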

Turbulent Convection with Rotation
Simulation of turbulent convection with rotation in a compressible, stratified gas at high Rayleigh number. It is a local-volume model in Cartesian geometry. The figure shows a rendering of the gas enstrophy (the squared vorticity); brighter colors represent concentrations of intense enstrophy, which are associated with strong down-flowing plumes. Grid size: 128 x 128 x 192. Machine: CRAY C90 at the Pittsburgh Supercomputing Center. Code: hybrid pseudospectral and finite-difference. Authors: Nic Brummell (University of Colorado) and Neal Hurlburt (Lockheed).

Points of Contact:

Robert Rosner
Andrea Malagoli
University of Chicago
312-702-0560
312-702-0624
URL: Take a trip to Rosnerville

