The NASA ESS/HPCC Cooperative Agreement

Grand Challenge Applications and Enabling Scalable Computing Testbed(s)
in Support of High Performance Computing

The University of Chicago Grand Challenge Team

Milestone 5: Annual Report 1997

Date: 19 September 1997

Title of the Agreement: Turbulent Convection and Dynamos in Stars
Principal Investigator: Andrea Malagoli
LASR 132b
The University of Chicago
933 E. 56th St.
Chicago, IL 60610
e-mail: a-malagoli@uchicago.edu
Agreement Number: NCCS5-151
Period Covered: 09/01/96 -- 09/19/97

Picture of the Year:
Model of Dynamo Action in a Turbulent Magnetized Fluid

This picture shows a volumetric rendering of the magnetic enstrophy (the square of the magnetic field) in a simulation of dynamo action in a turbulent, incompressible magnetized fluid.

The bright colors represent regions of intense magnetic enstrophy. This simulation was carried out with the MPS code (one of our three Milestone codes) by Fausto Cattaneo and collaborators [see Cattaneo and Hughes 1996].

This work shows how the turbulent α-effect, which is considered the mechanism for fast dynamo action, can be suppressed by the dynamic interaction of magnetic fields and turbulent fluid motions.

Objective:

Turbulent convection and dynamo activity are the most challenging unresolved problems in the physics of rotating late-type stars like the Sun, where they play a fundamental role in such processes as the transport of energy, the mixing of specific angular momentum, the generation of magnetic fields, and basic stellar variability. We combine advanced scientific, mathematical and computational analyses with state-of-the-art computational technology to attack these problems by means of very high resolution hydrodynamic and magnetohydrodynamic numerical simulations. Such simulations serve the further purpose of fostering technological advancement in high performance computer architectures as a means of solving fundamental scientific issues.

Approach:

The Investigator Team involves an active collaboration of three major research groups focusing on large-scale numerical simulations of turbulent convection and dynamo processes in rotating stars like the Sun. Such modeling efforts are unique in their combined demands for high performance computers, fast access to data storage devices and networks, and scientific data analysis including visualization. Significant research progress requires a multi-disciplinary team involving theoretical astrophysicists and applied mathematicians with broad experience in turbulence simulations, and computer scientists specializing in all aspects of parallel computing and visualization. Such diverse skills will enable the team to devise and utilize the technological resources and computational algorithms which are essential for breakthroughs in this grand challenge problem area. The participants have established joint working relationships in such HPC efforts and keen interests in developing methods to utilize the new scalable parallel machines efficiently for the high demands of turbulence simulations.

The University of Chicago and Argonne National Laboratory participation is led by Andrea Malagoli (overall PI, computational magnetohydrodynamics and parallel algorithms), joined by Co-Is Fausto Cattaneo (convection and dynamo simulations and parallel algorithms), Anshu Dubey (parallel algorithms), Robert Rosner (nonlinear MHD processes), William Gropp (scalable parallel libraries and tools), and Rick Stevens (advanced parallel systems and visualization). The University of Colorado Co-Is are Nicholas Brummell (rotating turbulence modeling and algorithms), Thomas Clune (code optimization), Mark Rast (compressible convection and radiative transport) and Juri Toomre (computational turbulence and convection). The University of Minnesota Co-Is are Wenlong Dai (computational MHD), David Porter (computational turbulence and visualization) and Paul Woodward (fluid algorithms and scientific visualization).

During the past year we have focused mainly on porting the benchmark codes to the Cray T3D testbed at Goddard Space Flight Center. The benchmark application codes have both successfully met the first Performance Milestone of this CAN: achieving 10 GigaFlops sustained performance on the Cray T3D testbed. Two benchmark application codes were listed for this milestone: the PPMC code, a code to study compressible convection based on the Piecewise Parabolic Method (PPM); and MPS, a pseudospectral code that solves the equations of incompressible magnetohydrodynamics for studies of dynamo processes.

The successful achievement of the Performance Milestones has relied crucially on the effective collaboration links established by members of this Team both within the Team, and with members of the Science Team at NASA GSFC, and at SGI/Cray, the testbed vendor. In particular, the combined efforts of Anshu Dubey and Tom Clune resolved perhaps the biggest obstacle to the achievement of the Performance Milestones: the development of highly optimized multidimensional Fast Fourier Transforms (FFTs) based on parallel transposes. Tom Clune, who was a member of this Team during the initial phases of the CAN, has subsequently been hired as one of two in-house support scientists representing the testbed vendor SGI/Cray. He continues to provide this Team with invaluable support and help in achieving the CAN's Performance Milestones. David Porter and Wenlong Dai at the University of Minnesota have played a major role in the optimization of the PPMC code for studies of solar convection based on the Piecewise Parabolic Method.
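The idea behind a transpose-based multidimensional FFT is that each processor applies ordinary 1-D FFTs along the axis it holds contiguously in local memory, a global transpose (an all-to-all exchange) makes the other axis contiguous, and the 1-D FFTs are applied again. A minimal serial sketch in Python/NumPy (illustrative only; it is not the Team's Fortran implementation for the T3D/T3E, where the transpose is a genuine interprocessor communication step):

```python
import numpy as np

def fft2_via_transpose(a):
    """2-D FFT computed as 1-D row FFTs plus a transpose: the pattern
    used by transpose-based parallel FFTs. In the parallel code each
    processor FFTs only the rows it owns, and the transpose below
    stands in for the all-to-all data exchange."""
    a = np.fft.fft(a, axis=1)    # 1-D FFTs along contiguous rows
    a = a.T                      # "parallel transpose" (all-to-all)
    a = np.fft.fft(a, axis=1)    # 1-D FFTs along the former columns
    return a.T                   # restore the original data layout

# The result matches a direct 2-D FFT.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
assert np.allclose(fft2_via_transpose(x), np.fft.fft2(x))
```

The same pattern extends to 3-D by transposing twice, once per additional axis.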

All the details of the technical and scientific achievements of this Team are posted on the project's Web site: http://astro.uchicago.edu/Computing/HPCC.

Scientific Accomplishments:

The code suite is being used to study three-dimensional models of turbulent compressible convection, including effects of rotation, penetration, and magnetic fields, as well as magnetic field evolution by dynamo action.

Local region, high resolution numerical investigations of highly turbulent compressible convection have led to new understandings of the organization of convective motions. These high Reynolds number simulations have revealed that a laminar surface cellular network (resembling solar granulation) enforced by the density stratification actually disguises a turbulent interior. The majority of the domain below the surface layers is characterized by small-scale turbulent motions, punctuated by temporally and spatially coherent strong downflowing plumes. These coherent structures emanate from the junctions of the surface network and span the multiple scale-heights of the box. Surprisingly, the energy transport across the domain is mostly achieved by the turbulent motions and not by the coherent structures. This discovery may shed some light on how very local hypotheses such as mixing-length theories can achieve tolerably accurate results. Motivated by these prominent differences in the energy transport properties of laminar and turbulent compressible convection, we have examined such convection under the influence of rotation to investigate the turbulent transport of (angular) momentum. This model attempts to resolve the discrepancies between the results of previous laminar numerical calculations and the observations of helioseismology pertaining to the differential rotation of our Sun. The earlier numerical simulations carried out in a spherical shell geometry predicted that angular velocity would be roughly constant on cylinders aligned with the rotation axis.

The previous calculations were all performed in a Cartesian domain, periodic in the two horizontal directions but confined between two stress-free impenetrable surfaces in the vertical. With the emergence of strong coherent structures in the convection, and the resulting importance of small-scale turbulence generated from their shear instabilities, the lower boundary condition is certainly overly constraining. We have turned to new simulations of penetrative convection to assess such effects, dealing with a convectively unstable layer of fluid overlying a stable layer, with such a configuration achieved by a piecewise thermal conductivity profile. Whilst still not directly comparable to the bottom of the solar convection zone, at least this model allows motions to penetrate beyond the unstable layer. This model reveals striking differences from the more constrained domains. The convection is much smoother in the convecting region, with stronger, more coherent structures. The energy flux budget is different from that of the non-penetrative domain, with the coherent structures playing a more important role. The results only partially agree with theoretical models of penetration: there appears to be no adiabatic overshoot, only a thermal adjustment region of penetration. Active questions are to understand the mixing properties of the penetrating plumes and to determine whether rotation enhances or retards these overshooting motions.

Technology Accomplishments:
  • The suite of benchmark scientific applications developed by this project has been successfully ported to the Cray T3D, achieving the 10 GigaFlops performance milestone expected for the past year, and a documented version of the codes has been made publicly available. All the details relevant to this achievement are publicly available from the project's Web site: http://astro.uchicago.edu/Computing/HPCC [check the Milestones Table].
  • We have established a collaboration between the Computational Astrophysics Group at the University of Chicago and members of the Futures Laboratory in the Mathematics and Computer Science Division (MCS) at Argonne. The purpose of the collaboration is to develop advanced tools for the remote visualization of three-dimensional simulation data. Data sets from supercomputer simulations developed under this project are being used as sample data to test the new visualization tools. One such tool, a volume rendering tool based on virtual reality environments, was previously demonstrated by Randy Hudson and Andrea Malagoli as one of the I-way demos at Supercomputing '95. The tool has since been substantially upgraded and will soon be released for public domain use [for more information, contact Randy Hudson].
  • We are assembling the first demonstration of a distributed scientific application across two continents! See below under Special Events.
  • The Computational Astrophysics Group at the U. of Chicago is now connected to the Mathematics and Computer Science Division of Argonne via an ATM link. The link is part of the larger Metropolitan Research and Education Network (MREN), directed by Joe Mambretti at the UofC. Via MREN, we also get access to the vBNS network, which in turn allows the three university sites involved in this project (Chicago, Colorado, Minnesota) to exchange data and information very efficiently [e.g., video conferencing between remote sites is now available].
  • An NSF-funded Terabyte tape storage facility is now available to the project. This facility will be expanded to a total capacity of more than 30 TeraBytes in the near future. The system is now the main repository for the simulation data produced by the Computational Astrophysics Group at the U. of Chicago.
  • Within the Laboratory for Computational Dynamics (LCD) at JILA, U. of Colorado, we have implemented high-capacity tape storage based on the Ampex DD-2 tape system, with 320 GB capacity per tape and 20 MB/s transfer rates, thereby enabling the transfer and capture of the very large data sets of multi-TB size being produced by the turbulence simulations. LCD is now connected via ATM at OC-3 rates to the vBNS network, soon to be upgraded to OC-12; our 10-processor R10K-based SGI Power Onyx visualization machine is so connected, along with distributed Indy and O2 graphics workstations both in LCD and in some JILA offices. This enables both the transfer of primary data from the simulations on the Cray T3E, and effective experiments in remote visualization.
Status/Plans:
  • The team is now focusing on the upcoming (15 November 1997) performance milestone, which requires achieving at least 50 GigaFlops sustained on the Cray T3E.
  • Of the three codes promised in this milestone, two, the HPS and MPS codes, make intensive use of FFTs. Firstly, we have developed a special-purpose FFT for the T3E which should help the core of both the HPS and the MPS codes. For vectors of length n equal to odd powers of 2, the complex-to-complex FFT runs at over 200 MF per node -- n = 512 gives 230 MF using the Cray formula flops = 5*n*log2(n), or 275 MF counted using PAT. We have found that computing some of the trigonometric constants during the run is faster than using a lookup table, so the number of flops is actually higher. This new FFT does not do the rearrangement (bit reversal) of data that is traditionally done in an FFT. For spectral codes, this step is not essential so long as the corresponding inverse FFT also exists. (The FFT is no longer its own inverse for this reason.) With more work we should be able to produce well load-balanced real-to-complex routines.
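The flop accounting above can be made concrete. The sketch below (Python; the per-transform timing is an assumed value for illustration, not a T3E measurement) shows how the Cray formula converts a measured transform time into a sustained MFlop rate:

```python
import math

def fft_flops(n):
    """Nominal operation count of a length-n complex FFT, using the
    Cray formula flops = 5*n*log2(n)."""
    return 5 * n * math.log2(n)

def sustained_mflops(n, seconds):
    """MFlop/s credited for one length-n transform taking `seconds`."""
    return fft_flops(n) / seconds / 1e6

# A length-512 complex FFT is credited with 5*512*9 = 23040 flops,
# so the reported 230 MF corresponds to roughly 100 microseconds
# per transform (timing assumed, for illustration only).
assert fft_flops(512) == 23040
print(round(sustained_mflops(512, 100.2e-6)))  # -> 230
```

Note that because the nominal count is fixed by n, doing extra work per transform (such as recomputing trigonometric constants on the fly) raises the true flop rate above the credited one, as remarked in the bullet above.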
  • We are also investigating a new data partitioning strategy for the HPS code. Currently, the code partitions the data by sending a stack of horizontal planes to individual processors. This is essentially a 2D data partition. It restricts the flexibility of the code, since a large number of planes in the vertical is then required to use a large number of processors efficiently. We are seeking to produce a version of HPS with a 3D data partition, in which the planes are subdivided into lines (or pencils). Such a partition would enhance the flexibility of the code and remove the inefficiency associated with the restriction to always utilizing whole planes. A finer data partition also allows us to attack the major problem of this class of codes -- cache management. With smaller data parcels, we are considering a reworking of the code such that the work is nested in the reverse manner from before. This approach boils down to doing a large amount of work on a small amount of data rather than a small amount of work on a larger amount of data. We anticipate that this could speed up the code considerably. With such a code, we plan to continue our investigations of penetrative convection with and without rotation and magnetic fields.
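The difference between the two partitions can be sketched schematically. In the Python sketch below (function names and the 16 x 16 processor grid are illustrative, not taken from the HPS code), a slab partition of an N^3 grid can use at most N processors, while a pencil partition can use up to N^2:

```python
def slab_owners(nz, nprocs):
    """2D ('slab') partition: each processor owns a stack of whole
    horizontal planes, so at most nz processors can be used."""
    assert nprocs <= nz, "slab partition cannot use more procs than planes"
    return {p: [k for k in range(nz) if k % nprocs == p]
            for p in range(nprocs)}

def pencil_owner(j, k, py, pz, ny, nz):
    """3D ('pencil') partition: each plane is subdivided into lines
    along x, distributed over a py x pz processor grid, so up to
    ny*nz processors can be used and each working set is small."""
    return (j * py // ny) * pz + (k * pz // nz)

# For a 256^3 grid: slabs cap parallelism at 256 processors,
# while a 16 x 16 pencil layout already engages 256 processors
# with far smaller (cache-friendlier) data parcels each.
n = 256
print(len(slab_owners(n, 128)))   # 128 slab owners
owners = {pencil_owner(j, k, 16, 16, n, n)
          for j in range(n) for k in range(n)}
print(len(owners))                # 256 distinct pencil owners
```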
  • The MHD-PPM code has been developed based on a new scheme proposed by W. Dai and P. Woodward (U. of Minnesota). This scheme introduces a new technique to preserve the solenoidality of the magnetic field down to machine accuracy [see Dai and Woodward 1998]. The code is also being tuned for performance on the Cray T3E, in order to meet our second Performance Milestone [at least 50 GigaFlops sustained on the Cray T3E at Goddard].
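The general principle behind machine-accuracy solenoidality can be illustrated with a small sketch (this is not the Dai and Woodward scheme itself, just the standard idea it builds on): if the face-centered magnetic field is defined as a discrete curl of a potential, the matching discrete divergence cancels term by term, leaving only round-off.

```python
import numpy as np

def B_from_potential(Az, dx, dy):
    """Face-centered B = curl(Az z-hat) on a periodic 2D staggered
    grid (a 2D stand-in for the 3D MHD case)."""
    Bx = (np.roll(Az, -1, axis=1) - Az) / dy    # Bx on x-faces
    By = -(np.roll(Az, -1, axis=0) - Az) / dx   # By on y-faces
    return Bx, By

def discrete_div(Bx, By, dx, dy):
    """The discrete divergence that matches the discrete curl above;
    its terms cancel identically, so div B vanishes to round-off."""
    return ((np.roll(Bx, -1, axis=0) - Bx) / dx
            + (np.roll(By, -1, axis=1) - By) / dy)

rng = np.random.default_rng(1)
Az = rng.standard_normal((64, 64))              # arbitrary potential
Bx, By = B_from_potential(Az, dx=0.1, dy=0.1)
# div B is zero up to floating-point round-off:
print(np.abs(discrete_div(Bx, By, 0.1, 0.1)).max())
```

The key point, which the Dai and Woodward approach shares, is that the divergence-free condition is enforced by the discretization itself rather than by a separate cleaning step.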
  • A proposal is being assembled to obtain access to NASA's National Research and Education Network (NREN) from the three main sites of this project (U. of Chicago, U. of Colorado, and U. of Minnesota). The link will allow the three sites to connect to the Testbed and visualization facilities at Goddard using ATM. This will improve considerably the usability of the Testbed for scientific simulations, and for transferring simulation data from Goddard to the university sites for scientific data analysis and visualization.
Special Events:
  • In July 1997 the University of Chicago was awarded one of five ASCI Alliances projects by the Department of Energy (DoE). This project will involve the formation of the Center for Thermonuclear Flashes in Astrophysics, whose main purpose is to build advanced numerical models of Type Ia Supernovae, Novae, and X-ray Flashes.

    The chances of success in the competition for the ASCI Alliances project were greatly enhanced by this NASA CAN, which is the main source of funding for the Computational Astrophysics Group at the University of Chicago.

  • We are assembling the first demonstration of a distributed scientific application across two continents! This demonstration is the result of a collaboration between the PI of this project and Dr. Carlo Paccagnini from Alenia Spazio, in Torino, Italy. The demonstration involves setting up an ATM link between the University of Chicago (USA) and Alenia Spazio (Italy). This link is made possible by generous support from several network providers across the USA, Canada, Germany, and Italy. For more details, contact Andrea Malagoli.

The following Postdocs and Staff members were funded in part or entirely by this project:

Senior Researchers:

  • Andrea Malagoli, University of Chicago
  • Nicholas Brummell, University of Colorado
  • Fausto Cattaneo, University of Chicago
  • David Porter, University of Minnesota

Postdocs:

  • Anshu Dubey, University of Chicago
  • Tom Clune, University of Colorado
  • Wenlong Dai, University of Minnesota
  • Julian Elliott, University of Colorado
  • Wayne Peterson, University of Minnesota
  • Marc Rast, University of Colorado
  • Steve Tobias, University of Colorado

Graduate Students:

  • Marc DeRosa, University of Colorado
  • Aaron Hoff, University of Colorado
  • Marc Miesch, University of Colorado
  • Natalia Vladimirova, University of Chicago
Refereed Publications (with abstracts):

Turbulent Compressible Convection with Rotation: I. Flow Structure and Evolution -- Brummell, N.H., Hurlburt, N.E., & Toomre, J., Astrophys. J., 473, 494 (1996)

ABSTRACT - The effects of Coriolis forces on compressible convection are studied using three-dimensional numerical simulations carried out within a local modified f-plane model. The physics is simplified by considering a perfect gas occupying a rectilinear domain placed tangentially to a rotating sphere at various latitudes, through which a destabilizing heat flux is driven. The resulting convection is considered for a range of Rayleigh, Taylor and Prandtl (and thus Rossby) numbers, evaluating conditions where the influence of rotation is both weak and strong. Given the computational demands of these high-resolution simulations, the parameter space is explored sparsely to ascertain the differences between laminar and turbulent rotating convection. The first paper in this series examines the effects of rotation on the flow structure within the convection, its evolution and some consequences for mixing. Subsequent papers consider the large-scale mean shear flows that are generated by the convection, and the effects of rotation on the convective energetics and transport properties. It is found here that the structure of rotating turbulent convection is similar to earlier nonrotating studies, with a laminar, cellular surface network disguising a fully turbulent interior punctuated by vertically coherent structures. However, the temporal signature of the surface flows is modified by inertial motions to yield new cellular evolution patterns and an overall increase in the mobility of the network. The turbulent convection contains vortex tubes of many scales, incl

Turbulent Compressible Convection with Rotation: II. Mean Flows and Differential Rotation -- Brummell, N.H., Hurlburt, N.E., & Toomre, J., Astrophys. J., in press (1997).

ABSTRACT - The effects of rotation on turbulent compressible convection within stellar envelopes are studied through three-dimensional numerical simulations conducted within a local f-plane model. This work seeks to understand the types of differential rotation that can be established in convective envelopes of stars like the sun, for which recent helioseismic observations suggest an angular velocity profile with depth and latitude at variance with many theoretical predictions. This paper analyzes the mechanisms that are responsible for the mean (horizontally-averaged) zonal and meridional flows that are produced by convection influenced by Coriolis forces. The compressible convection is considered for a range of Rayleigh, Taylor and Prandtl (and thus Rossby) numbers encompassing both laminar and turbulent flow conditions under weak and strong rotational constraints. When the nonlinearities are moderate, the effects of rotation on the resulting laminar cellular convection leads to distinctive tilts of the cell boundaries away from the vertical. These yield correlations between vertical and horizontal motions which generate Reynolds stresses that can drive mean flows, interpretable as differential rotation and meridional circulations. Under more vigorous forcing, the resulting turbulent convection involves complicated and contorted fluid particle trajectories, with few clear correlations between vertical and horizontal motions, punctuated by an evolving and intricate downflow network that can extend over much of the depth of the layer. Within such networks

On the Divergence-free Condition and Conservation Laws in Numerical Simulations for Supersonic Magnetohydrodynamical Flows -- Dai, W., & Woodward, P., to be published in ApJ, 493, (1998).

ABSTRACT - An approach to exactly maintain the eight conservation laws and the divergence-free condition of the magnetic field is proposed in this paper for numerical simulations of multi-dimensional magnetohydrodynamical (MHD) equations. The approach is simple and may be easily applied to both dimensionally split and unsplit Godunov schemes for supersonic MHD flows. The numerical schemes based on the approach are second order accurate in both space and time if the original Godunov schemes are. As an example of such schemes, a scheme based on the approach and an approximate MHD Riemann solver is presented. The Riemann solver is simple and is used to approximately calculate the time-averaged flux. The correctness, accuracy and robustness of the scheme are shown through numerical examples. A comparison of numerical solutions between the proposed scheme and a Godunov scheme without the divergence-free constraint is also presented.

Linear and Nonlinear Dynamo Action -- Brummell, N.H., Cattaneo, F., & Tobias, S., Phys. Rev. Lett., submitted (1997).

ABSTRACT - We consider the nonlinear behavior of a dynamo driven by an enforced time-dependent ABC flow. We study parameters such that the resultant kinematic flow is a fast dynamo. The results show a dependence on the forcing frequency Omega. For low Omega values, nonlinear dynamo behavior is observed, i.e., the magnetic field is kinematically amplified and nonlinearly saturated at a finite value which remains for all time. Surprisingly however, larger Omegas, which may actually have larger kinematic growth rates, saturate nonlinearly in a state that cannot sustain the magnetic field. Here, the finite amplitude influence of the Lorentz force acts to destabilize the initial ABC flow, creating an MHD state which is a nondynamo. The kinematically amplified magnetic field decays in this nonlinear state to a new hydrodynamic solution of the forced Navier-Stokes equations. This variety of nonlinear dynamic behavior raises questions about the definition of dynamo action, especially in the nonlinear regime.

Nonlinear Saturation of the Turbulent α-effect -- Cattaneo, F., & Hughes, D., Phys. Rev. E, 54, 4532 (1996).

ABSTRACT - We study the saturation of the turbulent α-effect in the nonlinear regime. A numerical experiment is constructed based on the full nonlinear MHD equations that allows the α-effect to be measured for different values of the mean magnetic field. The object is to distinguish between two possible theories of nonlinear saturation. It is found that the results are in close agreement with the theories that predict strong suppression and are incompatible with those that predict that the turbulent α-effect persists up to mean fields of order the equipartition energy.

Suppression of Chaos in a Simplified Nonlinear Dynamo Model -- Kim, E., Cattaneo, F., & Hughes, D., Phys. Rev. Lett., 76, 2057 (1996).

ABSTRACT - Many astrophysical magnetic fields are generated by the motions of electrically conducting fluids --- dynamo action. Studies of these processes have followed two distinct routes. The kinematic approach is concerned solely with the growth of magnetic fields for prescribed flows; physically it can only be justified for very weak fields. The dynamic approach includes self--consistently the back--reaction of the magnetic field on the velocity and, therefore, can follow the evolution of the magnetic field into the nonlinear regime. Recent kinematic (i.e. linear) studies have concentrated on chaotic flows since only these can lead to field amplification in the astrophysical limit of high electrical conductivity---fast dynamos. However, the nonlinear development of these systems is largely unknown. Here we address this important issue by investigating a simplified model that allows the transition from the kinematic to the dynamical regimes to be studied in detail. We find that the magnetic field differs markedly in the two regimes; furthermore, the velocity is modified in a subtle but fundamental way such that its chaotic (i.e. exponentially stretching) properties are suppressed.

Kinetic Helicity, Magnetic Helicity and Fast Dynamo Action -- Hughes, D.W., Cattaneo, F., & Kim, E., Phys. Lett. A, 223, 167 (1996).

ABSTRACT - The influence of the flow helicity on kinematic fast dynamo action is considered. Three different flows are studied, possessing identical chaotic properties but very different distributions of helicity (maximal helicity, zero net helicity and zero helicity density). All three flows provide strong evidence of fast dynamo action, indicating that helicity is not a crucial feature of fast dynamo flows. Comparisons are made between the magnetic fields generated by the three flows and it is established how certain key quantities scale with the magnetic Reynolds number. In particular, it is shown that the relative magnetic helicity tends to zero as the magnetic Reynolds number tends to infinity.

The Solar Dynamo Problem -- Cattaneo, F., in Solar Convection, Oscillations and their Relationship, eds. F. Pijpers and Christensen-Dalsgaard, Kluwer, in press (1997).

ABSTRACT - The solar dynamo problem is reviewed in the light of recent developments in dynamo theory. We distinguish between the generation of magnetic fields on scales smaller than the velocity correlation length--small scale dynamo, and larger than the velocity correlation length--large scale dynamo. We argue that small scale dynamo action is likely to occur everywhere in the convection zone. The field thus generated however is disordered both in space and time. Large scale dynamo action on the other hand is responsible for the activity cycle and the large scale organization of the solar field. The existence of a large scale dynamo is related to the breaking of symmetries in the underlying field of turbulence.

Parallel Implementation of Pseudospectral MHD Code -- Dubey, A., Cattaneo, F., & Malagoli, A., in Proceedings of the 1997 Simulation Multiconference, April 6--10, 1997, Atlanta, Georgia.

ABSTRACT - As a part of the NASA High Performance Computing and Communication (HPCC) initiative, we have developed a highly efficient parallel code to solve the incompressible magnetohydrodynamics (MHD) equations in a three-dimensional periodic domain. It is based on the pseudo-spectral transform method and has been optimized to run efficiently on the CRAY T3D machine with 512 processors. It can achieve a sustained performance of 10.88 Gflops for a data size of 256^3.

Conference Presentations and Talks:

Turbulent Dynamics of the Sun -- Malagoli, A., invited talk at NASA Ames Research Center, Moffett Field, CA, (1997).

Linear and nonlinear dynamo action -- Brummell, N., at Nonlinear Dynamics of the Sun, Ortisei, Italy, July 1997.

Grand challenge simulations of turbulence in stars -- Toomre, J., at the National Science Foundation, Washington, DC, Feb 1997.

Solar convection zone dynamics -- Toomre, J., at Local Helioseismology, University of Cambridge, England, April 1997.

Coupling of turbulent convection and rotation -- Toomre, J., at Nonlinear Dynamics of the Sun, Ortisei, Italy, July 1997.

The joys of simulations of turbulent solar convection -- Toomre, J., Dept. of Physics, Stanford University, Aug 1997.

MultiMedia Presentations:
  • The project is well represented on the World Wide Web. Extensive information about the project, as well as presentations, scientific papers, codes (with documentation) and more, can be found on the project's Web site: http://astro.uchicago.edu/Computing/HPCC.
  • A general presentation of the project, based on PowerPoint slides, is available from the project's Web site.
  • Visualizations from this project have been included in the video tape Images of Earth and Space: The Role of Visualization in NASA Science, edited by Jarrett Cohen.
  • Simulation data from the project have been used in a demonstration of a tool for interactive volume rendering based on Virtual Reality. The tool has been developed by Randy Hudson at Argonne National Laboratory.


Prepared by Andrea Malagoli
Date: 19 September 1997