FCTMHD3D

A 3D Flux-Corrected Transport Code
for Ideal, Compressible Magnetohydrodynamics


ABSTRACT

FCTMHD3D is a computer program for solving three-dimensional, compressible magnetohydrodynamics (MHD) problems. It employs new flux-corrected transport algorithms to integrate the MHD equations in a conservative, finite-volume representation. The program has been optimized for high performance on the massively parallel Cray T3D computer system. The structure and input/output requirements of the program are detailed below.


INTRODUCTION

FCTMHD3D is a new three-dimensional magnetohydrodynamics model developed on the Cray T3D computer system. It extends to 3D the flux-corrected transport (FCT) algorithms we have used on a variety of platforms to study two-dimensional MHD phenomena. A conservative, finite-volume representation of the equations is adopted, and an explicit two-step Runge-Kutta scheme is used to advance the variables in time. The numerical developments and their application to selected problems are described in the references below. Scientific highlights of results obtained with FCTMHD3D's predecessor codes are available for browsing, as is the documentation of the new code's 10 GFlops sustained performance on the T3D. The code itself is available via anonymous ftp at lcp.nrl.navy.mil in the directory /pub/hpcc-ess, in the compressed tar file fctmhd3d.t3d.tar.Z.


STRUCTURE OF FCTMHD3D

FCTMHD3D is a multiple-subroutine package written in Fortran 77 that uses the Cray shared-memory (SHMEM) communications library to exchange data between the processors. The full package consists of four Fortran files containing the following subroutines (a schematic of the overall program flow is sketched after the subroutine listings):

fctmhd3d.f

fctmhd3d
Control the program flow and execution; open the formatted input and output files fctmhd3d.cnt, fctmhd3d.prt, and fctmhd3d.inst; read the cntrol input data group.
diagno
Perform diagnostics, evaluating the mass, momentum, and energy integrals.
dtset
Determine the timestep using the Courant condition.
inital
Initialize the variable arrays and set up the physical problem to be solved; read the start input data group.
rstart
Restart the program from unformatted datasets fctmhd3d.rst.xxx, where xxx is the processor number.
dumper
Write unformatted datasets fctmhd3d.rst.xxx for restarts or graphics.
trans
Perform the transport by solving the ideal MHD equations in conservative form with zero gravity; read the hydro input data group.
transcom
Perform the data exchange between processors prior to the MHD integration step.

shmem.f

shmem_bcast_real
Broadcast a real data array from a source processor to all processors.
shmem_bcast_integer
Broadcast an integer data array from a source processor to all processors.
shmem_bcast_logical
Broadcast a logical data array from a source processor to all processors.
shmem_reduce_real_sum
Reduce a sum of real data from all processors to a target processor.
shmem_reduce_integer_sum
Reduce a sum of integer data from all processors to a target processor.
shmem_reduce_real_max
Reduce a maximum of real data from all processors to a target processor.
shmem_reduce_real_min
Reduce a minimum of real data from all processors to a target processor.
shmem_allreduce_real_max
Reduce a maximum of real data from all processors to all processors.
shmem_sendrecv_fct
Send and receive data packets between processors for the FCT integrations.
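
These wrappers isolate the rest of the package from the underlying T3D communication primitives. As a rough illustration of their flavor only, the following is a minimal sketch of a broadcast built directly on the shared-memory primitives shmem_put and barrier. The processor number, processor count, and word count are passed in explicitly, the array is assumed to reside in a common block (so that it is remotely addressable), and the actual shmem_bcast_real in shmem.f may differ in name, argument list, and implementation.

      subroutine bcast_sketch (a, n, mype, isrc, npes)
c
c     Minimal sketch of a broadcast in the spirit of shmem_bcast_real,
c     built on the Cray T3D primitives shmem_put and barrier.  The
c     array a is assumed to reside in a common block so that it is
c     remotely addressable; n is its length in 64-bit words; mype is
c     this processor's number, isrc the source processor, and npes
c     the number of processors.  Illustrative only.
c
      integer n, mype, isrc, npes, ipe
      real    a(n)
c
c     make sure no processor is still using a before it is overwritten
      call barrier ()
c
c     the source processor writes its copy of a into every other PE
      if (mype .eq. isrc) then
         do 10 ipe = 0, npes-1
            if (ipe .ne. isrc) call shmem_put (a, a, n, ipe)
   10    continue
      end if
c
c     wait for the data to arrive everywhere
      call barrier ()
c
      return
      end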

lcpfct3.f

lcpfct3
Integrate the generalized continuity equation for a flow scalar.
lcpgrd3
Construct the necessary metric quantities for the lcpfct3 integrations.
lcpsrc3
Construct the source terms for the lcpfct3 integrations.
lcpcsm3
Evaluate the conservation sum for a flow scalar.

mhdfct3.f

mhdfct3
Integrate the generalized hydromagnetic equation for a 3D magnetic field.
mhdgrd3
Construct the necessary metric quantities for the mhdfct3 integrations.
mhdsrc3
Construct the source terms for the mhdfct3 integrations.
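
Taken together, these descriptions imply a main-program flow along the following lines. The sketch is schematic only: the routines are assumed to communicate through the common blocks listed below, so no argument lists are shown, and the loop control and restart branch are inferred from the input descriptions later in this guide rather than copied from fctmhd3d.f.

c     Schematic of the program flow implied by the subroutine
c     descriptions above; an illustration, not an excerpt from
c     fctmhd3d.f.
c
c     ... open the formatted files fctmhd3d.cnt, fctmhd3d.prt, and
c     fctmhd3d.inst, and read the cntrol input data group ...
c
c     set up the problem and read the start group (rstart replaces
c     the cold start when restarting from an unformatted dataset)
      call inital
c
      do 10 istep = minstp, maxstp
c        timestep from the Courant condition
         call dtset
c        exchange boundary data between the processors
         call transcom
c        advance the ideal MHD equations one step (trans reads the
c        hydro input data group)
         call trans
c        diagnostics and restart dumps, at the requested intervals
         call diagno
         call dumper
         if (time .ge. timef) go to 20
   10 continue
   20 continue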

The package also contains thirteen data files, which are incorporated into the Fortran subroutines through include statements. Each data file consists of datatype declarations and a common block declaration (a schematic example follows the list):

comms.h
Common block comms; communications data.
const.h
Common block const; physical constants.
fct3.h
Common block fct; arrays for the lcpfct3 and mhdfct3 integrations.
geom1d.h
Common block geom1d; metric data for the lcpfct3 integrations.
geom1m.h
Common block geom1m; metric data for the mhdfct3 integrations.
gridm.h
Common block gridm; field variable grid data.
grids.h
Common block grids; flow variable grid data.
lcpctl3.h
Common block lcpctl; boundary condition data for the lcpfct3 integrations.
mhd3.h
Common block mhd; arrays for the basic MHD variables.
mhdctl3.h
Common block mhdctl; boundary condition data for the mhdfct3 integrations.
psize.h
Common block psize; problem size data.
trn3.h
Common block trn; arrays for the MHD timestepping in trans.
tstep.h
Common block tstep; time and timestep data.
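
The include files themselves are not reproduced in this guide. As a rough indication of their layout only, a file in the style of psize.h might look like the sketch below; the names and dimensions shown are placeholders, and the file shipped with the distribution is authoritative.

c     Hypothetical example of the include-file layout described above,
c     in the style of psize.h: datatype declarations followed by a
c     common block declaration.  The contents are placeholders only.
      integer   n1pes, n2pes, n3pes
      integer   m1, m2, m3, m1p, m2p, m3p
      common    /psize/ n1pes, n2pes, n3pes,
     &                  m1, m2, m3, m1p, m2p, m3p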


INPUT/OUTPUT FOR FCTMHD3D

The control of the program flow and the physics inputs to the problem being solved by FCTMHD3D are passed in through the three input data groups cntrol, hydro, and start. These are contained in the formatted file fctmhd3d.cnt, which is assigned to Fortran unit 15. Examples of this file are provided with the code. The first line of the file is a label for the run. It is followed by the cntrol data group in the following format:

               Spherical Blast Wave in a Magnetic Field
 &cntrol
    minstp    maxstp     time0     timef
         1       451       0.0       1.0
    lrstrt      lmfr
         F         T
     idump     idiag
         0         0
    dtdump    dtdiag
       0.6      0.05
 &end

The text lines are read as 79-character records and may be modified as desired, while the lines of numerical input are read in free format (list-directed), so their datatypes and ordering must match those above but otherwise also may be modified. The other two input data groups follow the same conventions.
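
Concretely, and assuming the simplest possible form for the read statements and an explicit open of unit 15 (the actual statements in fctmhd3d.f may differ in detail), the cntrol group above could be read as follows:

      program rdcntl
c
c     Sketch of reading the cntrol group listed above: the run label,
c     the &cntrol and &end markers, and the variable-name lines are
c     79-character text records, while the numerical lines are read
c     in free format.  Illustrative only.
c
      character*79 text
      integer      minstp, maxstp, idump, idiag
      real         time0, timef, dtdump, dtdiag
      logical      lrstrt, lmfr
c
      open (15, file='fctmhd3d.cnt', status='old')
c     run label, '&cntrol' marker, and first variable-name line
      read (15, '(a79)') text
      read (15, '(a79)') text
      read (15, '(a79)') text
      read (15, *)       minstp, maxstp, time0, timef
c     each remaining pair is a variable-name line plus a data line
      read (15, '(a79)') text
      read (15, *)       lrstrt, lmfr
      read (15, '(a79)') text
      read (15, *)       idump, idiag
      read (15, '(a79)') text
      read (15, *)       dtdump, dtdiag
c     closing '&end' marker
      read (15, '(a79)') text
c
      stop
      end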

The variables whose values are set by the cntrol input data group, their datatypes, and their functions in the code are as follows:

minstp
Integer; initial timestep counter.
maxstp
Integer; final timestep counter.
time0
Real; initial physical time.
timef
Real; final physical time.
lrstrt
Logical; flag indicating whether this calculation is restarting from an unformatted dataset.
lmfr
Logical; flag indicating whether the unformatted restart dataset contains, or is to contain, data at multiple time levels.
idump
Integer; timestep interval for writing unformatted restart datasets.
idiag
Integer; timestep interval for performing diagnostics.
dtdump
Real; physical time interval for writing unformatted restart datasets.
dtdiag
Real; physical time interval for performing diagnostics.

The calculation begins at timestep minstp and physical time time0; it ends when either timestep maxstp or time timef is reached. If the run is restarting from an unformatted dataset, the requested timestep minstp must match the timestep in the restart dataset exactly, and the time time0 must match its time to within 1%. Restart files are written at timestep multiples of idump or physical-time multiples of dtdump, or both, as specified; a zero value for either parameter suppresses writes of that type. Finally, if lmfr is false, the restart files are rewound after each write; if true, subsequent writes are appended to the end of the existing file.
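
Schematically, the dump logic amounts to the sketch below; the auxiliary names istep, tdump, and ndump are assumptions for illustration only, and the diagnostic intervals idiag and dtdiag are handled analogously.

c     Sketch of the restart-dump trigger described above.  istep and
c     time are the current step and physical time; tdump (the next
c     dump time) and ndump (the restart unit) are names assumed here
c     for illustration and are not taken from the code.
      logical ldump, lmfr
      integer istep, idump, ndump
      real    time, dtdump, tdump
c
      ldump = .false.
      if (idump .gt. 0 .and. mod (istep, idump) .eq. 0) ldump = .true.
      if (dtdump .gt. 0.0 .and. time .ge. tdump) then
         ldump = .true.
         tdump = tdump + dtdump
      end if
c
      if (ldump) then
         call dumper
c        single-frame restart files are rewound after each write;
c        multi-frame files (lmfr true) remain positioned for appending
         if (.not. lmfr) rewind (ndump)
      end if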

The hydro input data group establishes boundary conditions for the FCT integrations of the MHD equations:

&hydro
    crho1l    frho1l    crho1r    frho1r
       0.0       0.0       0.0       0.0
    crho2l    frho2l    crho2r    frho2r
       0.0       0.0       0.0       0.0
    crho3l    frho3l    crho3r    frho3r
       0.0       0.0       0.0       0.0
...
     sbx1l     sbx1r     sby2l     sby2r     sbz3l     sbz3r
       0.0       0.0       0.0       0.0       0.0       0.0
    iflx1l    iflx1r    iflx2l    iflx2r    iflx3l    iflx3r
         1         1         1         1         1         1
    jflx1l    jflx1r    jflx2l    jflx2r    jflx3l    jflx3r
         1         1         1         1         1         1
     ipbc1     ipbc2     ipbc3
         1         1         1
 &end

The ellipsis ... in the listing above indicates that the preceding six lines of data for the variable rho (the mass density) are to be replicated for rvx, rvy, and rvz (the x, y, and z momentum densities, respectively), then e (total energy density), p (plasma pressure), vx, vy, and vz (the x, y, and z velocities), and finally bx, by, and bz (the x, y, and z components of the magnetic field), in that order.

The values of the following variables are set by the hydro input data group:

crho1l, crho1r
Real; the constant portions of the mass density beyond the domain boundaries along the first (x) coordinate, at the left (x = x_min) and right (x = x_max), respectively.
frho1l, frho1r
Real; the fractional portions of the mass density beyond the domain boundaries along the first coordinate, at the left and right, respectively.
crho2l, crho2r, crho3l, crho3r
Real; the same roles as crho1l and crho1r along the second (y) and third (z) coordinates, respectively.
frho2l, frho2r, frho3l, frho3r
Real; the same roles as frho1l and frho1r along the second and third coordinates, respectively.

Identical roles are played by the next eleven sets of twelve boundary values, for the remaining eleven MHD variables listed in the preceding paragraph. The rest of the input data in hydro sets the following parameters:

sbx1l, sbx1r
Real; the symmetric portions of the first (x) magnetic field component beyond the domain boundaries along the same coordinate, at the left (x = x_min) and right (x = x_max), respectively.
sby2l, sby2r, sbz3l, sbz3r
Real; the same roles as sbx1l and sbx1r for the second (y) and third (z) components of the magnetic field along their coordinates, respectively.
iflx1l, iflx1r
Integer; index indicating whether (iflx1_ = 1) or not (iflx1_ = 0) to apply fluxes at the domain boundaries to the flow variables in lcpfct3, for the first coordinate at the left and right, respectively.
iflx2l, iflx2r, iflx3l, iflx3r
Integer; the same roles as iflx1l and iflx1r for the fluxes at the boundaries along the second and third coordinates, respectively.
jflx1l, jflx1r, jflx2l, jflx2r, jflx3l, jflx3r
Integer; the same roles as iflx1l, ..., and iflx3r for the magnetic field variables in mhdfct3.
ipbc1
Integer; index indicating periodic (ipbc1 = 1) or aperiodic (ipbc1 = 0) boundary conditions along the first coordinate.
ipbc2, ipbc3
Integer; the same role as ipbc1 along the second and third coordinates, respectively.

Values of the flow and field variables one cell beyond the boundaries of the computational domain are required in order to calculate the fluxes of mass, momentum, and energy at the bounding faces and the electric fields at the bounding edges. Given m1 cells subdividing the first coordinate with m1p = m1+1 interface locations, the conditions on the mass density at the domain boundaries along that direction take the form

   rho(  0,j,k) = crho1l + frho1l * rho( 1,j,k)
                         + ipbc1  * rho(m1,j,k),
   rho(m1p,j,k) = crho1r + frho1r * rho(m1,j,k)
                         + ipbc1  * rho( 1,j,k),

where j and k index the other two cell center coordinates. The three terms on the right sides of these equations correspond to constant, extrapolative, and periodic contributions to the boundary conditions at the ends of the grid. Similar expressions apply along the remaining coordinate directions, and for the rest of the flow variables. Identical relations also hold for each component of the magnetic field, along the two directions transverse to it (e.g., for B_x along y and z). Along the coordinate aligned with the field component, however, the rule is different because the field values lie on the interfaces. In this instance, we also allow for distinct symmetric and extrapolative contributions to the boundary condition,

   bx(  0,j,k) = cbx1l + fbx1l * bx(  1,j,k)
                       + ipbc1 * bx(m1 ,j,k)
                       + sbx1l * bx(  2,j,k),
   bx(m1q,j,k) = cbx1r + fbx1r * bx(m1p,j,k)
                       + ipbc1 * bx(  2,j,k)
                       + sbx1r * bx(m1 ,j,k),

where m1q = m1p+1. It is crucial to ensure that every one of the boundary parameters is properly set and all are mutually consistent, or incorrect results can be expected. For example, selecting periodic boundaries does not automatically cause the remaining boundary terms to be zeroed out. The user must do this explicitly.
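
In code form, the mass-density fill along the first coordinate amounts to the loop below, transcribed directly from the relations above. The subroutine wrapper, array declarations, and loop bounds are illustrative only; in the code itself the arrays are distributed across the processors.

      subroutine rhobc1 (rho, m1, m2, m3, crho1l, crho1r,
     &                   frho1l, frho1r, ipbc1)
c
c     Sketch of the mass-density boundary fill along the first
c     coordinate, transcribed from the boundary relations above.
c     Illustrative only; in the code itself the arrays are
c     distributed across the processors.
c
      integer m1, m2, m3, ipbc1, j, k
      real    rho(0:m1+1,m2,m3)
      real    crho1l, crho1r, frho1l, frho1r
c
      do 20 k = 1, m3
         do 10 j = 1, m2
            rho(0   ,j,k) = crho1l + frho1l*rho(1 ,j,k)
     &                             + ipbc1 *rho(m1,j,k)
            rho(m1+1,j,k) = crho1r + frho1r*rho(m1,j,k)
     &                             + ipbc1 *rho(1 ,j,k)
   10    continue
   20 continue
c
      return
      end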

Finally, the start input data group sets the initialization, timestepping, and processor partitioning parameters for the code:

 &start
       rin      rout       pin      pout       bx0       by0       bz0
       8.0       1.0      32.0       1.0       0.0       0.0      10.0
      xlen      ylen      zlen       rad
       8.0       8.0       8.0       1.0
     dtmin     dtmax    fdtcfl
       0.0       1.0      0.25
     n1pes     n2pes     n3pes
         8         8         8
 &end

The first two pairs of data lines fix the physical parameters for the problem initialized and solved by FCTMHD3D, in which a spherical bubble of plasma is centered in a uniform, magnetized plasma; the remaining lines set the timestepping and processor-partitioning parameters:

rin, rout
Real; the initial mass densities inside and outside the bubble, respectively.
pin, pout
Real; the initial pressures inside and outside the bubble, respectively.
bx0, by0, bz0
Real; the initial, uniform magnetic field.
xlen, ylen, zlen
Real; the extent of the domain in x, y, and z.
rad
Real; the radius of the plasma bubble.
dtmin, dtmax
Real; the minimum and maximum allowed time increments.
fdtcfl
Real; the Courant factor for the timestep calculation.
n1pes, n2pes, n3pes
Integer; the number of processors subdividing the first, second, and third coordinate directions, respectively.

The value for the Courant factor fdtcfl is used to fix the time increment, with the result limited by the input constraints dtmin and dtmax. The distribution of the problem across the processors is fixed by the input choices of n1pes, n2pes, and n3pes. Their product must match the total number of T3D processors assigned to the run, the system variable n$pes.
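
In outline, these constraints read as follows; dtcour (the raw Courant-limited increment found by dtset) and npes (standing for the system variable n$pes) are names assumed here for illustration.

c     Sketch of the timestep and decomposition constraints described
c     above.  dtcour and npes are assumed names; see the text.
      real    dt, dtcour, dtmin, dtmax, fdtcfl
      integer n1pes, n2pes, n3pes, npes
c
      dt = fdtcfl*dtcour
      dt = min (max (dt, dtmin), dtmax)
c
      if (n1pes*n2pes*n3pes .ne. npes) stop 'bad PE decomposition'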

The physical diagnostics from the code are written to the formatted file fctmhd3d.prt, which is assigned to Fortran unit 16. All of the input data groups are replicated in this print file, as are the periodic diagnostics from the calls to diagno, timestep information from dtset, and the results of any restart read or write attempts by rstart and dumper. The timing diagnostics of the run appear in the formatted file fctmhd3d.inst, which is assigned to unit 99. System times and operation counts from all of the processors are combined and the results printed for the various routines called. The system times are specified in clock periods per timestep. The average performance figures are given in MFlops per processor, and the grand total for the machine is given in GFlops.


RUNNING FCTMHD3D

FCTMHD3D is compiled and linked by executing the following statement:

cf77 -Wf"-o unroll" -Wl"-D rdahead=on" -Xnpes -o fctmhd3d fctmhd3d.f shmem.f lcpfct3.f mhdfct3.f

where npes is the number of T3D processors being used. The compilation must be done in batch mode, because the optimizing compilation is itself very compute-intensive. The code then can be run interactively on a few processors, or in batch mode on any number of PEs.
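
For example, with the 8 x 8 x 8 processor decomposition shown in the start group above (512 PEs in all), the flag becomes -X512 and the compile line reads:

cf77 -Wf"-o unroll" -Wl"-D rdahead=on" -X512 -o fctmhd3d fctmhd3d.f shmem.f lcpfct3.f mhdfct3.f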


PLATFORMS SUPPORTED & CODE PERFORMANCE

This distribution of the code runs on the Cray T3D. Flop rates can be found in the FCTMHD3D code performance report.


REFERENCES

Numerics:

J. P. Boris and D. L. Book, Journal of Computational Physics 11 (1973) 38.

D. L. Book, J. P. Boris, and K. Hain, Journal of Computational Physics 18 (1975) 248.

J. P. Boris and D. L. Book, Journal of Computational Physics 20 (1976) 397.

J. P. Boris and D. L. Book, in Methods in Computational Physics, Vol. 16 (Academic Press, New York, 1976), p. 85.

S. T. Zalesak, Journal of Computational Physics 31 (1979) 335.

E. S. Oran and J. P. Boris, Numerical Simulation of Reactive Flow (Elsevier, New York, 1987).

C. R. DeVore, Naval Research Laboratory Memorandum Report No. 6544, September 1989.

C. R. DeVore, Journal of Computational Physics 92 (1991) 142.

Applications:

J. T. Karpen, S. K. Antiochos, and C. R. DeVore, Astrophysical Journal 356 (1990) L67.

J. T. Karpen, S. K. Antiochos, and C. R. DeVore, Astrophysical Journal 382 (1991) 327.

J. T. Karpen, S. K. Antiochos, and C. R. DeVore, Astrophysical Journal 450 (1995) 422.

K. Murawski, C. R. DeVore, S. Parhi, and M. Goossens, Planetary and Space Science 44 (1996) 253.

J. T. Karpen, S. K. Antiochos, and C. R. DeVore, Astrophysical Journal 460 (1996) L73.

S. Parhi, P. DeBruyne, K. Murawski, M. Goossens, and C. R. DeVore, Solar Physics 167 (1996) 181.

J. T. Karpen, S. K. Antiochos, and C. R. DeVore, Astrophysical Journal (1997), submitted.


CREDITS

The algorithms were devised by C. Richard DeVore. Discussions with Jay P. Boris, David L. Book, Steven T. Zalesak, John H. Gardner, and Gopal Patnaik on the computational aspects of FCT and with Judith T. Karpen and Spiro K. Antiochos on the applications of the codes are gratefully acknowledged.

This work was supported by the NASA HPCC/ESS program under Cooperative Agreement Notice 21425/041.


POINT OF CONTACT

C. Richard DeVore
Naval Research Laboratory
Phone: 202.767.2096
E-mail: devore@lcp.nrl.navy.mil