Hyglac, a Pile of PCs

Hyglac at JPL

Objectives:

To bring scalable computing with unprecedented price-performance to NASA mission-critical applications, by assembling a low-cost, commodity-based parallel computer and assessing its suitability for a variety of engineering applications.

Approach:

Hyglac is a Beowulf-class clustered computing system sponsored by NASA Code S and assembled for NASA JPL at the Caltech Center for Advanced Computing Research (CACR). It consists of 16 Pentium Pro PCs, each with a 2.5 GByte disk, 128 MBytes of memory, and a Fast Ethernet card. The PCs are connected by a 100Base-T network through a 16-way crossbar switch. The total cost for the 16 PCs, the crossbar switch, and one monitor and keyboard was approximately $54K. (The system is pictured above.)

The Linux operating system (Red Hat distribution) is running on all 16 nodes, and additional freely available software has been downloaded and installed: MPI (both the MPICH and LAM implementations), PVM, and the GNU compilers (C, C++, and g77 Fortran). Commercial software on order includes the NAG Fortran 90 compiler and optimized scientific libraries.
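
As an illustration of how this software stack is used, here is a minimal MPI "hello world" sketch. It is not taken from any of the JPL codes; the build and launch commands in the comments follow standard MPICH usage, and the machine file listing the 16 node hostnames is assumed to exist.

    /* hello_mpi.c -- minimal MPI sketch, for illustration only.
     *
     * Typical MPICH-style build and launch (the "machines" file listing
     * the 16 node hostnames is assumed):
     *     mpicc -O2 hello_mpi.c -o hello_mpi
     *     mpirun -np 16 -machinefile machines ./hello_mpi
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank, 0..15 */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes  */
        MPI_Get_processor_name(name, &len);    /* hostname of the local node */

        printf("Process %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }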

Five JPL application codes have been ported to Hyglac, and one additional code port is in progress.

Accomplishments:

Code: Physical Optics Application
Description: Used for the design of reflector antennas and telescopes operating at microwave frequencies.
Results: Ported successfully. Runs approximately 30% faster than on 16 T3D processors.

Code: Electromagnetic Finite-Difference Time-Domain (FDTD) Application
Description: Used to solve Maxwell's equations in the time domain, for analysis of antenna patterns, radar cross section calculations, and examination of fields within small electronic components and interconnects. (An illustrative sketch of the FDTD update loops follows this table.)
Results: Ported successfully. For large problems, runs approximately 2.75 times slower than on 16 T3D processors.

Code: Electromagnetic Finite Element Solver
Description: Used for analyses similar to those of the finite-difference software, but works in the frequency domain, using a very different algorithm from the FDTD software.
Results: Ported successfully. For large problems, runs approximately 2.9 times slower than on 16 T3D processors.

Code: Incompressible Fluid Flow Application
Description: Used to solve the Navier-Stokes equations for incompressible flow simulations, using a second-order projection method with a multigrid solver.
Results: Ported successfully. For large problems, runs at approximately the same speed as on 16 T3D processors.

Code: Non-linear Thermal Convection Application
Description: Used to simulate large-scale, three-dimensional, time-dependent, thermal convective flows.
Results: Ported successfully. For large problems, runs approximately 1.7 times slower than on 16 T3D processors.

Code: Parallel Extensions for Matlab
Description: Used to extend Matlab by offloading large, time-consuming matrix calculations to a parallel computer, where they can be performed much more quickly.
Results: Port in progress. Completion is pending a modification to make the Hyglac nodes visible to PVM from outside the cluster.
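
As noted in the table, the FDTD application solves Maxwell's equations in the time domain. For readers unfamiliar with the method, the following one-dimensional sketch shows the structure of the field-update loops at the heart of such a code. It is purely illustrative: the JPL application is a full three-dimensional, parallel code, and none of the names, grid sizes, or coefficients below are taken from it.

    /* fdtd1d.c -- illustrative 1-D FDTD (Yee) update loops, in normalized
     * units with a Courant factor of 0.5; not the JPL application code.
     */
    #include <stdio.h>
    #include <math.h>

    #define NX    200    /* number of grid cells (arbitrary for this sketch) */
    #define STEPS 500    /* number of time steps                             */

    int main(void)
    {
        static double ez[NX], hy[NX];  /* electric and magnetic field arrays */
        int n, i;

        for (n = 0; n < STEPS; n++) {
            /* Advance the magnetic field from the spatial difference of E */
            for (i = 0; i < NX - 1; i++)
                hy[i] += 0.5 * (ez[i + 1] - ez[i]);

            /* Advance the electric field from the spatial difference of H */
            for (i = 1; i < NX; i++)
                ez[i] += 0.5 * (hy[i] - hy[i - 1]);

            /* Simple sinusoidal hard source in the middle of the grid */
            ez[NX / 2] = sin(0.1 * n);
        }

        printf("ez at a probe point after %d steps: %g\n", STEPS, ez[NX / 4]);
        return 0;
    }

In a parallel version of such a code, each node would own a slab of the grid and exchange boundary values with its neighbors at every time step, which is one reason interconnect performance matters for this class of application.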

Significance:

The codes that were ported successfully run at speeds ranging from 2.75 times slower to 30% faster than the same number of T3D processors. While a factor of 2.75 slower may look poor at first, and does make the T3D the obvious machine to run on if both are available, the price-performance picture is quite good: a 16-processor T3D would cost at least 7 times more than a 16-PE Beowulf today. (Note: as of 8/31/97, the estimated cost of Hyglac was $35K, while the estimated cost of a 16-PE T3D was $250K, plus the cost of a Cray Y-MP host.) Even in the worst case, a machine that is at least 7 times cheaper but only 2.75 times slower delivers roughly 2.5 times better price-performance. This demonstrates the potential value of building systems from commodity components.

Status/Plans:

Hyglac is currently stable. We will install the new compilers and libraries when they arrive; the libraries may improve the performance of some of the codes. Additionally, we will encourage other JPL projects to use Hyglac, to gain a better understanding of this type of system.

Points of Contact:

Regarding the Hyglac hardware:
Thomas Sterling
California Institute of Technology / Jet Propulsion Laboratory
tron@cacr.caltech.edu
(626) 395-3901

Regarding the Hyglac applications:
Daniel S. Katz
Jet Propulsion Laboratory
Daniel.S.Katz@jpl.nasa.gov
(818) 354-7359