Objective: The ESS Cooperative Agreement Notice (CAN) is intended to establish a collaboration among Grand Challenge application scientists, the high-performance computing industry, and NASA under the ESS Project. Within that collaboration, a large scalable parallel computing testbed is to be selected as the focus of experiments leading to sustained performance greater than 50 GFLOPS. The testbed selection process therefore requires a quantitative component in its evaluation methodology.
Approach: Develop a set of benchmarks from known applications in the Earth and space sciences that can be used to test existing vendor offerings. Devise a testing methodology that exposes the performance and scaling characteristics of each system. Require proposing vendors to port and run the benchmarks and to report timing measurements for the test runs, as illustrated in the sketch below. Analyze the resulting data and rank the systems.
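This summary does not specify how timings were collected or which message-passing library the benchmarks used. As a minimal sketch, assuming MPI, the following C program shows the customary way such a measurement is taken: all ranks start a kernel together and the slowest rank's wall-clock time is reported. The kernel body and problem size are placeholders, not taken from the CAN benchmarks.

    /* Hypothetical timing harness: measures wall-clock time of a kernel
     * across all ranks and reports the maximum, the figure normally
     * quoted for a parallel run.  Kernel and size N are placeholders. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000                     /* placeholder problem size */

    static double a[N];

    static void benchmark_kernel(double *x, int n)
    {
        for (int i = 0; i < n; i++)       /* stand-in for real ESS code */
            x[i] = x[i] * 1.000001 + 0.5;
    }

    int main(int argc, char **argv)
    {
        int rank;
        double t0, t1, local, elapsed;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);      /* start all ranks together */
        t0 = MPI_Wtime();
        benchmark_kernel(a, N);
        t1 = MPI_Wtime();

        local = t1 - t0;
        /* the slowest rank determines the reported run time */
        MPI_Reduce(&local, &elapsed, 1, MPI_DOUBLE, MPI_MAX, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("elapsed: %.6f s\n", elapsed);

        MPI_Finalize();
        return 0;
    }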
Accomplishments: A set of seven benchmarks from the Earth and space sciences was developed for vendors to use in evaluating their machines. These were provided in sequential and message-passing source code form with extensive descriptive material. A set of metrics was established to measure key characteristics of system capabilities; weightings were assigned to these metrics, and an aggregate figure of merit was defined (see the sketch below). Proposing vendors submitted benchmark measurements, and a team of ESS Project staff analyzed the results, which were made available to the CAN review board.
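The actual metrics, weights, and normalization used by the ESS team are not given in this summary. As a minimal sketch of how a weighted aggregate figure of merit can be computed, the C fragment below normalizes each measured metric against a reference value and sums the weighted scores; the metric names, reference values, and weights are illustrative assumptions.

    /* Minimal sketch of a weighted aggregate figure of merit.  Metric
     * names, reference values, and weights are illustrative only; the
     * actual CAN metrics and weightings are not given in this summary. */
    #include <stdio.h>

    struct metric {
        const char *name;
        double value;      /* measured value for the system under test */
        double reference;  /* reference value used for normalization   */
        double weight;     /* relative importance; weights sum to 1.0  */
    };

    int main(void)
    {
        struct metric m[] = {
            { "sustained GFLOPS",    42.0,  50.0, 0.50 },
            { "scaling efficiency",   0.78,  1.0, 0.30 },
            { "I/O bandwidth, MB/s", 90.0, 100.0, 0.20 },
        };
        int n = sizeof(m) / sizeof(m[0]);
        double fom = 0.0;

        /* weighted sum of normalized scores */
        for (int i = 0; i < n; i++)
            fom += m[i].weight * (m[i].value / m[i].reference);

        printf("aggregate figure of merit: %.3f\n", fom);
        return 0;
    }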
Significance: The CAN benchmarks exposed the capabilities of the vendor systems under the demands of real-world ESS applications, revealing their sustained performance and scaling characteristics. The evaluation also included a set of synthetic I/O tests for exercising disk mass storage (see the sketch below). The benchmark codes were derived from active research in both the Earth and space sciences. Vendors were permitted considerable flexibility so that they could benefit from optimizations suited to their particular architectures.
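The parameters of the synthetic I/O tests are not described here. As a rough sketch of a test of that kind, the following C program writes a large file in fixed-size blocks, reads it back, and reports sustained bandwidth; the file name, block size, and total size are assumptions, and a production test would also take steps to defeat operating-system caching.

    /* Sketch of a synthetic disk I/O test: write a large file in
     * fixed-size blocks, read it back, and report bandwidth in MB/s.
     * File name, block size, and total size are illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define BLOCK  (1 << 20)        /* 1 MB blocks  */
    #define BLOCKS 256              /* 256 MB total */

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec * 1e-9;
    }

    int main(void)
    {
        char *buf = malloc(BLOCK);
        memset(buf, 0xA5, BLOCK);

        FILE *f = fopen("io_test.dat", "wb");
        double t0 = now();
        for (int i = 0; i < BLOCKS; i++)
            if (fwrite(buf, 1, BLOCK, f) != BLOCK) { perror("write"); return 1; }
        fflush(f);
        fclose(f);                  /* OS cache may still buffer data */
        double wt = now() - t0;

        f = fopen("io_test.dat", "rb");
        t0 = now();
        for (int i = 0; i < BLOCKS; i++)
            if (fread(buf, 1, BLOCK, f) != BLOCK) { perror("read"); return 1; }
        fclose(f);
        double rt = now() - t0;

        /* each block is 1 MB, so blocks/second equals MB/s */
        printf("write: %.1f MB/s  read: %.1f MB/s\n",
               BLOCKS / wt, BLOCKS / rt);
        free(buf);
        return 0;
    }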
Status/Plans: The evaluation task employing the CAN benchmarks has been completed. On-site visits will be conducted to verify the measurement methodology employed by the vendors. Lessons learned from this evaluation process will be applied to the development of the permanent ESS Parallel Benchmark set.
Points of Contact:
Dr. Thomas Sterling
Center of Excellence in Space Data and Information Sciences (CESDIS)
Goddard Space Flight Center
tron@chesapeake.gsfc.nasa.gov
301-286-2757