NASA
High Performance Computing
and Communications Program
Computational AeroSciences Project
Parallel Rendering of Time-Varying Volume Data
Objective: Time-varying volumetric data sets (TVVD), which arise from numerical simulations or remote-sensing instruments, give scientists insight into the important dynamics of the phenomenon under study. When appropriately rendered, they form an animation sequence that illustrates how the underlying structures evolve over time. Although parallel rendering of a single volumetric data set has been studied by numerous researchers, parallel animation of TVVD has, in contrast, received little attention. The goal of this research is to study the performance issues in parallel animation of TVVD, specifically how to partition a given number of processors to minimize the overall rendering time.
Approach: Rendering a time-varying volumetric data set poses a different problem than rendering a single volume data set. A naive approach leaves processors idle, wasting resources unnecessarily, during the startup and wind-down phases of the rendering process. We argue that parallel volume animation requires rethinking the types of parallelism that can be exploited to achieve optimal performance. In particular, I/O overlap and efficient resource utilization play a crucial role in the parallelization strategy.
Given a fixed number of processor nodes and a fixed I/O bandwidth, we pipeline the rendering tasks for consecutive data volumes in the sequence, exploiting both inter-volume parallelism (rendering several volumes concurrently) and intra-volume parallelism (rendering one volume collectively). That is, the available processors are partitioned into groups, and each group is responsible for one data volume at a time; optimal performance is achieved by carefully balancing the inter-volume and intra-volume parallelism.
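To make the partitioning trade-off concrete, the sketch below estimates total animation time as a function of the group count and picks the best one. It is an illustrative model only: the function names, the timing constants, and the ideal-speedup and fixed-I/O-cost assumptions are ours, not taken from the actual renderer; on a real machine the constants would be measured rather than assumed.

    # Hypothetical cost model for choosing the number of processor groups G.
    # Assumptions (ours, for illustration): rendering one volume on one
    # processor takes t_render seconds and speeds up ideally with group size;
    # distributing one volume costs t_io seconds and staggers the group starts.

    def render_time(volumes, procs, groups, t_render=10.0, t_io=0.5):
        """Estimated total time to render `volumes` time steps with `procs`
        processors split into `groups` pipeline stages."""
        per_group = procs // groups                # intra-volume parallelism
        t_stage = t_render / per_group + t_io      # one group, one volume
        rounds = -(-volumes // groups)             # ceil(volumes / groups)
        startup = (groups - 1) * t_io              # pipeline fill latency
        return startup + rounds * t_stage

    def best_partition(volumes, procs):
        """Group count that minimizes the modeled animation time."""
        divisors = [g for g in range(1, procs + 1) if procs % g == 0]
        return min(divisors, key=lambda g: render_time(volumes, procs, g))

    # Example: 32 volumes on 32 processors, the configuration reported below.
    for g in (1, 2, 4, 8, 16, 32):
        print(f"G={g:2d}  modeled time={render_time(32, 32, g):6.2f}")
    print("best G:", best_partition(32, 32))

Restricting the search to divisors of the processor count is only for simplicity; with these particular (assumed) constants the model's minimum falls at an intermediate group count, mirroring the behavior reported below.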
Accomplishments: We have implemented a prototype volume renderer that embodies the idea of pipelined rendering for time-varying data sets. We are able to attain effective system utilization bounded only by the data distribution overhead. We also identify three possible performance criteria for evaluating TVVD rendering and show that different partitioning strategies are needed to optimize for different criteria. Figure 1 plots time breakdowns versus the number of groups for a fixed machine size of 32 processors; the optimal number of partitions for rendering 32 data volumes with 32 processors is four. Another test using 64 processors yields similar results. Figure 2 shows snapshots from an animation sequence.
Significance: Visualization of large time-varying volume data sets can only be done efficiently on a massively parallel computer. This research demonstrates that two factors affecting the overall execution time are resource utilization efficiency and pipeline startup latency; the optimal partitioning configuration is the one that best balances the two. This strategy allows computational researchers to maximize the utilization of a parallel computer for post-processing their simulation results, and thus significantly increases their overall productivity.
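The balance can be written down explicitly. Using our own illustrative notation (N volumes, P processors, G groups, per-volume single-processor rendering time t_r, per-volume distribution cost t_io), a simplified model of the total time, consistent with the sketch above, is:

    T(G) = \underbrace{(G-1)\, t_{io}}_{\text{pipeline startup}}
         + \underbrace{\left\lceil \tfrac{N}{G} \right\rceil
           \left( \tfrac{t_r}{P/G} + t_{io} \right)}_{\text{steady-state rendering}}

The first term grows with G (more pipeline stages to fill) while the second shrinks (more volumes rendered concurrently), so the minimum lies at an intermediate group count.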
Status/Plans: Although our results show that there indeed exists an optimal partitioning for a given configuration, the optimum depends on factors such as the machine size, the length of the TVVD sequence, and the ratio between computation and overheads such as those for communication and I/O. This ratio is in turn affected by the hardware characteristics and the coherence properties of the data set itself. Thus, data-set-dependent statistics need to be collected at run time to determine the optimal partition number; since this profiling cost is amortized over repeated renderings, the approach is most practical for data sets that will be explored extensively. It is clear that a dedicated I/O manager plays an important role in improving the overall performance of TVVD rendering. Future work includes the development of a flexible interface for the I/O manager and compression techniques tailored to the volume rendering process.
Contact:
Kwan-Liu Ma
ICASE, M.S. 403
NASA Langley Research Center
(757) 864-2195
kma@icase.edu