David H. Porter and Paul Woodward
University of Minnesota
Minnesota Supercomputer Institute &
Laboratory for Computational Science and Engineering, Minneapolis, MN
Juri Toomre and Nick Brummel
Joint Institute for Laboratory Astrophysics, University of Colorado, Boulder
Research Objective: To perform computational experiments which help us to understand the role of small-scale turbulent motions in determining the characteristics of large-scale convection and of heat and material transport in the stratified atmospheres of stars and planets. This is an NSF-funded Grand Challenge Application Group; it derives substantial benefit from access to AHPCRC facilities, particularly the Graphics and Visualization Laboratory (GVL), and it leverages D.o.E.-funded research in turbulence and in numerical algorithms for fluid dynamics computations on massively parallel machines.
Methodology: The PPM gas dynamics code is being used to simulate the compressible convection of a gas layer confined between two horizontal plates. Gravity makes the bottom of this layer about 20 times denser than the top. Heat is introduced at a constant rate through the bottom plate, and the top plate is held at a fixed, relatively cool temperature. Use of fine 3-D grids and the PPM Euler code allows the effects of viscosity to be minimized. In nature these effects are orders of magnitude smaller still than in even the least viscous computer simulations. The small viscosities of the PPM simulations produce small effective Prandtl numbers of the order of 10^-2; for the atmosphere of the sun, the Prandtl number is about 10^-8. The largest runs are being performed on the Cray T3D. Juri Toomre and Nick Brummel visited the AHPCRC and spent several days working with the Minnesota team in the GVL to visualize different aspects of these convection simulations and to compare them with Navier-Stokes simulations of their own. Extensive use is being made of the volume visualization tools and hardware systems built in the GVL in order to study the hundreds of gigabytes of data collected from these convection simulations.
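The stratification described above can be illustrated with a short sketch. Assuming the layer is initialized as a hydrostatic polytrope (a common choice for this class of problem; the report itself only states the factor-of-20 density contrast), the polytropic index and temperature gradient together fix the bottom-to-top density ratio:

```python
# Back-of-envelope sketch of a polytropically stratified layer.
# ASSUMPTION: the initial state is a hydrostatic polytrope; the report
# only specifies the ~20x bottom-to-top density contrast.
# Nondimensional depth z runs from 0 (top plate) to 1 (bottom plate).

def polytrope(z, m=1.0, contrast=20.0):
    """Temperature, density, and pressure at depth z for a polytropic
    layer whose bottom/top density ratio equals `contrast`."""
    theta = contrast ** (1.0 / m) - 1.0  # temperature jump giving the target contrast
    T = 1.0 + theta * z                  # linear conductive temperature profile
    rho = T ** m                         # density, normalized to 1 at the top
    p = T ** (m + 1.0)                   # pressure from hydrostatic balance
    return T, rho, p

_, rho_top, _ = polytrope(0.0)
_, rho_bot, _ = polytrope(1.0)
print(rho_bot / rho_top)  # -> 20.0
```

The polytropic index m = 1 here is purely illustrative; any m > 0 reproduces the stated density contrast once theta is chosen as above.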
Accomplishments: These simulations of compressible convection are unique in their ability to resolve the first few length scales of turbulent flow within the fairly coherent downward-plunging plumes of relatively cool gas which develop in the convection cells of these flows. Nevertheless, there is not yet enough grid resolution to model the interaction of this turbulence with the larger flow with complete confidence. Still, the contrast between the PPM simulations and the much more viscous Navier-Stokes simulations performed by the Colorado team is giving insight into the mechanisms which drive turbulent energy transport in these flows and into the importance of this process compared with the gross material transport of the larger convective motions. This work has also helped to drive efforts in scientific visualization and massive data handling which benefit many other projects. In particular, the largest of these convection simulations, which ran for about 100,000 node hours on the Cray T3D at Pittsburgh, was one of two early large-scale computations carried out on that machine (we were involved indirectly with the other of these two computations) after it was upgraded to 512 nodes in September, 1994. The 250 Gigabyte data set generated by this run was used for a demonstration of very high resolution visualization, display, and animation on our Power Wall in the Silicon Graphics exhibit booth at the Supercomputing '94 conference in November, 1994.
Significance: The main interest of this work for the D.o.E. and for the Army lies in the development through this project of a highly optimized PPM code for the Cray T3D and in the continued development, driven by this project, of the graphics and visualization software in the GVL.
Future Plans: The PPM code running on the Cray T3D at the Pittsburgh Supercomputing Center is being used to carry out the largest convection simulation to date, using a grid of 512x512x256 zones. This simulation has been run for over 100,000 node hours and has produced over a quarter terabyte of archived data. The code is running at about 3.5 Gflop/s on 256 T3D nodes. We hope to achieve much higher performance on the Cray T3E, perhaps using a special translation of the PPM algorithm which produces completely scalar code that should fit into the small on-chip cache of the new DEC Alpha microprocessors. Higher performance will allow more realistic convection simulations in which the artificial floor of the simulation is replaced by a convectively stable gas layer.
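The figures quoted above can be checked with simple arithmetic. The grid size, node count, and aggregate rate come from the report; the number of variables stored per zone and the 4-byte float size are assumptions made only to show the scale of a single data dump:

```python
# Back-of-envelope arithmetic from the figures quoted in this report.
# ASSUMPTIONS: 5 stored variables per zone, 4-byte floats per variable;
# the grid, node count, and aggregate flop rate are from the report.

nodes = 256
total_gflops = 3.5                      # aggregate sustained rate on the T3D
per_node_mflops = total_gflops * 1e3 / nodes
print(per_node_mflops)                  # ~13.7 Mflop/s sustained per node

nx, ny, nz = 512, 512, 256              # grid of the largest convection run
zones = nx * ny * nz
print(zones)                            # 67,108,864 zones

bytes_per_dump = zones * 5 * 4          # assumed 5 variables x 4 bytes each
print(bytes_per_dump / 2**30)           # ~1.25 GiB per full-grid dump
```

Under these assumptions, the quarter-terabyte archive corresponds to on the order of two hundred full-grid dumps, consistent with an animation-quality time sequence.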