WITH APPLICATIONS TO UNSTABLE FLUID FLOW
IN 2 AND 3 DIMENSIONS
Principal Investigator
Prof. Paul R. Woodward
University of Minnesota, Department of Astronomy,
Supercomputer Institute,
and AHPCRC
1100 Washington Ave. S., Minneapolis, MN 55415
This project is aimed at the development of powerful and efficient new numerical algorithms for fluid dynamical simulations. An associated project focuses on making these new algorithms "bulletproof" and bundling them into a library, PPMLIB, optimized for use on modern parallel computers. The numerical algorithms are based upon the Piecewise Parabolic Method (PPM), originally developed at Livermore by the P.I. and his collaborator Phil Colella. These algorithms have been much refined since that original work and extended to treat 3-D flows, multifluid problems, flow in complex geometries, and magnetohydrodynamic problems in 2- and 3-D. A relaxation method is presently under development to make the methods implicit where this is necessary.
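To indicate the character of the scheme: in the notation of Colella and Woodward (J. Comput. Phys. 54, 174, 1984), PPM represents a variable $a$ within zone $j$ by a parabola,

$$ a(\xi) \;=\; a_{L,j} + \xi\,\bigl[\,\Delta a_j + a_{6,j}(1-\xi)\,\bigr], \qquad 0 \le \xi \le 1, $$

$$ \Delta a_j = a_{R,j} - a_{L,j}, \qquad a_{6,j} = 6\,\Bigl[\,\bar a_j - \tfrac{1}{2}\bigl(a_{L,j} + a_{R,j}\bigr)\,\Bigr], $$

where $\xi$ is the fractional distance across the zone, $a_{L,j}$ and $a_{R,j}$ are interpolated zone-edge values, and $\bar a_j$ is the zone average. This is only a sketch of the reconstruction step; in the full scheme the edge values are further limited to enforce monotonicity, as detailed in the reference.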
An additional direction of development has been to couple these numerical techniques to new and powerful scientific visualization software and systems developed in our group at Minnesota. Without interactive visualization tools and systems such as these, the very high resolution 2- and 3-D flow simulations made possible by today's parallel computers would be nearly pointless. The PPM Graphics Tools are being bundled together with the PPM code modules into PPMLIB.
A final direction for our development has been to generate tools for the construction of parallel code which is both efficient and portable. The need for this development arises from today's lack of programming language standards for parallel computers. Even as proposed standards such as High Performance Fortran (HPF) emerge, we do not anticipate that compilers for these new languages will soon generate efficient code on multiple parallel platforms from a single source. We are therefore developing a precompiler which translates a restricted and stylized subset of Fortran-77 code, which we call Fortran-P, into the parallel Fortran supported on various target machines, including the Cray-T3D and clusters of shared-memory multiprocessors (the SGI Power Challenge Array and the IBM SP-2 are targeted here). Fortran-P programs run without translation on single processors of Cray vector machines or on single processors of standard Unix workstations. The Fortran-P precompiler will be made available along with PPMLIB.
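The flavor of this approach can be suggested by a small, hypothetical example. The subroutine below is ordinary Fortran-77, written as a single uniform sweep over a regular grid; source in this stylized form compiles unchanged on a workstation or a Cray vector processor, while a precompiler is free to strip-mine the loop across the processors of a parallel machine. The names and the particular update shown (a first-order upwind advection step) are our illustrative assumptions, not actual Fortran-P syntax or PPMLIB code.

c     Hypothetical sketch only: legal, stylized Fortran-77 of the
c     general kind a precompiler can decompose across processors.
      subroutine sweep (unew, uold, n, cour)
      integer n, i
      real unew(n), uold(n), cour
c     cour is the Courant number c*dt/dx, assumed to lie in (0,1).
c     Periodic boundary: zone 1 wraps around to zone n.
      unew(1) = uold(1) - cour*(uold(1) - uold(n))
c     Uniform stencil sweep over the interior zones.
      do 10 i = 2, n
         unew(i) = uold(i) - cour*(uold(i) - uold(i-1))
   10 continue
      return
      end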
Ours is not solely a software development project, although that effort is a major focus for us. Our software development is driven by grand challenge problems in fluid dynamics. We are concentrating on the simulation of turbulent flows. Our most recent work has involved the simulation of homogeneous, compressible turbulence on grids of up to a billion zones, generating data sets of up to 500 GByte. We are also exploring the applicability of our numerical methods to flow within and around complicated boundaries and in the presence of unstable fluid interfaces. Using leverage from an NSF Grand Challenge award, we are testing our methods and the new implicit schemes in the context of compressible convection flows.
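For scale, a back-of-envelope estimate (assuming, purely for illustration, 32-bit words and five primary flow variables per zone; these are our assumptions, not figures from the runs themselves) shows how such data volumes arise:

$$ 10^9 \ \mathrm{zones} \times 5 \ \mathrm{variables} \times 4 \ \mathrm{bytes} \;\approx\; 2\times10^{10}\ \mathrm{bytes} \;\approx\; 20\ \mathrm{GByte\ per\ snapshot}, $$

so a time sequence of a few dozen snapshots reaches the 500 GByte quoted above.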
Our project derives a high degree of leverage from heavy investments made by the University of Minnesota in its supercomputing program, from our association with the Army High Performance Computing Research Center (AHPCRC), from our participation in the NSF Grand Challenge program, and from our joint development projects with the computer industry. Our association with the Minnesota Supercomputer Institute and the AHPCRC gives us privileged access to the largest academic computer center in the world. In the last year, access to the AHPCRC's 896-node CM-5 and to the University's small Cray-T3D was invaluable to our work. Through the NSF Grand Challenge program we obtain special access to the latest parallel computing equipment from a variety of vendors; through this program we were able to obtain dedicated use of a 256-node partition of the Cray-T3D at Pittsburgh for over two weeks. Through our joint work with Silicon Graphics, we obtained a year ago one month of dedicated access to a 16-machine Challenge Array configured specifically to our needs, on which we performed our one-billion-zone simulation of homogeneous, compressible turbulence, the largest fluid dynamics computation we have performed to date. Our work with Silicon Graphics also enabled our development this fall of a prototype scalable high-resolution display system, which we call the PowerWall and which we demonstrated at the Supercomputing '94 exhibit in Washington.

All this leverage is substantial. It greatly enriches the value of our work for this D.o.E. program at no cost to the D.o.E. It also helps to create the excitement in new and challenging projects which attracts to our group the highly qualified personnel who make possible our continued success.