In December we moved our offices. Our new mailing address is:

  • UNIVERSITY OF UTAH
  • CHPC
  • 155 S 1452 E RM 405
  • SALT LAKE CITY UT 84112-0190

which is located just north of the Park Building. Our phone numbers remain unchanged.

The Intermountain Network and Scientific Computation Center facilitates multidisciplinary research activities and collaboration among researchers from different academic departments. By locating these research activities in the same space and sharing computer and network infrastructures, the University intends to accelerate research in solving complex problems that require large research teams with expertise in a variety of disciplines and access to advanced computer facilities. The VP for Research is responsible for the center.

INSCC will provide:

  • Space for multidisciplinary research activities addressing computational modeling to solve problems of national and global importance.
  • State-of-the-art, high-performance computing, networking, and visualization resources.
  • Model infrastructure to support a distributed-computing research environment.

The center participants have access to a number of state-of-the-art computational tools, including a 64-node IBM SP, a 60-node SGI Origin2000 with 8 InfiniteReality graphics pipes, and a multi-terabyte archival storage system. A network backbone connects these systems using fiber/ATM technology at speeds of up to 622 Mbits/sec. The building's network can provide these advanced services to any location within the center. The building also connects through the Utah Educational Network gateway to the most advanced national computer networks. This connectivity gives researchers in the center access to both local and remote computing resources. The Center for High Performance Computing is responsible for the operation, maintenance, and continual upgrading of these resources.

The INSCC provides approximately 49,000 assignable square feet of computer labs, physics labs, offices, and shared support areas. Approximately 36 percent of the space is dedicated to computer labs, 16 percent to physics labs, and 31 percent to faculty, staff, and postdoctoral offices and to shared spaces such as conference rooms, teleconference resources, and computer visualization facilities.

The following research activities will use these advanced computational resources:

Advanced Materials and High-Speed Optics

The study of the structural and dynamic properties of new materials requires the use of high-speed, high-power laser equipment. The experiments also require controlled environments and generate large amounts of data to be manipulated, stored, and analyzed using the center's unique computational infrastructure. The detailed characterization of new materials such as semiconductors and biopolymers will help researchers to design products with improved performance in the electronic and biomedical industries.

Astrophysical Studies

The University of Utah and other institutions from around the world operate a number of large cosmic ray detectors in Utah. Analyses of the data these experiments generate require high-speed networks and large devices for storing information. The design of new and more advanced instruments requires the use of high-performance computers that can simulate the interactions of cosmic rays with the earth's atmosphere. This group of researchers attempts to increase our understanding of the origin of energetic particles from remote parts of the universe, as well as the formation of stellar objects and the properties of intergalactic space.

Center for the Simulation of Accidental Fires and Explosions (C-SAFE)

The U.S. Department of Energy has awarded $20 million over a five-year period to create the Center for the Simulation of Accidental Fires and Explosions (C-SAFE). The center at the University is part of a federal program called the Accelerated Strategic Computing Initiative, and will provide state-of-the-art computerized simulation of fires using three accident scenarios:

  • A container involved in a jet fuel fire after an airplane crash.
  • The explosion and fire resulting from the shelling of an explosives storage area by terrorists.
  • The burning of a building containing high-energy materials and fuels.

Although no classified research will be done at the University, the award will give C-SAFE scientists access to three of the largest computers in the world currently used by the Department of Energy.

The Accelerated Strategic Computing Initiative (ASCI) promotes the development of computer simulation capabilities needed to ensure a safe and reliable weapons stockpile. The ultimate objective of the center is to simulate fires involving a wide range of accident scenarios with multiple high-energy devices, complex building geometries and fuel sources. These simulations also will be of value to American industries where accidental fires are a particular concern.

Combustion Research

This group uses high-performance computing to develop models of combustion. These simulations can be used to gain understanding of combustion processes and to design industrial furnaces that achieve maximum efficiency with minimum emission of pollutants. The simulations are also necessary in the development and implementation of advanced process control systems. This group forms the core of the engineering simulation team of the C-SAFE project.

High-Energy Theoretical Physics

Through high-performance computing, this group studies the most fundamental constituents of the universe and their interactions. The program requires access to powerful computers for numerical simulations and access to high-speed networks to communicate with scientific collaborators. Group members also use the networks to gain access to the most powerful supercomputers in the country to solve the complex equations that describe matter at high-energy states.

Mathematical and Computational Biology

High-performance computing and visualization allow this group to gain understanding of the mathematical equations that describe biological processes. Problems of interest to the group are self-aggregation in biological systems, population dynamics, electric models of the heart, and platelet aggregation in blood vessels.

Meteorological Studies

The National Oceanic and Atmospheric Administration (NOAA) and the University of Utah established the NOAA Cooperative Institute for Regional Prediction during 1996. The institute conducts research aimed at improving weather and climate prediction in the Intermountain West. Computer models are being developed to improve forecasts of winter storms and other severe weather events that occur along the Wasatch Front. Research is under way to develop the weather information required for the 2002 Winter Olympics.

Seismic and Groundwater Studies

Earthquake and exploration seismology, together with groundwater science, are research areas of great importance to the Intermountain West and to the nation. This research group works on the development of new computational techniques that take advantage of parallel computers, yielding better models for the exploration of mineral resources and a better understanding of the destructive effects of earthquakes.

Theoretical Chemistry

This research group uses high-performance computing and visualization to solve chemical problems relevant to material and biological applications. The focus is on the development of new techniques to calculate process rates both in solids and in liquids. This group forms the core of the molecular fundamentals team of the C-SAFE project.

Hierarchical Storage Management (HSM) aims to maximize the efficiency and minimize the cost of computer storage space. The idea is to keep frequently used files on higher-cost "local" disk space and to "migrate" infrequently accessed files onto less expensive forms of storage (optical, tape, network disk) without the owner of the files needing to know where the files actually reside. The only difference the user sees is the length of time needed to access the file.

This neat idea has a few requirements:

  1. There must be a process monitoring the space usage on local disk.
  2. There must be a record kept of where the files actually reside.
  3. There must be an interface with the OS so that normal Unix commands that look at disk space and files report the correct sizes and permissions, and the user sees no difference between local and migrated files.
  4. There must be somewhere to migrate files.
  5. When a user wishes to access a migrated file, there must be a process (between the user and the OS) which transparently accesses the file.
  6. There must be some sort of policy as to which files are eligible for migration and for how long inactive files are kept around.

I could go on, but you get the idea.

The concept is simple. The implementation is a bit more involved. It is a combined hardware/software problem that requires close cooperation with the operating system of the computer with the "local" disk space.
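As a rough illustration of those requirements, the sketch below (in Python, purely hypothetical, and not the software CHPC actually runs) monitors a "local" directory, migrates the least recently accessed files to a cheaper tier when the disk gets too full, and keeps a record of where each file went so that a later recall can find it. The paths and threshold are made-up examples.

  # Toy HSM sketch -- illustrative only, not CHPC's implementation.
  import os
  import shutil

  LOCAL_DIR = "/scratch/local"      # hypothetical managed "local" disk
  MIGRATED_DIR = "/archive/tier2"   # hypothetical cheaper storage tier
  USAGE_LIMIT = 0.90                # migrate once local disk is 90% full

  catalog = {}                      # requirement 2: where migrated files live

  def disk_usage(path):
      """Fraction of the filesystem holding `path` that is in use (req. 1)."""
      st = os.statvfs(path)
      return 1.0 - st.f_bavail / st.f_blocks

  def migrate_coldest_files(n=10):
      """Move the n least recently accessed files to cheaper storage (req. 4)."""
      files = [os.path.join(LOCAL_DIR, f) for f in os.listdir(LOCAL_DIR)]
      files = [f for f in files if os.path.isfile(f)]
      files.sort(key=lambda f: os.stat(f).st_atime)   # coldest first
      for path in files[:n]:
          dest = os.path.join(MIGRATED_DIR, os.path.basename(path))
          shutil.move(path, dest)
          catalog[path] = dest      # remember the file's real location

  def recall(path):
      """Bring a migrated file back on demand (req. 5)."""
      if path in catalog:
          shutil.move(catalog.pop(path), path)
      return path

  if disk_usage(LOCAL_DIR) > USAGE_LIMIT:
      migrate_coldest_files()

A real HSM system does all of this below the filesystem interface, so users never call anything like recall() themselves; the stub above only shows where that transparency has to be provided.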

Benefits to CHPC users --

By implementing an HSM service, CHPC hopes to provide several advantages for our users.

Large or infrequently needed files can be migrated to less expensive storage. With a tiered storage charge list, this means that you (or at least the PI) save valuable research money which can then be spent elsewhere. CHPC charges $.05 per Mbyte per month for permanent disk space vs. $.25 per Gbyte per month for permanent tape.
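Taking those rates at face value, a 10 Gbyte data set costs roughly $500 per month on permanent disk versus about $2.50 per month on permanent tape.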

The HSM service presents what is effectively an infinitely extensible filesystem (well, at least until the database gets too unwieldy). This means that we should be able to substantially reduce the number of times the user filesystems fill up and we have to ask everyone to house-clean.

As a sidelight (not exactly HSM, but associated), we will be able to offer an archiving service, i.e., moving files entirely off on-line or near on-line media and into shelf storage.

Finally, use of this service will be totally voluntary.

CHPC implementation (under construction) --

CHPC acquired an IBM 3494 tape library equipped with two IBM 3590 Magstar tape drives. The library has 200 slots, each of which holds a tape with 10 GB of uncompressed capacity, yielding a nominal near on-line storage capacity of 2 TB. Each tape drive is operated over a dedicated Fast and Wide SCSI interface, since the speed of data transfer to these devices is limited by the bus speed. Tests have produced an effective transfer rate from SSA disk to 3590 tape of ~4 GB/hour.
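At that rate, writing a full 10 GB cartridge takes roughly two and a half hours.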

We also have 213 GB of IBM SSA disk, which will provide the "local" storage space. These SSA drives offer faster data access than current SCSI disk drives.

The server hardware platform is an IBM RS/6000 F50 with two 604e processors, Level 2 cache, 128 MB of RAM, and adapters to talk to the tape drives, the tape library, and the SSA disk. This host will be equipped with an OC-3 ATM adapter for high-speed communications to CHPC's compute servers.

The software we are using is IBM's ADSM (ADSTAR Distributed Storage Manager, if you care about the acronym). ADSM provides the HSM functionality through client/server routines. The tasks are divided up this way:

  • Server
    • Controls destination storage, whether disk, optical or tape.
    • Keeps a database of where everything is currently located, whether archived, backed-up (but you don't have to worry about that part) or migrated (space managed).
    • Keeps the data in a consistent format for retrieval.
    • Operates the storage devices with "seek" commands to increase the speed of access to slower media.
  • HSM Client
    • Monitors disk space given to it to manage.
    • Has programmable parameters for high and low thresholds for migration and recalls.
    • Allows individual files to be marked for migration or recall.
    • Interfaces with the operating system.

Both pieces of software will reside on the server host.
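As a rough sketch of the high/low threshold idea mentioned above (the watermark values and function names here are hypothetical, not ADSM's actual interface), migration begins once space usage crosses a high watermark and stops when it drops back below a low watermark:

  # Toy watermark logic -- not ADSM's API, just the idea behind the
  # client's programmable high and low migration thresholds.
  HIGH_WATERMARK = 0.90   # start migrating when usage exceeds 90%
  LOW_WATERMARK = 0.70    # stop once usage falls below 70%

  def run_migration_cycle(usage, migrate_one_file):
      """Migrate files one at a time until usage drops below the low mark.

      `usage` returns the current fill fraction of the managed filesystem;
      `migrate_one_file` moves the single coldest file off local disk.
      """
      if usage() < HIGH_WATERMARK:
          return                    # still below the high threshold
      while usage() > LOW_WATERMARK:
          migrate_one_file()

Keeping the two thresholds apart prevents the client from migrating a single file and then immediately hovering at the trigger point again.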

All of the hardware for our implementation will be controlled by the server host for greatest efficiency of operation. With advanced networking speeds, the bottleneck is unquestionably the device-to-device movement of data (plus database accesses) rather than the NFS traffic across the network.

CHPC will provide the HSM services to our users by providing a networked filesystem across all of our compute servers. Any user who wishes to take advantage of these services merely needs to move files into this filesystem.

The University of Utah participated in SC97, the largest convention on high-performance computing and networking, held at the San Jose Convention Center in San Jose, CA, November 15-21, 1997.

Notable visitors to the UofU booth included Victor H. Reis, Assistant Secretary for Defense Programs, U.S. Dept. of Energy, and Gilbert G. Weigand, Deputy Assistant Secretary and Senior Information Officer, Strategic Computing and Simulation Division.

The conference has grown in 10 years to more than 5,000 participants. Abstracts of research exhibits and conference proceedings can be found at http://www.supercomp.org/sc97. Reflecting the growing importance of communications as part of computing, the conference has added "networking" to its high-performance computing theme. An exhibit on the history of the Internet can be found at http://scxy.tc.cornell.edu/sc97/inet_history97/.

The UofU exhibit and booth were coordinated by the Center for High Performance Computing with contributions by several research groups and centers. Special thanks go to Julia Caldwell and the CHPC User Services Group, Chris Johnson and his graduate students, and to Robert McDermott.

The Center for High Performance Computing plans to organize a research exhibit at SC98, to be held in Orlando on November 7-13, 1998. You can find information on SC98 at http://www.supercomp.org/sc98. CHPC will solicit research groups to participate in this exhibit and will contact you in late summer for updates on your research and exhibit material.

Following is a list of the posters and web sites that were presented at the UofU research exhibit:

Posters:

  • Center for Simulation of Accidental Fires & Explosions (C-SAFE) -- David W. Pershing, Thomas C. Henderson, Philip J. Smith, Gregory A. Voth
  • Scientific Computing and Imaging -- Christopher Johnson, Chuck Hansen, Rob MacLeod
  • High Resolution Fly's Eye Group: Utah Experiment Finds the Highest Energy Particle Ever Seen -- H. Y. Dai, C. C. H. Jui, D. Kieda, E. C. Loh, P. Sokolsky, P. Sommers, R. W. Springer
  • High Energy Physics -- James Ball, Carleton DeTar, Brenda Dingus, David Kieda, Michael Salamon, Yong-Shi Wu
  • Mathematical and Computational Biology -- F. Adler, N. Beebe, D. Bottino, D. Eyre, A. Fogelson, J. Keener, M. Lewis, H. Othmer, M. Owen, E. Palsson
  • Seismic Cat Scan of an Ancient Earthquake -- Dave Morey and Jerry Schuster
  • Combustion and Reacting Flow Simulation -- Philip J. Smith and Industrial Collaborators
  • Meteorological Studies: The National Oceanic and Atmospheric Administration (NOAA) Cooperative Institute for Regional Prediction -- John D. Horel and W. James Steenburg
  • Materials and High Speed Optics -- David Ailion, Rui-Rui Du, Orest Symko, Craig Taylor, Valy Vardeny, Clayton Williams, George Williams
  • The Henry Eyring Center for Theoretical Chemistry -- Gregory A. Voth, Thanh Truong, and Jack Simons

Web sites (please find all links at http://www.chpc.utah.edu/sc97/):

  • Avalanche, Scalable Parallel Processor Project
  • Center for the Simulation of Accidental Fires & Explosions
  • Department of Chemistry with Computational Chemistry (Thanh N. Truong)
  • Combustion and Reacting Flow Modeling
  • Computational Cardiac Dynamics
  • Cosmic Ray Research and Astrophysics Education
  • Henry Eyring Center for Theoretical Chemistry (The Voth Group)
  • Geophysics
  • Medical Imaging Research Lab
  • Meteorology
  • Quantum Chromodynamics Group
  • SCI Utah - Scientific Computing and Imaging