Over the next two quarters CHPC will be implementing changes to the allocation and accounting policies, which were approved at the last meeting of the Faculty Advisory Board.

Fall quarter (beginning 10/1/97):

  • Default allocation goes from 50 to 200 SU
  • Disk storage charges decrease from $.25/MB/month to $.05/MB/month.
  • Archival storage (permanent tape): $.25/GB/month (a sample cost calculation follows this list)
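For anyone estimating charges under the new rates, here is a minimal sketch in Python (purely illustrative, not an official CHPC tool; the 500 MB and 10 GB volumes are hypothetical examples):

    # Estimate monthly storage charges under the fall-quarter rates listed above.
    DISK_RATE_PER_MB = 0.05   # dollars per MB per month (new disk rate)
    TAPE_RATE_PER_GB = 0.25   # dollars per GB per month (archival/permanent tape)

    def monthly_storage_cost(disk_mb, tape_gb):
        """Return the estimated monthly charge in dollars."""
        return disk_mb * DISK_RATE_PER_MB + tape_gb * TAPE_RATE_PER_GB

    # Example: 500 MB of disk plus 10 GB of permanent tape
    #   500 * 0.05 + 10 * 0.25 = 25.00 + 2.50 = 27.50
    print(monthly_storage_cost(500, 10))   # 27.5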

Winter quarter (beginning 1/1/98):

  • Allocations no longer required for the IBM RS6K cluster
  • Allocations remain for SGI Power Challenge
  • Allocations begin on the IBM SP and the SGI Origin 2000
  • SU devalued and default allocation changed as follows:
    Machine                  SU per CPU Hour    Default Allocation
    IBM SP                   1 SU               10 SU
    SGI Origin 2000          1 SU               10 SU
    SGI Power Challenge      .5 SU              5 SU

Any existing allocations for Winter Quarter (and any for Spring and Summer as well) will accordingly be divided by 12. These SUs will be valid on the Power Challenge only.
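As a worked example of the new accounting, the following sketch in Python (purely illustrative, not an official CHPC tool) applies the per-machine SU rates from the table above and the divide-by-12 conversion just described; the 100 CPU-hour job and the 600 SU starting balance are hypothetical:

    # Winter-quarter SU rates per CPU hour, from the table above.
    SU_PER_CPU_HOUR = {
        "IBM SP": 1.0,
        "SGI Origin 2000": 1.0,
        "SGI Power Challenge": 0.5,
    }

    def su_charge(machine, cpu_hours):
        """SUs debited for a job using cpu_hours on the given machine."""
        return SU_PER_CPU_HOUR[machine] * cpu_hours

    def convert_existing_allocation(old_sus):
        """Convert a pre-devaluation allocation (valid on the Power Challenge only)."""
        return old_sus / 12.0

    print(su_charge("SGI Power Challenge", 100))    # 50.0 SU
    print(convert_existing_allocation(600))         # 50.0 SU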

Proposals are due on December 1st for the quarter beginning January 1st. You may submit a proposal for up to four quarters. Allocations will be granted on a per-system basis, so be sure to include the system(s) you are requesting time on in your proposal.

Allocation forms are available on-line at http://www.chpc.utah.edu/policies/allocation.html

Please note: Allocations may not be transferred between machines.

CHPC System and Networking Support

by David Huth, Assistant Director

CHPC's system and networking staff is growing to fulfill the expanded role of the Center for High Performance Computing. CHPC staff efforts are concentrated on supporting the occupants of the Intermountain Networking and Scientific Computation Center (INSCC); providing support for high performance computing resources; and providing advanced network, security, distributed file system, and second-tier UNIX administration support for the University community. In addition, system staff will implement and support the C-SAFE intranet.

The INSCC building's computing infrastructure is the top priority. Services are in development and the building's internal network has been designed; network equipment is scheduled to arrive in early November, and the Sun Enterprise 10000 fileserver is due to arrive in mid-December. CHPC will be the first entity to occupy the new building, in mid to late November. Individual research groups will move in as the services they require become available.

Although the move will require re-addressing most of the CHPC systems, there will be no disruption of service on any of the CHPC compute machines (this work will be done during scheduled downtimes). Several system enhancements will coincide with the move: statistical applications will move from AIX to Solaris, the Hierarchical Storage Management system will receive a new control processor, and the SGI Origin 2000 will enter full production.

Many CHPC system staff are involved with general campus issues and serve on University committees dealing with networking and computing. Areas of particular involvement include the next generation campus network, information resource security, distributed file systems, and digital video. Following is a list of CHPC network and system staff and their respective areas of responsibility. Although each person can be identified with a specific area of concentration, the staff works together in planning and implementing network and computing solutions.

David Huth - Assistant Director CHPC Systems and Networks.

Lloyd Caldwell - High Performance Compute systems. Primarily responsible for the SGI Origin 2000 and RS6000 cluster, and involved with the operation of all HPC systems.

Janet Curtis - High Performance Compute systems. Primarily responsible for the IBM SP and Hierarchical Storage Management systems, and involved with the operation of all HPC systems.

Lou Langholtz - Software Systems. Responsible for systems including WWW, FTP, the intranet, local utilities and other user-oriented applications.

Tim Ma - Systems Administration. Responsible for fileservers, client configurations, mail and other system-wide applications.

Jimmy Miklavcic - Digital Video. Responsible for graphics systems, video conferencing, digital imaging and systems related to digital presentation.

Hossein Shahrebani - Physical Systems. Responsible for UPS, physical computing infrastructure, hardware and general computing environment issues.

John Storm - High Performance Networking. All aspects of high performance local and wide-area networking and communications.

Grant Weiler - Security. Responsible for dealing with all aspects of information resource security.

System staff will grow as the needs of INSCC clients, C-SAFE and the University community increase. The INSCC building and the research community will continually present the staff with new challenges, which will be met through development and testing. One crucial resource for future solutions is the Advanced Network Lab, located in the INSCC building, which will serve as a testbed for next generation networks and computing solutions.

Secretary of Energy Federico Peña announced on July 31, 1997 that the University of Utah was one of five universities awarded a five year, $20 million contract to establish a research center with the goal of advancing the capabilities of large-scale computer modeling and simulation.

Dr. David Pershing will lead the new Center for the Simulation of Accidental Fires & Explosions (C-SAFE), which includes twenty UofU faculty and partners from Brigham Young University, Worcester Polytechnic Institute, and Thiokol Corporation.

The major objective of the center is to develop state-of-the-art, science-based tools for the numerical simulation of accidental fires and explosions, especially within the context of handling and storage of highly flammable materials. The ultimate goal of C-SAFE is to provide a system comprising a problem-solving environment in which fundamental chemistry and engineering physics are fully coupled with non-linear solvers, optimization, computational steering, visualization and experimental data verification. The availability of simulations using this system will help to better evaluate the risks and safety issues associated with fires and explosions.

Center director Dr. David Pershing, who also is dean of the College of Engineering and acting Vice President for Budget and Planning, said the simulation will include physical and chemical changes in containment vessels and structures during fires and explosions. It also will simulate the mechanical stress and rupture of containers and the chemistry and physics of organic, metallic and energetic material inside the containment vessels. The simulations will be conducted up to, but not beyond, the point of detonation.

Other universities selected from the 48 applicants were Stanford University, California Institute of Technology, the University of Chicago, and the University of Illinois at Urbana/Champaign. The three DOE National Laboratories, Lawrence Livermore, Los Alamos and Sandia, are partners in the Academic Strategic Alliances Program (ASAP) that created the centers, and UofU faculty will work closely with Laboratory scientists.

The ASAP objective is to "establish and validate the practices of large scale modeling, simulation and computation as a viable scientific methodology in key scientific and engineering applications that support DOE science-based stockpile stewardship goals and objectives."

C-SAFE researchers will use DOE computers capable of 10-30 TeraFlops (one TeraFlop is one trillion floating-point operations per second). The major challenges in high-end computing include scalable hardware and software architectures and I/O, parallel problem-solving environments, communications networks, visualization, algorithms, and data management. There are also major modeling challenges in the physical science areas, such as computational physics, energetic materials and materials modeling, including the effects of system and component aging. The five university centers will be given access to approximately 10% of the ASCI platforms at the Defense Programs laboratories.

The university alliance program is an integral part of DOE's plans for complying with the Comprehensive Test Ban Treaty and of the underlying Science-Based Stockpile Stewardship program. A key component of this program is the Accelerated Strategic Computing Initiative (ASCI), which over the course of the next decade will create the leading-edge computational modeling and simulation capabilities that are essential for maintaining the safety and reliability of the U.S. stockpile. No classified research will be done at the U.

Dr. Richard K. Koehn, vice president for research at the U. of U., said several factors contributed to the U.'s successful bid, including faculty expertise in high-performance computing, theoretical chemistry and computational engineering. Of critical importance, Koehn said, was the ability of faculty from different departments to integrate themselves into an organized research team. Other contributing factors, he said, include the recently established Silicon Graphics Visual Supercomputing Center, a partnership between the U. and Silicon Graphics Inc.; a separate partnership with IBM; the U.'s strong commitment to high performance computing through its continued investment in the Center for High Performance Computing; and a new federally funded building to house the center. C-SAFE will be located primarily in the new Intermountain Network and Scientific Computation building.

IBM is sponsoring a contest to name CHPC's SP system. The winner will receive two tickets to a Jazz game! Entries that tie in with the C-SAFE project (see the article in this issue) will be given preference. Please submit entries to Julia Caldwell, jhc@chpc.utah.edu, before November 7th, 1997.

Internet2

by Julio Facelli, Director CHPC

We are moving into an exciting new world of networking technologies. After the success of the Internet, a number of high-end users from government and universities are pressing for a new generation of networks. Why a new generation of networks? Because many applications, such as:

  • Digital Libraries
  • Virtual Labs (collaboratories)
  • Immersion Environments
  • Tele-medicine
  • Distributed Computing,

etc., will require networks with higher bandwidths, lower latencies and, more importantly, guaranteed quality of service.

The networking model of the Internet is a collection of computers or network devices that are able to receive and send messages according to certain rules (routes). These messages are handled on a "best effort" basis: most of the time quite fast, but sometimes very slowly, depending on the load on the system. This is fine for applications like FTP and even telnet, but the more interactive applications we envision need a network that is predictable and in which the quality of service (QoS) is guaranteed once the communication (circuit) has been established. Does this sound familiar? Yes, this is like the telephone system: once you reach your party the conversation does not get interrupted, the service is consistent, and it is predictable.

Two main initiatives have been established to advance networking in this direction: the academic community has formed the Internet2 project (http://www.internet2.edu), and the government mission agencies have proposed the Next Generation Internet (NGI) initiative (http://www.ngi.gov/). These initiatives, from the two communities that traditionally have been at the forefront of computing and networking technologies, are quite complementary, and I am glad to report that the University of Utah is a participant in both.

The University of Utah has been awarded an NSF grant to connect to the vBNS backbone (http://www.vbns.net/). The vBNS is the most advanced network in the US; it serves the NSF research community and is the current backbone for the Internet2 project. Professor Tom Henderson, Computer Science, is the PI of the vBNS grant, and the grant application describes the many important research applications at the U that will make use of this additional network resource (http://www.cs.utah.edu/~tch/vBNS/). The University is considering different options for connecting to the vBNS backbone. We anticipate that the connection will be operational by the end of the fall quarter. In the near term, vBNS services will be provided to the MEB and INSCC buildings, with services to other locations scheduled for later in 1998. The University is also looking at different options to provide high-speed access to other government networks, such as those of DOE and NASA. These networking services are critical to keeping our high-end computing users at the leading edge of the technology.

The University is also a member of the Internet2 project. Within this project there is a very active group dedicated to developing applications that can take full advantage of the new generation of networks. While research applications are quite welcome, we are especially interested in identifying teaching/learning Internet2 applications. The University has identified some funding that may be available for these applications. If you have any ideas that you would like to explore, I urge you to contact me directly (facelli@chpc.utah.edu). As the applications coordinator for the University, I am very happy to help you secure the resources needed to develop your project.
