Visualization on the IBM SP-2

by Adam Tinniswood, Electrical Engineering Department

The Bioelectromagnetics group in the Department of Electrical Engineering (University of Utah) routinely runs extremely large simulations of the penetration of electromagnetic fields into the human body. We investigate radiation from communication devices such as mobile telephones and pagers, and we have also carried out extensive studies of whole and partial body resonances in both human and primate models. These models have become extremely large for two reasons: the requirement for anatomical detail, and the fact that in our modeling technique the required resolution increases with frequency. The whole body model, which has a resolution of 2 mm, has over 20 million cells and requires over 1 Gbyte of memory in a Finite Difference Time Domain (FDTD) simulation. The FDTD technique models the propagation of electromagnetic fields through a dielectric medium (in this case the human body).

The fundamental aim of most of our simulations is to determine the Specific Absorption Rate (SAR), a mass-normalized measure of absorbed power. Until recently, we could draw our conclusions from a small group of numbers, or in some cases a single number. For example, in a simulation of a mobile telephone next to a human head, we look for the highest SAR averaged over 1 g of tissue; this is the key figure, as it is the US standard for localized SAR. In many cases, however, much larger scale visualization of E-field and SAR values throughout the model is required. When the model was smaller, programs such as Matlab could be used to view this data, generally in 2D slices. Now there is a need for full 3D visualization of both the initial model and the simulated results, and as this data is distributed over a number of processors, alternative tools are required.
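For a pointwise figure, SAR follows from the local tissue conductivity, RMS electric field strength, and mass density as SAR = sigma * |E|^2 / rho. A minimal sketch (not the group's actual code; the tissue values below are representative assumptions for illustration only):

```python
def sar(sigma, e_rms, rho):
    """Pointwise Specific Absorption Rate in W/kg.

    sigma: tissue conductivity (S/m)
    e_rms: RMS electric field magnitude (V/m)
    rho:   tissue mass density (kg/m^3)
    """
    return sigma * e_rms ** 2 / rho

# Example with assumed muscle-like values: sigma ~ 1.0 S/m,
# rho ~ 1000 kg/m^3, and a 10 V/m RMS local field.
print(sar(1.0, 10.0, 1000.0))  # 0.1 W/kg
```

The regulatory figure is not this pointwise value but its average over a 1 g tissue mass, which requires summing absorbed power and mass over a contiguous block of cells.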

Recently the group has started using more complex telephone models derived from the CAD files used to design the device. These require a resolution of around 1 mm, and thus also careful positioning next to the head model.

pV3 (Parallel Visual 3) is a convenient solution to this when using the IBM SP-2, requiring little modification to the existing code. The user interface is enhanced by the additional package pV3-Gold, which adds a friendlier GUI. Both pV3 and pV3-Gold are available free of charge and include extensive documentation.

Adding pV3 to your simulation

When I first approached the use of pV3 in our simulations, I was told that only minor modifications were required. In a nutshell, pV3 links a workstation to the IBM SP-2: visualization takes place on the workstation, simulation on the SP-2. The simulation continues essentially as normal; at a user-defined interval, a small amount of information is copied into arrays which the workstation then uses for visualization. My initial thought was that "minor modifications" could turn out to be quite major, but this is far from the case. Extra code is required, but it is almost trivial: its only purpose is to tell pV3 the size and shape of your simulation space and how it is distributed over the parallel machine. pV3 also needs the surface of the problem space on each processor, which gives the limits of the data held there. Finally, you supply a function which copies the data to be visualized into an array. This function is called from an update routine which you must call at some point (or points) in your code; since our code is iterative, we call the update routine every 10 iterations. The update routine asks the workstation which data set it wants, and you react by copying the correct data into the array. In our simulation we can display E-fields, SAR or the model, and obviously this can be changed to suit the user's needs. Most of this information and code can be taken from one of the examples which comes with pV3 and adjusted accordingly.
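The copy-on-request pattern described above can be sketched as follows. This is a schematic stand-in, not the actual pV3 client API (the real code registers callbacks with the pV3 library); the field names, sizes, and the `requested` argument standing in for the server's query are all invented for illustration:

```python
def make_update(fields):
    """Return an update hook that copies the requested data set into a
    buffer shared with the visualization side. `fields` maps data-set
    names to the simulation's flattened field arrays."""
    def update(requested, out):
        if requested in fields:
            # Copy rather than alias: the simulation keeps
            # overwriting its own arrays on later iterations.
            out[:] = fields[requested]
        return out
    return update

# Toy per-processor "simulation" state: three selectable data sets.
fields = {
    "model":   [0.0] * 8,
    "e_field": [1.0] * 8,
    "sar":     [0.5] * 8,
}
update = make_update(fields)
out = [0.0] * 8   # buffer handed to the visualization workstation

for it in range(1, 101):
    # ... one FDTD time step would run here ...
    if it % 10 == 0:          # call the update hook every 10 iterations
        update("e_field", out)
```

Selecting a different data set in the GUI corresponds to the server requesting a different key, so the same hook serves the model, E-field, and SAR displays.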

Actually using pV3

The first thing to note when using pV3 is that you continue to run your simulation in the same way. If the pV3 server is not running, the simulation is unaffected. If you do start the pV3 server, it will look for pV3 clients and link into them. How does it know where to look for them? You need a pvm process running on each processor of the parallel machine and on the visualization workstation. Once this is set up, you really are ready to go. Start the pV3-Gold interface and it will wait for the parallel machine to respond (in our case, at the next update call). When it does, the interface appears and displays your default data (in our case, the model). pV3 determines which data sets are available, and selecting one from a dialog causes the simulation to copy the corresponding information into the array used by the visualization workstation. It may sound complicated, but it really isn't. The steps are as follows:

  1. Add in the initialization routines to your simulation code, which define the problem size and shape and the distribution of the data on the parallel machine.
  2. Add in the routine to copy the data to be visualized into an array. This array will be passed to the visualization workstation.
  3. Set up pvm3 on the workstation, with a hostfile containing the processors you will be using (including the workstation itself). Run pvmd3 hostfile to start the daemon.
  4. Start the simulation as normal.
  5. Start pV3-Gold and wait for the first pV3 update. Everything should now be working.
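For step 3, a minimal hostfile is simply one host name per line; the names below are placeholders, and the PVM documentation describes the per-host options you can append:

```
# PVM hostfile: one host per line.
# The visualization workstation must be listed along with the SP-2 nodes.
viz-workstation.example.edu
sp2-node01.example.edu
sp2-node02.example.edu
```

With this file in place, `pvmd3 hostfile` on the workstation starts the daemon and enrolls the listed hosts.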

If you have a long simulation, you can start pV3-Gold at the beginning and then exit from it until more visualization is required. We routinely do this when using a CAD-derived telephone model. At the start of the simulation the telephone is positioned using pV3-Gold. Once the correct position is established, the actual simulation is allowed to start. At any point during the simulation we can restart pV3-Gold to examine the data. Starting and stopping pV3 does not affect the simulation in any way.

Some points to note:

  1. The workstation is required to have a 3D OpenGL graphics accelerator.
  2. Although you may be using MPI for inter-process communication in your simulation, you will need pvm3.3 or higher for the communication between your processes and the workstation.

Computer Security Advisory Team

by David Huth, Assistant Director CHPC

Computer and network security? Who needs it? It sometimes hinders easy access to machines we would like to reach, and how can one ever remember all of those passwords? After all, the University has a mission to provide information, not to keep it locked behind security firewalls and the like.

The call comes in from UCLA: someone has gained access to several machines at their site, launched SPAM attacks from some hosts, and severely compromised others - including destruction of research data. We are a pass-through site: a perpetrator has compromised machines in another University of Utah Department and obtained a valid username and password for hostx, from which the attack on UCLA was launched. The compromised Department, like UCLA, has several severely compromised machines, including destroyed research data. The Department was vulnerable through several open doors; all of its machines are in the process of being rebuilt with a heightened level of security.

The calls come in from several internet providers: two hosts at the University, in two different Departments, are spewing unsolicited electronic mail, causing a denial of service at the providers' sites. The SPAM bots are shut down; again the Departments are pass-through sites. But how was unwarranted access gained to these hosts? Months prior to the denial of service attack, machines in yet another Department are compromised. The perpetrator launches a sniffer, collecting clear-text username and password information on all TCP transactions across the building network. The sniffer logs provided easy access into the attack-site hosts. Clean-up efforts included rebuilding machines, implementing several security measures, and forced password changes at several Departments.

The call comes in from Israel: the University of Utah is streaming so much internet traffic into their site that valid network traffic is at a standstill. Reports from within the University indicate that the University network is experiencing heavy loads, yet hours pass before anyone realizes there is a problem. Israel blocks all University of Utah traffic and opens a trouble ticket with SPRINT. A compromised University of Utah machine is finally isolated and removed from the network, only hours before SPRINT could (and would) have taken the University of Utah off the internet.

These are but a few of the recent computer and network security incidents the University of Utah has experienced. The first of them (the Israel denial of service attack) prompted the formalization of the Computer Security Advisory Team (CSAT) and the creation of the Computer Security Response Team (CSRT). In each case, CSAT responded and provided support in remedying the incident.

The Computer Security Advisory Team exists to provide the University with proactive and reactive recommendations promoting campus-wide network and computer security. CSAT is comprised of University computer professionals with a vested interest in security and experience with University networks and computer configurations and their inherent security vulnerabilities. It is CSAT's mission to: formulate security policy recommendations pertaining to the use of University Information Resources; enhance the University's computer and network security implementation and minimize vulnerabilities; and reduce response time to University security breaches.

To respond to potential University security breaches and minimize response time to attacks, the Computer Security Response Team was initiated. CSRT is available to any entity on campus experiencing symptoms of a security breach. The Center for High Performance Computing provides funding and staff resources for the operation of CSAT and CSRT, which report to Cliff Drew, Associate Vice President of Academic Affairs.

In the event of suspect activity, the CSRT can be activated by sending electronic mail to . General security questions or requests for information should be presented to CSAT by sending mail to . CSAT posts security alerts to the general computing professional community and posts host-specific security notices to members of host-specific lists. Please visit for general security information, security archives and links to other security sites.

As the internet community continues to grow, a heightened level of security is crucial - without degrading performance and usability. Individuals as well as the University have a lot to lose, and as the University of Utah continues to be a highly visible research and educational institution, unwelcome visitors are sure to make their presence known. Security implementations are necessary at three levels: 1) the host and server; 2) the local area network; and 3) the wide area network (connection to the internet). CSAT will address security implementations at all three levels, incorporating confounding issues such as remote access, distributed file systems and rapidly increasing network speeds.

For more information, pose questions to

CHPC Installs New UltraSparcII Statistics Server

by Dr. Byron Davis, Staff Scientist

The CHPC has replaced its aging IBM 370 UNIX statistics workstation with a new UltraSparcII workstation. The new statistics server, named Sunspot, is a Sun Ultra-30 running Solaris 2.5.1, with a 300 MHz processor and 2 MB cache. Sunspot presently has 4 GB of scratch space, 1.25 GB of swap space, and 500 MB of real memory, with another 500 MB to be added to bring total real memory to 1 GB. The software packages available on this machine include:

  • SPSS Version 6.1.0 for Solaris is one of the top two comprehensive, integrated systems for statistical data analysis
  • SAS Version 6.12 for Solaris is the other of the "top two" comprehensive, integrated systems for statistical data analysis
  • BMDP Version 7.0 for Solaris is a general purpose statistical package with a number of unique and top rated procedures/algorithms
  • S-PLUS Version 3.4 for Solaris is a large system with over 2000 built-in functions and hundreds of additional functions stored in included libraries
  • STATIT Version 4.3.1 for Solaris is a full purpose statistical package that includes excellent custom procedure writing capabilities
  • SUDAAN Version 7.5 for Solaris is used to analyze data from complex sample surveys and other studies involving clustered data. Our version of Sudaan is SAS-callable
  • LISREL Version 8.14 for Solaris is one of the leading SEM (Structural Equation Modeling) programs specifically designed to accommodate models that include latent variables, measurement errors in both dependent and independent variables, reciprocal causation, simultaneity, and interdependence. Version 8 and above should read SPSS system files. The SIMPLIS command language simplifies LISREL input and helps minimize mistakes in the problem setup
  • PRELIS Version 2.14 for Solaris is a preprocessor for LISREL, but it also has independent utility as it facilitates analysis of binary, categorical, ordinal, censored, continuous, and/or incomplete data. It generates polychoric or polyserial correlations for ordinal data, Tobit covariance matrices for censored data etc. It also does imputation of missing values, tests of univariate and multivariate normality, bootstrap estimates, and multivariate multinomial probit regressions
  • HLM Version 4.0 for Sun can analyze 2-level and more hierarchical generalized linear models and non-linear models. It is also useful for generalized estimating equations, Fisher scoring/EM, standard errors for variance-covariance estimates, and plausible value analysis

To help provide better service and facilitate the exchange of pertinent information among users, an electronic mailing list/listserv is now available for issues affecting and/or relevant to CHPC statistics users. The name of this list is as follows:

To subscribe to this list, send email to In the body of the message type:

subscribe stat-users [optional email address]

For more information about this and other lists, look under our web page at: and select the "What's New" item at the bottom of our home page. Then pick the "CHPC Announces User Information Lists" to get additional information about our user lists. If you wish to obtain an account on Sunspot so you can take advantage of our statistical resources, you can obtain an account request form from our web site under the selection "Forms and Policies."

If you have further questions and/or suggestions regarding statistical services available at the CHPC, please contact Dr. Byron Davis, ext: 5-5604, email:, campus mail: 414 INSCC building.

User Information Mail-Lists Now Available

by Julia Caldwell, Consulting Center Coordinator

Users may now subscribe themselves to lists if they wish to be updated on system status, downtimes, or system changes. There are now five lists:

  • (SGI Origin 2000 Users)
  • (IBM SP Users)
  • (SGI Power Challenge Users)
  • (IBM RS6K Cluster Users)
  • (Statistics Users)

You may subscribe to any or all of these lists. To subscribe, users should send mail to: The body of the message should read:

 subscribe listname [optional_emailaddress] 

Where listname is one of o2k-users, sp-users, powerchallenge-users, cluster-users or stat-users. You may optionally specify the email address where you wish to receive mail from this list.

You may subscribe to as many lists as you wish with one mail message. For example, to subscribe to all of these lists, the body of the message should read:

subscribe o2k-users
subscribe sp-users
subscribe powerchallenge-users
subscribe cluster-users
subscribe stat-users

Subscribing to any one of these lists will automatically also subscribe you to the list which is used to notify our user community of general information.

The Center for High Performance Computing has added three staff members to its System and Network support team. Steve Scott is responsible for general network infrastructure from the desktop computer to the network device and between network devices. He also maintains CD-ROM libraries and works with the general setup of machines and devices in the INSCC building. Wayne Bradford is part of the system support staff focusing on NT systems and their incorporation with UNIX environments. Joe Breen, the newest hire, will work closely with John Storm in the implementation and testing of network solutions within the INSCC building and for the campus in general.

Last Modified: October 06, 2008 @ 21:09:10