Local Measurement of Changes in Shape and Volume Between Serial Volumetric Medical Images: Application to Niemann-Pick Type C Disease Progression

by Jeffrey A. Weiss, Alexander I. Veress, Anton E. Bowden, Richard D. Rabbitt
Department of Bioengineering, University of Utah
Robert J. Gillies, Jean-Philipe Galons
Arizona Cancer Center, University of Arizona

Introduction

The accurate localization of deformation, including changes in shape, surface area and volume, is necessary for the quantification of growth or the effects of disease in biological tissues. In many cases, multiple time points can be imaged during growth or the course of a disease process to allow quantification based on each individual time point. To localize volume and shape changes it is necessary to determine a one-to-one correspondence between a "template" volumetric image dataset and a "target" dataset. We have developed a technique referred to as Warping that uses the principles of continuum mechanics in combination with forces that arise from pointwise intensity differences between template and target images to deform a template image into alignment with a target image (Rabbitt et al., 1995; Weiss et al. 1998).

In collaboration with Robert Gillies of the University of Arizona Cancer Center, we are using the Warping technique to study normal mouse brain development and changes that are induced by Niemann-Pick disease type C (NP-C). NP-C is a defect in cholesterol metabolism. This disease is diagnosed in thousands of children worldwide each year and may affect many more who are misdiagnosed. It is 100% fatal, usually early in the second decade of life. A major biochemical manifestation is the intracellular accumulation of non-esterified cholesterol. This is the result of impaired cholesterol trafficking and is specific for LDL-derived (not endogenously produced) cholesterol. In 1997, the gene for human and murine NP-C was identified. The gene product, NPC1, is a transmembrane glycoprotein. NPC1 knockout mice are now available to study the course of disease progression and to assess the efficacy of treatment regimens.

Currently, there is no quantitative in vivo measure of disease progression. Development of more effective therapies would benefit from non-invasive, quantitative measures. Magnetic Resonance (MR) techniques are ideally suited to provide quantitative, non-invasive information about the progression of this disease in individual patients. The objectives of this collaboration are to first document normal brain growth in unaffected mice, and then to assess alterations in brain development in NP-C mice using paired volumetric MR images of mouse brains acquired during growth.

Methods

Overview of the Warping technique. The Warping algorithm is best understood in the context of a simple example (Figure 1). We desire a one-to-one mapping from a template image T (single slice or volumetric) to a target image S. In the context of the NP-C project, T and S may be images of the same normal mouse at two stages of development, images of an NP-C mouse at two stages of development, or an NP-C image and a normal image at a single timepoint. The deformation map designates how template material points move to align with the target image after Warping.

The template image T is defined as a solid or fluid that deforms based on forces derived from the pointwise intensity differences between T and S. The nonlinear mechanics problem, subject to the soft (Bayesian) or hard (Lagrange multiplier) constraint imposed by the image data, can be cast in the form of a potential functional. In addition to the standard mechanical terms, the modified weak form of the momentum equations contains an image-based, spatially inhomogeneous body force that arises from the pointwise differences in intensity between the deforming template and the target. Finite element discretization of the displacement and image intensity fields in the weak form of the momentum equations yields a nonlinear system of equations. Subsequent linearization about a known configuration yields a system of linear equations that forms the basis for an incremental-iterative solution procedure using a quasi-Newton method. The result is the mapping that most closely registers T and S, along with a deformed version of T.
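
In schematic form (our own notation, intended only as a sketch consistent with the description above and not reproduced from the cited papers), the potential functional combines a strain energy term for the template material with a penalized image-matching term,

    E(\varphi) = \int_{\Omega} W\big(\mathbf{C}(\varphi)\big)\, dV
               + \frac{\lambda}{2} \int_{\Omega} \big[\, T(X) - S(\varphi(X)) \,\big]^{2}\, dV ,

where \varphi maps a template material point X to its deformed position, W is the strain energy of the template material with right Cauchy-Green deformation tensor \mathbf{C}, and \lambda is the penalty parameter of the soft constraint. Taking the variation of the image term gives rise to the image-based body force mentioned above, of the form

    \mathbf{b}(X) = \lambda \,\big[\, T(X) - S(\varphi(X)) \,\big]\, \nabla S\big(\varphi(X)\big) .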

Figure 1: Simple example used to illustrate the Warping technique.

Computational aspects. Typical three-dimensional MRI datasets contain about 2 million voxels. Ideally, sampling considerations would suggest the use of one finite element for every eight voxels (about 250,000 elements), with appropriate sampling and averaging of the underlying voxel intensities onto the finite element mesh. However, memory and computation time constraints force the use of lower-resolution meshes (about 100,000 elements). For a structured mesh, this typically results in linear systems containing about 300,000 unknowns. The sparse symmetric "stiffness" matrix is factored using SGI's library of sparse matrix solvers, which take advantage of the SGI Origin2000's SMP architecture; iterative linear equation solvers are ineffective for these problems (and for many other problems in computational solid mechanics). The large memory requirements (4-6 GB) necessitate the use of the CHPC hardware. Typical runs on four processors take 8-12 hours, which includes numerous formations and factorizations of the stiffness matrix during the incremental-iterative solution procedure.
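
As a rough illustration of these numbers (a back-of-the-envelope sketch under our own assumptions about a structured trilinear hexahedral mesh, not a description of the production code), the mesh and system sizes can be estimated as follows:

    # Rough sizing estimate for a structured hexahedral warping mesh.
    # Assumptions (ours): trilinear hexahedral elements, 3 displacement
    # degrees of freedom per node, and an element edge spanning
    # `voxels_per_elem_edge` voxels at the ideal sampling density.

    def mesh_estimate(voxels_per_axis=(128, 128, 128), voxels_per_elem_edge=2):
        nx, ny, nz = (v // voxels_per_elem_edge for v in voxels_per_axis)
        n_elems = nx * ny * nz
        n_nodes = (nx + 1) * (ny + 1) * (nz + 1)
        n_dof = 3 * n_nodes                      # 3 displacement unknowns per node
        return n_elems, n_nodes, n_dof

    elems, nodes, dof = mesh_estimate()          # ~2 million voxels
    print(f"elements: {elems:,}  nodes: {nodes:,}  unknowns: {dof:,}")

At the ideal sampling density this gives roughly 260,000 elements; dropping to about 46 elements per axis gives on the order of 100,000 elements and 300,000 unknowns, consistent with the figures quoted above.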

Example problems. Two test problems that are of relevance to the NP-C study are presented. First, a set of two-dimensional brain images are used (Figure 2). Second, two three-dimensional MRI datasets of normal mouse brains are warped into alignment to demonstrate the typical differences between normal mouse brains at the same development timepoint (Figure 3).

Results

Figure 2: Set of two-dimensional brain images.

Figure 2 illustrates the results obtained for warping the test images of Figure 1 and demonstrates the criteria used to judge the quality of registration. The top-left and middle-left images represent the template and target images, respectively, while the top-right image is the difference image. The goal is to drive the difference image to zero everywhere (all gray) via deformation of the template. The bottom-center image shows the deformed template after warping (compare to the top-center image), while the bottom-right image shows the difference image between the deformed template and the target. These images appear blurred in comparison to the initial template and target images due to the finite element interpolation of the intensity.
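
The registration criterion in the figure is the intensity difference image. Below is a minimal NumPy sketch of how such a difference image and a scalar summary of it might be computed (array names are hypothetical; this is an illustration, not the Warping code itself):

    import numpy as np

    def difference_image(template, target):
        """Signed intensity difference; zero (mid-gray when displayed)
        indicates a perfect local match."""
        return template.astype(np.float64) - target.astype(np.float64)

    def rms_intensity_error(template, target):
        """Scalar summary of the residual misregistration."""
        d = difference_image(template, target)
        return np.sqrt(np.mean(d ** 2))

    # Hypothetical usage: compare the error before and after warping
    # err_before = rms_intensity_error(T, S)
    # err_after  = rms_intensity_error(T_deformed, S)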

Figure 3 illustrates warping of 3D MR images of normal mouse brains for the NP-C study. The top row (left to right) shows volume renderings of the template, target, and deformed template images. The bottom row shows fringe plots of the relative volume (V/V0) of the deformed template. This result localizes the changes in volume that must occur to warp one brain into another. Although these normal brains are similar, localized contractions of the template image (illustrated by the light and dark blue areas in the fringe plots) were necessary to align the datasets. The green circle indicates an area of good registration, while the red circle indicates an area where the registration still needs improvement. This is work in progress.

Figure 3: Warping of 3D MRI of normal mouse brains.
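
The relative volume shown in the fringe plots is the Jacobian determinant of the deformation map, J = det(F) = V/V0, where F = I + du/dX is the deformation gradient. As an illustration only (the actual analysis evaluates F at the finite element integration points), a voxel-wise estimate could be computed from a sampled displacement field with NumPy:

    import numpy as np

    def relative_volume(u, spacing=(1.0, 1.0, 1.0)):
        """Voxel-wise relative volume J = det(F) with F = I + grad(u),
        for a displacement field u of shape (3, nx, ny, nz) sampled on a
        regular grid. Values below 1 indicate local contraction, values
        above 1 local expansion."""
        grads = [np.gradient(u[i], *spacing) for i in range(3)]   # du_i/dX_j
        F = np.zeros(u.shape[1:] + (3, 3))
        for i in range(3):
            for j in range(3):
                F[..., i, j] = grads[i][j]
                if i == j:
                    F[..., i, j] += 1.0                           # add identity
        return np.linalg.det(F)                                   # J = V/V0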

Conclusion and Future Work

Based on these preliminary results, we are confident that Warping can be used to quantify local shape and volume differences between volumetric MRI datasets of mouse brains. We plan to add several enhancements to the software to facilitate the registration of these datasets, including pointwise specification of penalty parameters, modified solid constitutive models that allow relaxation of the volumetric stress, and an improved user interface to facilitate our colleagues' use of the software at the University of Arizona.

References

  1. Bowden AE, Rabbitt RD, Weiss JA: Anatomical registration and segmentation by warping template finite element models. In Laser-Tissue Interaction IX, Steven L. Jacques, Editor, Proc SPIE 3254:469-476, 1998.
  2. Rabbitt RD, Weiss JA, Christensen GE, Miller MI: Mapping of hyperelastic deformable templates using the finite element method. In Vision Geometry IV, Melter RA, Wu AY and Bookstein FL, Editors, Proc SPIE 2573:252-265, 1995.
  3. Weiss JA, Rabbitt RD, Bowden AE: Incorporation of medical image data in finite element models to track strain in soft tissues. In Laser-Tissue Interaction IX, Steven L. Jacques, Editor, Proc SPIE 3254:477-484, 1998.

Acknowledgments

Funding was provided by NIH Grant #RO1-CA082813-01A1 and The Whitaker Foundation. An allocation of computer time from the Center for High Performance Computing (CHPC) at the University of Utah is gratefully acknowledged. CHPC's SGI Origin 2000 system is funded in part by the SGI Supercomputing Visualization Center Grant.

The following policy will be implemented over the next several months:

  • CHPC will periodically attempt to crack all passwords on the systems maintained by the center.
  • CHPC reserves the right to lock cracked accounts at any time; however, we will attempt to notify users with weak passwords and give them an opportunity to remedy the situation before taking this action.
  • Once this action has been taken, users will be required to contact the help desk to get their accounts unlocked. Phone: 581-4439, email: consult@chpc.utah.edu

Please note: No one from CHPC will EVER ask you for your password! If anyone claiming to be a CHPC employee contacts you and asks for your password, refuse and report it immediately to the CHPC help desk.

Password Maintenance

Some CHPC systems share a password database and some do not. At present, passwords are maintained as follows on each platform:

  • raptor - maintained separately.
  • inca/maya - a change on inca will result in a change on maya too. This system is separate from other systems.
  • icebox - maintained separately.
  • sp - maintained separately (a change on one of the interactive nodes updates all of the nodes).
  • sunspot - maintained separately.
  • Other CHPC NIS systems (lab machines, desktops, etc.) - a change on one updates all. Most CHPC users do not need to worry about this; it pertains to CHPC staff and tenants in the INSCC building.
  • NT Domain (Windows desktops) - maintained separately.

Choosing A Password

  • A good password is at least six, and preferably seven or eight, characters long. There are four classes of characters: lowercase, uppercase, numbers, and symbols. You should incorporate at least two of these four classes into your new password (see the sketch after this list).
  • Do not use actual words, whether they are in English or another language. Password cracking programs can be fed dictionaries of words, both English and foreign. These dictionaries are easily available on the net.
  • Avoid the names of fantasy characters (like Gandalf, Vader, Yoda, or Klingon).
  • Do not use the special codes from games (like xyzzy), which are very well known.
  • Do not use a word followed or preceded by a number (like python6).
  • Substituting 1's (ones) for l's (ells) or 3's for e's is easily broken.
  • Do not use your license plate number, phone number, or your spouse's, child's, or lover's name or birthday.
  • Also, do not use repeated patterns (C!C!C!C!) or simple keyboard patterns (qwerty).
  • Do not write your password down anywhere.
  • Do not transmit your password via electronic mail.
  • Use secure shell (ssh) to log on to our systems rather than telnet or rlogin/rsh.
  • Use secure copy (scp) to transfer files rather than ftp.
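
As an illustration of the length and character-class rules above (a simple sketch of our own, not a CHPC-provided tool; real password-cracking checks go well beyond this):

    import string

    def meets_basic_rules(password, min_length=6, min_classes=2):
        """Check the minimum length and that at least `min_classes` of the
        four character classes (lowercase, uppercase, numbers, symbols)
        are present."""
        classes = [
            any(c.islower() for c in password),
            any(c.isupper() for c in password),
            any(c.isdigit() for c in password),
            any(c in string.punctuation for c in password),
        ]
        return len(password) >= min_length and sum(classes) >= min_classes

    # Note: "python6" passes this mechanical check but is still a poor
    # choice, since it is a dictionary word followed by a number.
    # print(meets_basic_rules("python6"))   # True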

CHPC Allocation Policy: Changes and Reminders

by Julia Harrison, Associate Director, CHPC

There are a number of upcoming changes to the CHPC Allocation Policy. On July 1st, 2001 we will begin allocation enforcement on the new Compaq Sierra Cluster, we will discontinue allocation enforcement on the IBM SP, and we will implement a "weighted charging" scheme on the Beowulf Cluster (icebox).

COMPAQ SIERRA CLUSTER

The Service Unit (SU) on this machine is defined as one walltime hour per processor. Users are expected to use a full node at a time (increments of 4 processors); if you request a full node and use only one of its processors, you will still be charged for all 4. In other words, a job that uses one walltime hour on one node is charged 4 service units.
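
To make the charging rule concrete, here is a small sketch of how a Sierra charge could be computed under this policy (our own illustration, not CHPC's accounting code):

    import math

    def sierra_su_charge(processors_requested, walltime_hours):
        """SU charge on the Sierra cluster: whole 4-processor nodes only,
        1 SU per processor per walltime hour."""
        nodes = math.ceil(processors_requested / 4)   # round up to whole nodes
        return nodes * 4 * walltime_hours

    # A 1-processor, 1-hour job still occupies a full node:
    # print(sierra_su_charge(1, 1.0))   # 4.0 SUs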

IBM SP

On July 1st we will discontinue the enforcement of allocations on the IBM SP. The policies for the "voth" nodes will continue (no time limit for voth users, 24 hour limit for others). Those groups with block allocations under the original agreement will have priority on the system until those hours are consumed for the quarter.

WEIGHTED CHARGING on ICEBOX

With the rapid growth of icebox, we now have processors ranging in speed from 350 MHz to 1.3 GHz. Beginning July 1st we will implement a weighted charging mechanism on the Beowulf Cluster (icebox). The metric used will be the clock speed of the fastest CHPC-owned nodes in the cluster (currently 950 MHz). This means that you will be charged 1 service unit for 1 hour of wallclock time on one of the "s950" nodes.

Those PIs who have purchased nodes for the cluster will have their block allocations adjusted appropriately. For example, if your job ran on the 350 MHz nodes for 10 hours on 10 nodes, the charge would be (350/950 X walltime_hours X number_of_nodes), or approximately 36.8 Service Units. Similarly, the same job running on the 950s would be charged 100 SUs, and on the 1.3 GHz nodes the charge would be approximately 137 SUs. Please keep this in mind when preparing your allocation requests.
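
The weighted icebox charge described above can likewise be sketched as follows (illustrative only; the node speeds and the 950 MHz reference are taken from the text):

    REFERENCE_MHZ = 950.0   # fastest CHPC-owned nodes currently in icebox

    def icebox_su_charge(node_mhz, walltime_hours, n_nodes):
        """Weighted SU charge: (node speed / 950 MHz) x walltime x nodes."""
        return (node_mhz / REFERENCE_MHZ) * walltime_hours * n_nodes

    # The examples from the article: 10 nodes for 10 hours
    # print(icebox_su_charge(350, 10, 10))    # ~36.8 SUs
    # print(icebox_su_charge(950, 10, 10))    # 100.0 SUs
    # print(icebox_su_charge(1300, 10, 10))   # ~136.8 SUs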

ALLOCATION REQUESTS DUE JUNE 1st, 2001

Proposals and allocation requests for computer time on the Compaq Sierra Cluster (sierra), the Beowulf Cluster (icebox) and/or the SGI Origin 2000 (raptor) are due by June 1st. We must have this information if you wish to be considered for an allocation of time for the Summer 2001 calendar quarter and/or the subsequent three quarters. This is for additional computer time above the default allocation of 10 Service Units per platform.

  • You may request computer time for up to four quarters.
  • Summer Quarter (Jul-Sep) allocations go into effect on July 1st, 2001
  • Only faculty members can request additional computer time for themselves and those working with them. Please consolidate all projects into one proposal to be listed under the requesting faculty member.
  • Please note: allocations may NOT be transferred between machines, or carried forward from previous quarters.
  • Please see http://www.chpc.utah.edu/policies/alloc_policy.html for full details and links to the allocation form.

CHPC's most recent purchase is the 32-processor Compaq Sierra Cluster. At the time of this writing, the system is being built, tested, and configured. This new cluster brings a unique computing platform to the University. All 8 nodes in this cluster are identical, each containing 4 processors and 8 gigabytes of memory. The 8 nodes are connected through a QSW (Quadrics) switch, which is capable of supporting 16 nodes and therefore allows for future growth.

The complex also has a "management" node that provides the computer administrators a central point of configuration, and a KVM switch that gives a single console access to the management node and the first 2 nodes in the complex. The cluster was ordered with Fibre Channel disks; we have a dual Fibre Channel loop configuration with node 0 (sierra0) as the master and node 1 (sierra1) for failover.

This cluster's implementation is unique. At the most fundamental level, the cluster runs Tru64 V5.0. There is another layer of software, the AlphaServer SC System Software, which runs only on Tru64 V5.0. Because of this software, the cluster often appears as "one" system image rather than 8. This is accomplished through a special file system, AdvFS, which takes the first node (sierra0) and transparently duplicates three of its file systems on all other nodes in the complex; "/", "/usr" and "/var" are replicated by default. In a nutshell, a change made on node 0 (sierra0) appears on all nodes. Particular nodes can be customized, but only through special procedures. This software also allows node installation through replication from node 0. We will also be implementing a high-speed scratch space.

To fully utilize the cluster as a batch machine, we have purchased an interactive node, a Compaq DS-20, which will be called sierra.chpc.utah.edu. Users will have the full suite of tools on this node and will submit their work from it, allowing the eight 4-way SMP nodes to be dedicated to batch work only.

In a typical Compaq shop, jobs would be submitted with the command "prun." It is our intention to implement the Pittsburgh Supercomputing Center's PBS port and to have the Maui scheduler functional. When this is completed, we will be the first supercomputer center to run this sophisticated scheduler on a Compaq Sierra Cluster.

Because of the challenges associated with memory and node sharing, the policy on the machine is that users allocate in multiples of 4 processors. In other words, a user gets the entire node, whether they use all of it or not. Any request that is not a multiple of 4 will be rounded up to the next multiple of 4 for allocation charging. Allocations will be charged on wall clock time per 4-processor node.

With our latest addition, we are excited to see more and more OpenMP and MPI codes. The Compaq system comes with a rich suite of software, and we will also have other common tools, such as TotalView, Vampir Trace, KCC, etc.
