CHPC News, August 2002, Volume 13, Number 3

ADAPT: Telepresent Artistic Collaboratories

by Ellen Bromberg, University of Utah, Department of Modern Dance, and Johannes Birringer, Ohio State University, Department of Dance

The boundaries between art and science are blurring. If the two have been perceived as occupying opposite ends of a spectrum, one can imagine that spectrum now as curving, bringing those previously opposing ends together and creating fascinating hybrids that challenge our desire to create defined categories of practice. Much has been written about how major scientific discoveries were preceded by dreams, intuitions, and leaps of faith - words commonly associated with artistic creation rather than with quantifiable research. And now many artists are working in highly technical digital laboratories. Replacing the cadmium yellows and alizarin crimsons of the classical artist's palette are sensors, microchips, 3-D environments, computers, projectors, video cameras, mixers, network connections, and software: the list goes on and on. The development of digital technology has facilitated the emergence of new forms and venues for art-making. And while one can trace the interweaving genealogy of these forms throughout history, digital technology is helping to give form to states of awareness and experience that were previously formless. Taking the spectrum metaphor a bit further, the merging ends of that hypothetical curve open onto a broader view, with more global dimensions of understanding, experiencing, and representing time, space, and causality.

ADAPT dancers 1

Visual artists are not alone in utilizing digital tools. Dancers, choreographers, and other performers have established new practices as a result of the tremendous potential of these technologies. Historically there has always been a reciprocal relationship between performance environments, tools, processes, and artistic expression. The development of gas lighting for theaters at the beginning of the Romantic era in ballet had a dramatic effect on the kind of work that was presented on stage. The mysterious effects that enhanced the Gothic themes of sylphs and other deadly supernatural beings so popular at the time are an example of how changes in technology supported changes in aesthetics. Dance moved from gaslight to electricity in the 19th century, and perhaps the most famous dancer to exploit the new medium was Loie Fuller, who began staging her innovative electric-light performances in the 1890s.

We tend not to think of phonograph players as high tech, but they were considered as such when they first became available and subsequently replaced the accompanist in the dance studio. From phonographs to cassette tapes to CDs, there has been a continuous process of integrating sound technologies into the classroom and the theater. Today we think nothing of computerized lighting boards, computerized elevators for set changes, innovations in lighting instrumentation, mobile headsets for backstage communication, and CDs and DAT tapes for music playback. All of these innovations have had an effect on the work and its presentation. Knowledge of and experience with these technologies contribute to the palette of possibilities available to choreographers and designers for their work in the theater and in alternative spaces.

ADAPT dancers 2

The Department of Modern Dance and the Center for High Performance Computing at the University of Utah are currently involved in an exciting new project that builds on the rich history of collaboration between dance and technology. ADAPT (Association for Dance and Performance Telematics) was founded in December 2000 as an interdisciplinary collaboration among artists, technologists, and scholars from five educational institutions in the United States. ADAPT is dedicated to research and critical dialogue on performance and media in telematic space using advanced network technologies, such as those developed under the Internet2 initiative (see below). For our purposes, the word "telematics" refers to the practice of video collaboration via the Internet, and "telepresence" refers to the moment of interaction. The participating artists and institutions are: Johannes Birringer of the Ohio State University, Ellen Bromberg of the University of Utah, John Mitchell of Arizona State University, Lisa Naugle of the University of California at Irvine, and Douglas Rosenberg of the University of Wisconsin-Madison.

The objectives of ADAPT are to:

  • Create a virtual site for telematic collaborative inquiry, with the purpose of developing new models of practice and training techniques for the creation of networked dance and performance.
  • Develop a shared mediated space for investigating performance and creative collaboration through a distributed environment across time zones.
  • Situate research within a larger cultural and political context that acknowledges how mediated performances both frame and are framed by issues such as identity, privilege, and access.

Since March of 2001, members of ADAPT have met online monthly to experiment with a variety of methods and processes for distance collaboration. What has emerged from these experiences is a fuller understanding of the nature of telematic space. While it is possible to create online events in which dance/performance, music, and other elements contribute to the presentation of a live event in a single location, our research has also led us to create telepresent events that exist only on the Internet. In addition to our monthly online videoconferences, the group maintains a regular discussion and planning mailing list, and the discourse on telematics that has emerged can be followed on our websites.

These meetings are anything but routine and require extensive media and network support. At CHPC this support is provided mainly by Jimmy Miklavcic (CHPC multimedia expert) and Joe Breen (Assistant Director for Networks). These applications are in many ways as demanding as some of the most challenging scientific applications that have been the mainstream uses of CHPC resources.

There are several artistic and technical issues that are important to discuss here:

Internet

The Internet that we use today was originally created so that government defense agencies and their collaborators at research universities could readily share electronic information. With the advent of commercial and individual use, the Internet has doubled in size and its traffic has increased fourfold annually since 1988. As the network became a commodity enterprise, the ability of the research community to experiment and create new applications using advanced network technologies was compromised. The Internet2 initiative (www.ucaid.edu), along with similar efforts to deploy new advanced networks that sustain innovative work, began in 1998 with the NSF (US National Science Foundation) vBNS project. This loose association of academic and government researchers has re-energized the development of novel network applications that need an advanced infrastructure to operate. The Internet2 community has developed such an advanced network infrastructure, the Abilene network (www.ucaid.edu/abilene), in which experimentation is possible. Using these technologies it is possible to send and receive multiple streams of high-quality video, making it possible for groups of artists such as ADAPT to share resources, expertise, and experiences.

Sensitive Studios

At this point in our project, we are just beginning to shape our understanding of the aesthetic opportunities offered by interactive and distributed environments, especially if we take into consideration the possibilities of online interaction across vast distances and time zones. One of our main tasks is to transform our studios into virtual laboratories where we can rehearse new performance operations that will eventually be connected to media and art practices, interface designs, and visual and sonic languages in other cultural contexts. In this sense, the Internet provides an extended studio for creative production. We play with simultaneity and asynchronicity, loops and superimpositions, delays, breakdowns, and temporal suspensions that become part of these new kinds of cultural conversations.

Our concerns in exploring performance on the Net are the intersections of technology, body, and code; the aesthetics and politics of programming; the poetics of online communication and online contact improvisation; and the relays between architectural structures, institutional structures, and distributed networks. As practical research, telematic dance thus challenges the physical parameters of the studio frame and the framing of dance on film/video. The principles of networked dance (and especially the organization or structuring of content and the appearance of transmitted digital "performance objects") will evolve from our work within the sensitive studio environment and the collaborative techniques of real-time media creation. Since the beginning of our work together, each site has continued to develop its studio space to facilitate these online events. Computers, projectors, video cameras and mixers, network connections, software, and the like are the common tools necessary for this work. Each site has transformed at least one studio space so that it is wired and equipped for telematic connection, performer interaction, and live mixing of streaming video and audio signals.

"The delay created by the network liberated the "events" of motion from the bodies executing them. Like a molted skin, the dancer would leave the gesture behind to be picked up on screen in its interaction with the distanced partners. Watching this process created the sensation of a double present: the first now as it happened before us in real time and the second now as it was experienced in relationship with the other gestural artifacts from the other sites. In both "now's", we experienced something essentially vital about human movement that transcended the technology. It has been a common skepticism by many that technology supercedes the empathic, the visceral, the expressive . . . that it overrides imperfection which is at the core of the human. On the contrary, this experience facilitated empathy, vitality and desire." - Ellen Bromberg

Collaborative Process

We rely on an experiential process of investigation. Given that we are rehearsing the functionality of the technology as well as ideas of content and form, we employ a number of different methodologies in each session. For example, we might designate one site as the Director, with the other participating sites responding to instruction. Direction in this context can mean a multiplicity of tasks and sensitivities, including communicating camera moves or framing, or giving specific instructions for the performers' execution of movement qualities, use of props, facings and relationships to the frame, or ways of interacting with performers at other sites. We begin the process with very specific ideas, and once all the participants are engaged within an environment of image, movement, and sound, we enter an improvisational experience that is metaphorically contained yet fluid enough to maintain a dynamic evolution. In these situations the concept of authorship vanishes and we all become equal participants in a telepresent event.

The members of each site conduct their research as a team, which means each videoconference is a collaboration among teams. The studio setup and physical space vary from location to location, and the online sessions connecting the five sites cross four time zones. Each site adheres to specific hardware and software standards and communication protocols, but new conventions and techniques of real-time performance dialogue (including delay and echo functions, mixing and re-mixing) are only beginning to be established. Conventions of the World Wide Web (browsers, windows, URLs, etc.) are part of the configuration, while older notions of the book (web pages) and theatrical dialogue (person-to-person communication) are displaced by screen-based image interfaces. In essence, telematic dance exists as transmitted images for remote seeing, and thus it resembles online television/cinema.

Theoretical Concerns

Telematic communication has the unique ability to cut across political, economic, and cultural boundaries. Telematics provides a bridge between individual artists in different parts of the globe in that it fosters exploration and the exchange of ideas. Direct connections between artists in disparate locales provide the basis for a new trajectory in which an artist's closest colleagues and confidants may be geographically dispersed, and in which local traditions and practices are readily shared electronically. The invisible networks thus created become online communities of artists and scholars who share their artistic and cultural information in the process of collaborating and creating distributed works of art. These works can be easily shared with an ever-growing electronic audience.

Telepresence is a considerable challenge for the field of dance, since we have no existing aesthetic or cultural models for real-time dance interaction with a physically remote location, nor do we know much about the role or presence of our potential Internet audiences. The bridging of spatial distance via telecommunications (especially if we are operating in camera-originated environments) allows us to examine the emerging conventions of "networked dance," which involve montage, layering, filtering, editing, mixing, and transcoding.

In telepresence the relations between the real and the virtual are always paradoxical, and the staging of online performances foregrounds the ambiguous nature of being "present" in a camera-originated and transmitted environment. The dancer at one site cannot physically affect or manipulate the information on the screen, but the dancer's response can be captured and transmitted, and thus entered into the continuous stream and mixed with other transmissions. As images rise and fall, as sounds and voices are heard and then lost - all happening in the "now" across great distances - one can experience a very gentle flow of interconnectedness and of shared experience. These are the aspects of the work that are most compelling. The potential for meaning emerges from the collaboration of ideas and images, all converging in the ever-present "now": in the space between our divergent locations, time zones, and points of view. Telematic dance is polymorphous movement in a shared stream.

Optimizing Code Performance

In this article, we will try to familiarize users with some aspects of programming on our computer systems, with the goal of making their codes run faster and more efficiently.

The aspect that will probably have the highest impact on code performance is the kind of compiler optimization selected when the program is built; this forms the bulk of the article. In the last section, we discuss some options for user-defined timing routines that are available on both Linux and Tru64 Unix and should be portable to the majority of other Unix-based computer systems.

Compiler Optimizations

A program source that is correct and readable is often not organized for optimal execution, and this is increasingly the case as processors become faster and their instruction sets more complicated. Usually, the first step in code development is to produce correct code without much worry about its speed; subsequently, the code should be optimized for performance. Virtually all compilers provide users with a rich set of options to improve program performance, and these options are often confusing. In what follows we go through the most common and useful optimization options and show how to invoke them in the three compiler suites in use at CHPC: the GNU compilers and the Portland Group (PGI) suite of compilers installed on the Icebox, and the Compaq C and Fortran compilers on the Compaq Sierra. For details of more advanced options, consult the user manuals [1-4] or the man pages of the corresponding compilers.

Four optimization categories

Optimizations can be divided into four basic categories:

  • Local optimization is performed within blocks, which are formed by a set of program statements with a single entry and a single exit (e.g., a subroutine). Types of local optimization include algebraic identity removal, constant folding (replacement of a constant expression by its computed value), and common sub-expression elimination.
  • Global optimizations are performed over all the blocks in the program; that is, the compiler optimizes execution and data flow throughout the entire program.
  • The third category involves loop optimizations, which include loop vectorization, unrolling, and parallelization. Vectorization transforms loops to improve memory performance, unrolling replicates the body of the loop to reduce loop branching overhead, and parallelization distributes the loop iterations onto multiple processors.

Finally, there is function inlining, in which a function call is replaced by the function body in the routine that calls the function. This sometimes speeds up execution by eliminating the function call overhead, but it produces a larger executable and can sometimes slow execution down.
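As a hypothetical C illustration (the function and variable names here are invented for this sketch), inlining turns a call in an inner loop into straight-line code:

    /* sq() is a small helper; calling it inside a loop incurs
       call overhead on every iteration. */
    static double sq(double x) { return x * x; }

    double norm2(const double *v, int n)
    {
        double s = 0.0;
        int i;
        for (i = 0; i < n; i++)
            s += sq(v[i]);   /* with inlining enabled, the compiler
                                effectively substitutes v[i] * v[i]
                                here, eliminating the call */
        return s;
    }

The same loop is also a natural candidate for unrolling and, on processors with SSE support, for vectorization.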

In general, both local and global optimizations provide significant speedup in code execution. Loop transformations can be beneficial on advanced architectures such as the later generations of Intel-compatible processors (Pentium II-IV, AMD Athlon) and the Compaq Alpha chips, but the code should be timed with and without this option to find out whether there is a performance benefit. The same applies to function inlining.

In general, optimizations are invoked by the -Ox option, where x is a number, usually ranging from 0 to 5, with increasingly aggressive optimization at higher levels. More advanced optimizations, such as function inlining, often require specialized compiler flags; example compile lines follow the list below.

  • Local optimizations are invoked by -O1 in all three compilers.
  • Global optimizations are added by the -O2 option.
  • Loop unrolling is invoked by -funroll-loops in GNU, -Munroll in PGI, and -unroll in Compaq. As a convenience, -O3 adds loop unrolling and function inlining to the local and global optimizations in the GNU and Compaq compilers; in PGI these must be specified explicitly.
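For illustration, and assuming the usual driver names (gcc for the GNU C compiler, pgcc for PGI, and cc for the Compaq C compiler; the driver names are assumptions, not taken from the original text), these flags would appear on a compile line roughly as follows:

    gcc -O2 -funroll-loops prog.c -o prog     # GNU
    pgcc -O2 -Munroll prog.c -o prog          # PGI
    cc -O3 prog.c -o prog                     # Compaq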

Processor- and architecture-based optimizations

An additional set of flags that can benefit performance specifies the processor and architecture of the machine on which the program will run. This turns on additional features that improve performance; for example, the multimedia instruction sets of the Pentium and Athlon processors (SSE, SSE2, 3DNow!) can be used to parallelize vector operations. In the case of the PGI compilers, this flag is -tp xx, where xx stands for the type of the processor: p6 for Pentium II and higher (including Athlon and higher), athlon for the AMD Athlon, and athlonxp for the AthlonXP (the processors in the newer dual-processor nodes on Icebox). As Icebox consists of Pentium II, III, and Athlon processors, the safest choice is -tp p6. However, if one is certain the code will run only on Athlons, additional performance can be gained with -tp athlon or -tp athlonxp, together with a set of additional flags that use the Athlons' advanced instruction set. Advanced processor features can be enabled with the -Mvect flag, which turns on loop vectorizing. This option can be useful when the program spends much of its time in loops (e.g., vector and matrix operations). -Mvect has several suboptions, but the default is probably the best choice for the Pentium II and Athlon processors on the Icebox.
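For example, a compile line enabling the Athlon processor type and loop vectorization might look like this (again with the pgcc driver name assumed):

    pgcc -tp athlon -O2 -Mvect prog.c -o prog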

On the Compaq Alpha, the architecture flag is -arch xx, where xx in our case is ev67, the Alpha 21264A chip. One must also add -tune xx to tune the code to this particular architecture.

The GNU 2.96 compiler that is the default on Icebox does not provide any advanced optimizations for the Pentium and higher processors.

Composite optimization flags

Finally, both the PGI and Compaq compilers include the flag -fast, which combines several of the above-mentioned flags to provide the best possible performance in most cases. In the case of PGI, -fast is equivalent to -O2 -Munroll -Mnoframe. For the AthlonXP processors in Icebox, a newer flag, -fastsse, combines -fast with the vectorization option -Mvect and several other switches that enable SSE instruction support; this can provide considerable speedup for vector operations compared to the older Athlon processors. The Compaq -fast includes, among others, -O2, -arch host, -math_library fast, and -tune host. The -math_library fast flag makes certain math library routines run much faster at a slight loss of accuracy. The loss of accuracy is not critical in most applications, but users of this flag should be aware of the possibility and check that their code produces sufficiently accurate results.

Best optimization flags to use

From the discussion above it is clear that there is no single answer to the question of which flags will best speed up a given application. The simplest approach, which will yield a highly optimized executable in most cases, is to combine the architecture flags with -fast. Thus, on Icebox, for all processors except the AthlonXPs, use -tp p6 -fast; on the AthlonXPs, use -tp athlonxp -fastsse. In addition, I recommend experimenting with -Munroll, -Minline, and -Mvect to see if they provide an additional performance increase; for details, consult PGI's User's Guide [5]. On the Compaq Sierra, a recommended starting flag list would be -arch ev67 -tune ev67 -fast. Then add loop unrolling and function inlining, -unroll and -inline, to see if they make any difference.
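Put together, and again with the driver names assumed rather than taken from the original, the recommended starting points would look like this:

    pgcc -tp p6 -fast prog.c -o prog                 # Icebox, Pentium II/III nodes
    pgcc -tp athlonxp -fastsse prog.c -o prog        # Icebox, AthlonXP nodes
    cc -arch ev67 -tune ev67 -fast prog.c -o prog    # Compaq Sierra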

Timing program segments

It is often useful to time critical sections of code. While profilers can do this, they are usually routine- or line-based, so they will not selectively time the sections of code we consider most important. The way around this is to time these sections explicitly by calling system timing functions. There are many date and time functions in both Fortran and C; however, they often differ from system to system and thus are not portable. In what follows we present several approaches that should provide portable timing on a wide range of Unix systems.

In the case of Fortran, there is an intrinsic function SECNDS(), which returns time in seconds since the start of the executable. An example of a program using this function is shown in Figure 1.

The main disadvantage of SECNDS() on Linux is that it is not very precise: it rounds the time to whole seconds. Time with microsecond precision is returned by the Unix system function gettimeofday(). Figure 2 shows a simple Fortran wrapper routine for this function and how to call it from a Fortran program. SECNDS() does return time with microsecond precision on Tru64, so on Sierra either of the two methods will work with acceptable precision.

For a C program, the system function gettimeofday() can be called directly. Figure 3 shows how this is done.
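As a minimal sketch of this approach (the loop being timed is only a placeholder, and this is not the exact listing from Figure 3):

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval t0, t1;
        double elapsed, sum = 0.0;
        long i;

        gettimeofday(&t0, NULL);              /* start the timer */
        for (i = 0; i < 10000000; i++)        /* section to be timed */
            sum += (double)i;
        gettimeofday(&t1, NULL);              /* stop the timer */

        /* tv_sec holds whole seconds, tv_usec the microsecond part */
        elapsed = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_usec - t0.tv_usec) / 1.0e6;
        printf("sum = %e, elapsed = %f seconds\n", sum, elapsed);
        return 0;
    }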

Conclusions

In this article, we have tried to inform users of the possibilities for improving the performance of their codes without any reprogramming. As always, the author will be happy to answer any questions that may arise when reading this article or when applying the information in it.

References
