The Interactive Multi-media Computer System using SGI and NeXT/ISPW Computers

Osamu Takashiro, Takayuki Rai
Sonology Department, Kunitachi College of Music
5-5-1 Kashiwa-cho, Tachikawa-shi, Tokyo 190, Japan
[email protected], [email protected]

Abstract

This paper describes an interactive multi-media computer system that handles both audio and visual data in real time. The system consists of an SGI computer for visual data processing, and a NeXT computer with an IRCAM Signal Processing Workstation (ISPW) for audio data processing. The two computers are connected via Ethernet and exchange data in both directions. The system realizes cross interaction among audio input/output and visual input/output.

1. Introduction

The system has been developed for the creation of interactive multi-media art. It enables the user to control visual events as well as audio events within the Max programming environment, making it easier for composers to realize interaction among audio input/output and visual input/output. Several works have already been realized with this system by O. Takashiro: an interactive multi-media installation presented in October 1996 and two other works for concerts. This paper discusses the technical details of the system and its possibilities for the creation of interactive multi-media computer art/music.

2. System construction

The system hardware consists of a NeXT computer, an IRCAM Signal Processing Workstation (ISPW), an SGI Indigo2 XZ computer with a Galileo Video I/O board, video cameras, and some MIDI equipment. The SGI computer performs real-time processing of video images arriving via the breakout box, and also creates 3D image objects. The ISPW performs real-time audio signal processing and handles MIDI input/output. The NeXT computer, which is the host computer of the ISPW, and the SGI computer are connected through Ethernet. The software for real-time image processing and 3D animation on the SGI computer was written by O. Takashiro in C, using OpenGL, Open Inventor, and the IRIS Video Library. The FTS (version 0.26) client program for the NeXT computer was written in C.

3. Communication process between NeXT/ISPW and SGI computers

The system realizes two-way communication between the NeXT/ISPW and SGI computers. Figure 1 shows the flow of data in the system. The NeXT computer and the SGI computer communicate via Ethernet using a datagram socket and UDP (User Datagram Protocol). Communication between the FTS client process running on the NeXT and Max patches running on the ISPW is realized with FTS's "portal" objects. The main program running on the SGI computer creates a child process using the UNIX system call fork(). This child process receives data from the NeXT/ISPW via Ethernet and transfers it to its parent process through a UNIX shared memory segment. The parent process, which executes the image processing, and the child process, which handles data communication with the NeXT computer, run independently. The image processing can therefore maintain a constant calculation speed without being interrupted by data arriving from the NeXT computer.

[Figure 1: flow of data in this system. Max patches on the ISPW communicate with the FTS client/network process on the NeXT through FTS "portal" objects; the NeXT process and the network process on the SGI computer exchange data over Ethernet sockets; on the SGI, the network process passes data to the image-processing process through a shared memory segment.]
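The process split described above can be sketched in C. This is a minimal, illustrative sketch, not the paper's actual code: fork() creates a child that receives control data (in the real system, via recvfrom() on a bound UDP datagram socket; simulated here by writing a value directly) and hands it to the image-processing parent through a System V shared memory segment, so the parent never blocks on network I/O. The names ControlBlock and shm_roundtrip are assumptions for this sketch.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/ipc.h>
#include <sys/shm.h>

typedef struct {
    volatile int seq;   /* incremented by the child on each new message */
    int value;          /* control value decoded from the "datagram"    */
} ControlBlock;

/* Run one parent/child round trip; returns the value the parent
 * read out of shared memory, or -1 on failure. */
int shm_roundtrip(int payload)
{
    /* private segment, shared with the child across fork() */
    int shmid = shmget(IPC_PRIVATE, sizeof(ControlBlock), IPC_CREAT | 0600);
    if (shmid < 0) return -1;
    ControlBlock *cb = (ControlBlock *)shmat(shmid, NULL, 0);
    memset((void *)cb, 0, sizeof *cb);

    pid_t pid = fork();
    if (pid == 0) {                 /* child: the "network" process      */
        /* real system: block in recvfrom() on a UDP socket here         */
        cb->value = payload;
        cb->seq = 1;
        _exit(0);
    }
    /* parent: the image-processing loop only polls, never blocks        */
    while (cb->seq == 0)
        usleep(1000);               /* roughly one frame's worth of work */
    int got = cb->value;

    waitpid(pid, NULL, 0);
    shmdt((void *)cb);
    shmctl(shmid, IPC_RMID, NULL);
    return got;
}
```

Polling a sequence counter, rather than blocking on the socket in the parent, is what keeps the rendering loop's calculation speed constant.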

4. Real-time image processing

The following real-time image processing functions are realized on the SGI computer:
- pixel transformation
- geometric transformation
- overlapping of video frame information and computer-generated graphics
- creation of 3D image objects
- analysis of incoming video images

In order to realize the image processing in real time, incoming visual data are compressed to one third (120 x 60 pixels) in the SGI computer. In the current version, eight kinds of pixel transformation processes are implemented, including "delay", "edge detect", "emboss", "RGB bit shift", "location shift", "strobe", and "alpha blending". In the "delay" processing, a ring buffer is allocated in memory to keep forty successive frames of incoming images, which enables a delay effect of approximately three seconds. Five kinds of geometric transformation processes are employed: "expansion and contraction", "rotation", "division", "waving", and "dispersion". Several pixel and geometric transformations can be used at the same time. Three types of image blending are employed: "chroma keying", "luma keying", and "transitions". In "chroma keying" an image can be overlaid on another image with a specified key color; in "luma keying" an image can be overlaid on another with a specified level of luminance. Three kinds of transitions are also available: fades, tiles, and wipes. In addition, Inventor 3D object files stored on the hard disk can be added.
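The "delay" transformation above can be sketched as a frame ring buffer: the last forty incoming frames are kept, and each output frame is read a fixed distance behind the write position. The function names (frame_push, frame_delayed) are illustrative, not the paper's.

```c
#include <string.h>

#define W 120            /* compressed frame size used by the system */
#define H 60
#define FRAMES 40        /* forty frames, roughly three seconds      */

typedef struct {
    unsigned char buf[FRAMES][W * H * 3];  /* RGB frames         */
    int head;                              /* next slot to write */
    int count;                             /* frames stored so far */
} DelayLine;

/* store the newest incoming frame */
void frame_push(DelayLine *d, const unsigned char *rgb)
{
    memcpy(d->buf[d->head], rgb, W * H * 3);
    d->head = (d->head + 1) % FRAMES;
    if (d->count < FRAMES) d->count++;
}

/* fetch the frame `delay` frames in the past (0 = most recent);
 * returns NULL until enough frames have arrived */
const unsigned char *frame_delayed(const DelayLine *d, int delay)
{
    if (delay < 0 || delay >= d->count) return NULL;
    int idx = (d->head - 1 - delay + 2 * FRAMES) % FRAMES;
    return d->buf[idx];
}
```

At 120 x 60 RGB, the whole buffer is under 1 MB, which explains why a forty-frame delay was feasible in memory on the hardware of the time.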

4.1 Analysis of incoming images

The software can perform real-time analysis of incoming images. The process keeps calculating the change in bitmap data between two successive video frames held in memory. A single frame can be divided into several fields, and the bitmap calculation can be performed on each field independently. With this method the system can detect the movement of an object, for instance a human body (a dancer or performer) in front of the camera. The analysed data are transmitted to the NeXT/ISPW computer and can control any parameters of the audio signal processing as well as of the image processing.
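The per-field frame-difference analysis can be sketched as follows. The paper does not give the field grid or the exact measure, so this sketch assumes a 4 x 3 grid over grayscale frames and uses the sum of absolute pixel differences per field as the activity value sent to the NeXT/ISPW.

```c
#include <stdlib.h>

#define W 120           /* compressed frame size used by the system */
#define H 60
#define COLS 4          /* assumed grid: 4 x 3 fields               */
#define ROWS 3

/* out[ROWS*COLS] receives per-field activity (sum of |cur - prev|)
 * for single-channel frames of W x H pixels */
void field_activity(const unsigned char *prev, const unsigned char *cur,
                    long out[ROWS * COLS])
{
    int fw = W / COLS, fh = H / ROWS;
    for (int i = 0; i < ROWS * COLS; i++) out[i] = 0;
    for (int y = 0; y < H; y++) {
        int fy = y / fh; if (fy >= ROWS) fy = ROWS - 1;
        for (int x = 0; x < W; x++) {
            int fx = x / fw; if (fx >= COLS) fx = COLS - 1;
            out[fy * COLS + fx] += labs(cur[y * W + x] - prev[y * W + x]);
        }
    }
}
```

A dancer moving in front of one part of the camera image raises the activity value of the corresponding field only, which is what lets the fields act as independent control sources.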

5. Handling visual events in Max patches

Some Max subpatches have been written in order to handle visual events in Max. Using these subpatches, composers can write Max programs that include visual-event control and realize the interaction between visual events and audio events. Figure 2 is an example patch of visual-event control.

Figure 2: an example patch of visual-event control
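The visual-event messages from the Max patches must arrive at the SGI process as datagrams in some agreed format. The paper does not specify it, so the sketch below assumes a simple text form "<command> <index> <value>" (e.g. "pixfx 3 0.50") and shows how the SGI-side network process might decode it; the names VisualEvent and parse_event are hypothetical.

```c
#include <stdio.h>
#include <string.h>

typedef struct {
    char  cmd[16];   /* e.g. an assumed "pixfx" for a pixel transform */
    int   index;     /* which transformation or field                 */
    float value;     /* control value from the Max patch              */
} VisualEvent;

/* returns 1 on success, 0 if the datagram does not match the format */
int parse_event(const char *msg, VisualEvent *ev)
{
    return sscanf(msg, "%15s %d %f", ev->cmd, &ev->index, &ev->value) == 3;
}
```

A plain text format like this is easy to emit from an FTS client and cheap to parse in the child process before the values are written into the shared memory segment.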

6. Creation of a work using the system

Recently, the interactive multi-media piece "The Three Variations" for percussion was realized with this system by O. Takashiro. Figure 3 shows the system diagram of this piece. In this work, three cameras shoot the percussionist performing, while transformed video images as well as moving 3D image objects are displayed on a screen behind the percussion. The performance of the percussionist controls both the visual images on the screen and the sound from the loudspeakers in real time. Thus a real-time interaction between music and visual image is realized.

[Figure 3: the system diagram of "The Three Variations". The diagram shows the player with timpani, conga, and vibraphone in front of a screen, flanked by loudspeakers and projectors; a sensor interface, microphone, and video cameras feed the NeXT/ISPW and Indigo2 XZ computers, which are connected via Ethernet; the Indigo2 XZ sends visual data to the projector, while the NeXT/ISPW exchanges MIDI and audio data with a MIDI synthesizer and a mixer driving the loudspeakers (LSP).]

7. Conclusion

In order to maintain real-time image processing, this system sacrifices the quality of the visual images. In addition, the Ethernet data transfer speed (10 Mbps) sometimes causes delays in the communication between the two computers. Recently, higher-performance SGI computers have become available, and Fast Ethernet (100 Mbps) is becoming standard. In this enhanced environment the performance of the system will improve. A dual-processor machine and Max for the SGI computer may also give the system more flexibility and possibilities in the future.

8. References

[1] Creek, P. and Curtis, C. IRIS Media Libraries Programming Guide. Silicon Graphics, Inc.
[2] Neider, J. et al. OpenGL Programming Guide (1993). Addison-Wesley.
[3] Wernecke, J. The Open Inventor Mentor (1994). Addison-Wesley.
[4] Puckette, M. FTS: A Real-Time Monitor for Multiprocessor Music Synthesis (1991). Computer Music Journal 15(3). MIT Press.