GROOVE & VAMPIRE - (1970)

-GROOVE (Generated Real-time Output Operations on Voltage-controlled Equipment)
-VAMPIRE (Video And Music Program for Interactive Realtime Exploration/Experimentation)


In 1970, Max Mathews, who had coded the first real computer music synthesis program back in 1957, pioneered GROOVE (Generated Real-time Output Operations on Voltage-controlled Equipment), the first fully developed hybrid system for music synthesis, built around a Honeywell DDP-224 computer with a simple cathode ray tube display and disk and tape storage devices. A frame-buffered composite NTSC output was integrated, and a synesthetic realtime instrument was born. Laurie Spiegel used it to compose music and animate video synchronously. In the mid '70s, Laurie wrote software in FORTRAN IV, with a RAND tablet for input, to create a new visual instrument. Click to see some of the output: 1 - 2 - 3 - 4 - 5 - 6 - 7 - 8.

Due to massive hardware changes at Bell Labs, the GROOVE system met with an untimely demise in the mid '70s. Undaunted, Laurie hacked things back together and evolved the massive, room-sized mid-1960s DDP-224 computer platform into the VAMPIRE (Video And Music Program for Interactive Realtime Exploration/Experimentation). She eventually got to know Dr. Kenneth Knowlton, the computer graphics pioneer and master of evolutionary algorithms, and they began working together on various projects. After learning some graphics coding there, Laurie became intrigued with the idea of trying to make musical structure visible, and embarked on the strange mission of bringing GROOVE's compositional capabilities to bear on the frame buffer output, particularly the ideas of time functions, transfer functions, and interconnectible software modules. The modified graphical GROOVE evolved from its beginnings as a program called RTV (Realtime Video), and Ken Knowlton contributed a way of addressing the system with an array of instructions. As a composer of music, Laurie discovered that she enjoyed playing the drawing parameters in real time like a musical instrument. "I could move around in an image and change the size, color, texture, and other parameters in real time as I drew it, using knobs and switches. I would draw with one hand while manipulating the various visual parameters with the other, using the 3D joystick, switches, push buttons and knobs."

"Things got to the stage in visual improvisation at which I had found myself needing to switch over from improvising to composing in audible music several years earlier. The capabilities available to me had gotten to be more than I could sensitively and intelligently control in realtime, in one pass, anywhere near the limits of what I felt was their aesthetic potential. Concurrently, I had become increasingly interested in the use of algorithms and powerful evolutionary parameters in sonic composing. The idea of organic or other visual growth processes, algorithmically described and controlled with realtime interactive input, and of composing temporal structures that could be stored, replayed, edited, added to ("overdubbed" or "multitracked"), refined, and realized in either audio or video output modalities, based on a single set of processes or composed functions, made an interface of the drawing system with GROOVE's compositional and function-oriented software an almost inevitable and irresistible path to take. It would be possible to compose a single set of functions of time that could be manifest in the human sensory world interchangeably as amplitudes, pitches, stereo sound placements, et cetera, or as image size, location, color, or texture (et cetera), or (conceivably, ultimately) in both sensory modalities at once.

"There are fewer parameters of sound to deal with than there are for images. In a hybrid system such as GROOVE, which used fixed-waveform analog oscillators and computer-controlled analog filters and voltage-controlled oscillators, each "voice" may have frequency, amplitude, filter cutoff, and possibly filter Q, reverb mixture, or stereo location. A visual "voice" may have x, y, and possibly z axis locations, size in each of these dimensions, texture, hue, saturation and value (or other color parameters), plus logical operations on screen contents (write, and, or, exclusive or), and, in the case of a recognizable entity, scaling and rotation variables (for solid objects, roll, pitch and yaw) in two or three dimensions. (I did not deal with transformations of solid objects in this relatively primitive realtime digital visual instrument and composing system.)
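The core idea Spiegel describes can be sketched in a few lines of modern Python (not GROOVE's FORTRAN IV; the function names and parameter ranges here are purely illustrative): a single composed function of time, routed through interchangeable transfer functions, can drive either an audio parameter or a visual one.

```python
import math

def time_function(t):
    """One composed function of time (here, a slow sine sweep, range 0..1)."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * 0.25 * t)

# Transfer functions map the abstract 0..1 signal into a parameter's range.
def to_pitch_hz(v, lo=110.0, hi=880.0):
    return lo * (hi / lo) ** v          # exponential mapping suits pitch

def to_image_size(v, lo=8, hi=256):
    return int(lo + v * (hi - lo))      # linear mapping for size in pixels

# The same function of time is manifest interchangeably in either modality.
for t in (0.0, 0.5, 1.0):
    v = time_function(t)
    print(f"t={t}: pitch={to_pitch_hz(v):.1f} Hz, size={to_image_size(v)} px")
```

The point of the sketch is that the sound and image mappings are independent modules: any time function can be patched to any parameter of either modality.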

"In essence, what this system ultimately provided, for the short time that it ran before its untimely demise, was an instrument for composing abstract patterns of change over time by recording human input into a computer via an array of devices, the interpretation and use of each of which could be programmed, and the data from which could be stored, replayed, reinterpreted and reused. The set of time functions created could be further altered by any transformation one wished to program, and then used to control any parameter of image or of sound (when transferred back to GROOVE's audio-interfaced computer by computer tape or disk). Unfortunately, because the two computers were in separate rooms at the Labs, it was not physically possible to use a single set of recorded (and/or computed) time functions to control both image and sound simultaneously, though in principle this would have been possible.
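The workflow described above can be sketched as a small data structure, again in modern Python under the assumption of sampled control signals (the class and method names are invented for illustration, not taken from the original software): input is recorded as a time function, which can then be stored, transformed by any programmed operation, "overdubbed" with a later pass, and finally routed to any parameter.

```python
class TimeFunction:
    """A recorded control signal: a sequence of sampled values over time."""

    def __init__(self, samples=None):
        self.samples = list(samples or [])

    def record(self, value):
        """Append one sampled input value (e.g. a knob position)."""
        self.samples.append(value)

    def transform(self, fn):
        """Return a new time function with any programmed transformation applied."""
        return TimeFunction(fn(v) for v in self.samples)

    def overdub(self, other):
        """Mix a second recorded pass into this one, sample by sample."""
        return TimeFunction(a + b for a, b in zip(self.samples, other.samples))

# First pass: a ramp drawn with a knob; second pass: a small wiggle on top.
pass1 = TimeFunction([0.0, 0.25, 0.5, 0.75, 1.0])
pass2 = TimeFunction([0.0, 0.1, 0.0, -0.1, 0.0])
mixed = pass1.overdub(pass2)

# Reinterpret the same stored function for a different parameter's range.
as_brightness = mixed.transform(lambda v: max(0.0, min(1.0, v)))
print(as_brightness.samples)  # the mixed ramp, clipped to 0..1
```

Because the stored function is just data, the same `mixed` signal could in principle be sent to a pitch, a stereo placement, or an image size, which is exactly the interchangeability the text describes.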

"Like any other vampire, this one consistently got most of its nourishment out of me in the middle of the night, especially just before dawn. It did so from 1974 through 1979, at which time its CORE was dismantled, which was the digital equivalent of having a stake driven through its art."

-Return to the Video Synth Main Page
-Go on to the next Video Synth
