a work in progress
Graphics programming and architecture
My long-term interest has been to create and solve problems encountered in the visualization of music as an abstract phenomenon. This self-authored system for translating extant compositions into visual performance involves information theory with a complex layering of systems. The artworks which have emerged from this process have embodied principles of Intermedia, as defined by Dick Higgins, the late avant-garde theorist and Fluxus artist. Intermedia is a completely different concept from multimedia, although it can be included in a multimedia environment. With multimedia, content is presented in more than one medium simultaneously; with intermedia, structural elements or syntax from more than one medium are combined into one.
I will describe the journey from the 2D world of my earlier visualizations of music into the 3D, fully immersed, stereoscopic world of the CAVE, a virtual reality theater originally developed by the Electronic Visualization Laboratory at the University of Illinois at Chicago. This project has received support from NCSA at the University of Illinois at Urbana-Champaign, including time in their CAVE with a team led by Donna Cox and also funds for software support. SGI has given hardware support, and EAI has given Sense8's WorldToolKit. Ars Electronica provided the initial research and development money. Robert Putnam from the Scientific Computing and Visualization Group at Boston University is doing the interactive, kinetic sound placement and 3D localization of sound. Absolute Reality Corp. is doing all of the graphics programming and 3D modeling.
In 1996 I began to work on the visual translation of Quanta and Hymn to Matter, a large-scale symphonic and choral work from the 1970s by the American composer Dary John Mizelle. This work is heterophonic and timbre-based in structure. Heterophony is a term which describes a non-harmonic polyphony. It has been a widely used principle in Western music in the second half of the 20th century, but has always played a large part in the genres of primitive, folk and non-Western music. Timbre is the quality of a tone played on a specific instrument which distinguishes it from the same tone on another instrument. It also describes the different kinds of sounds produced by playing one instrument in different ways. These qualities are a product of the harmonics generated by a tone.
|Instrument timbre images|
|Vowel chart and vocal timbre|
Quanta and Hymn to Matter has a full chorus, complete with eight multi-phonic soloists. I decided that the differences between soprano, alto, tenor and bass are issues of pitch range. In tracking the timbre changes produced by the human voice I went back to the system I created with vowel sounds for my work on Kurt Schwitters' Ursonate. This system was a transformation of the Bruckner system, which showed harmonic movement and quality. First I divided the sixteen vowel sounds in German into rounded and unrounded groups. The rounded values were assigned to the list of colors from the cool side of a twelve-step color wheel, with unrounded values coming from a warm list of colors. As the tongue moves down in the mouth when pronouncing different vowels, the colors move from warmer to cooler and lighter to darker, drawn from the appropriate list. The front/back location in the mouth for the production of a vowel determines what percentage of complementary color is added to the final color mixture. For Quanta and Hymn to Matter I extended the chart to include English vowels which are not present in the German list. Because the timbre of voice parts changes with every vowel change, vocal parts have very frequent color changes. Certain extended vocal techniques, such as tensed octaves, which sound much like crush tones in the strings, take a color addition similar to crush tones. Other extended vocal techniques are to be visualized through collage techniques.
The timbre color changes in saturation, becoming lighter or darker with changes in dynamics: louder has more saturation and softer has less. This leads directly to the question of what this complex system of color is applied to, that is, what surface it modifies.
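As a concrete illustration, the vowel-to-color mapping described above can be sketched in code. This is a hypothetical reconstruction: the hue lists, the feature scales, and the mixing weights below are my own illustrative assumptions, not the actual values of the system.

```python
import colorsys

# Illustrative hue lists (degrees on a twelve-step wheel); the real
# system's cool/warm assignments are not reproduced here.
COOL_HUES = [210, 180, 240, 270, 150, 300]   # for rounded vowels
WARM_HUES = [0, 30, 60, 90, 330, 120]        # for unrounded vowels

def vowel_color(rounded, height, backness, dynamic):
    """Sketch of the mapping: rounded -> cool list, unrounded -> warm list.

    height, backness, dynamic are normalized 0.0-1.0 features
    (tongue height, front/back position, loudness).
    """
    hues = COOL_HUES if rounded else WARM_HUES
    # tongue height selects a hue from the appropriate list
    hue = hues[min(int(height * len(hues)), len(hues) - 1)]
    # front/back position controls how much complementary hue is mixed in
    comp = (hue + 180) % 360
    mixed = (1 - backness) * hue + backness * comp
    # tongue height also shifts the result lighter or darker
    lightness = 0.35 + 0.4 * (1 - height)
    # dynamics drive saturation: louder = more saturated
    saturation = 0.2 + 0.7 * dynamic
    r, g, b = colorsys.hls_to_rgb(mixed / 360.0, lightness, saturation)
    return (round(r * 255), round(g * 255), round(b * 255))
```

The same vowel sung softly and loudly then yields the same hue at two different saturations, which is the behavior the dynamics rule above calls for.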
Landscape environment and visual textures
|Landscapes for instrument families|
Each landscape has its own specific time of day and therefore light. I have preserved this quality in VR space by giving only a full overhead light for illumination of the modeled drawings, with a black sky. This multi-time variation is important to preserve in this otherworldly, timeless space.
All of the 3D models of landscapes have been and are being made by Richard Rodriguez at Absolute Reality Corp. (We have completed 5 of the 8 landscapes at this time.) All of the texture maps are made by my own hand. What we discovered in this process is that I needed to draw all textures outside of the computer environment. In each instance Rodriguez would create a flat version of the wire frame, which I would print out in a large, tiled version, and I would make the texture map at a large scale on my drawing table with real pencil. The different landscape formations required different techniques, which I will describe. I found that it was better to cut up the original texture drawing and scan it in directly in order to keep a high resolution. All landscapes are units in a world where they can be mirrored together infinitely, or as many times as needed to accommodate a specific piece of music. Creating the texture map automatically by computer from the original drawing would not have been possible, since each drawing is a two-dimensional view from a single perspective.
The model for Strings was definitely the most complex piece, and also the largest landscape. Because it is a real place, we were able to begin with a topographic map of Fonts Point and make an accurate drawing of the 3 washes which are the basis for the land formations. Rodriguez used AnimaTek World Builder 1.0 to build it initially, but the 3D geometry was far too large at 500,000 polygons. He then created a height map from it and applied this as a displacement to a lower-resolution geometry, which came in at 4,000 polygons. It is made from 24 equal parts (8x4), each one having a resolution of 1024 x 1024. The final version was done in 3D Studio Max 2.5. Our aim for the CAVE environment is a very low polygon count with a high-resolution texture map.
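The height-map displacement step described above can be sketched as follows. This is a minimal illustration only, assuming a plain 2D grid of height values and nearest-neighbor sampling; it stands in for, and does not reproduce, the actual World Builder / 3D Studio Max pipeline.

```python
# Sketch: sample a dense height field at a much coarser resolution to
# displace a low-polygon vertex grid, as in the 500,000 -> 4,000
# polygon reduction described in the text. Grid sizes are illustrative.

def displace_grid(heightmap, out_w, out_h, scale=1.0):
    """Sample a 2D height field (list of rows) onto a coarse vertex grid."""
    src_h, src_w = len(heightmap), len(heightmap[0])
    verts = []
    for j in range(out_h):
        for i in range(out_w):
            # nearest-neighbor sample from the dense height map
            sx = i * (src_w - 1) // max(out_w - 1, 1)
            sy = j * (src_h - 1) // max(out_h - 1, 1)
            verts.append((float(i), float(j), heightmap[sy][sx] * scale))
    return verts
```

The coarse grid keeps the large land forms while discarding detail that the high-resolution pencil texture map restores visually, which is the trade-off the CAVE's real-time budget demands.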
In order for me to make an orthographic texture map which would cover all surfaces of the model, Rodriguez gave me screen shots of the model from 4 different angles of view. The last view was straight overhead; this one I tiled and printed out to fit my drawing table, put tracing paper over it, and drew all of the water-carved desert hill textures.
Sandpaper texture has become the ground cover for everything without another specific texture. It mirrors itself on all 4 sides so that it can be repeated seamlessly, and it was modeled only as a bump map; the rocks portrayed are so close to the ground that it would be useless to make actual objects of them. Where sandpaper runs into other textures, Elastic Reality was used to make the blending gradient.
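The four-sided mirroring trick can be sketched in a few lines. This assumes a texture represented as a simple 2D grid of values and only illustrates the principle, not the tool actually used.

```python
def mirror_tile(tile):
    """Mirror a texture on both axes so it repeats seamlessly.

    Returns a block twice as wide and tall: the original, its horizontal
    flip, its vertical flip, and both flips together, so every edge of
    the result matches the opposite edge when the block is repeated.
    """
    right = [row[::-1] for row in tile]               # horizontal mirror
    top = [list(a) + list(b) for a, b in zip(tile, right)]
    bottom = top[::-1]                                # vertical mirror
    return top + bottom
```

Because each edge of the mirrored block is identical to the edge it will abut when tiled, the ground cover can extend indefinitely without visible seams.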
Because the Vocals landscape exists on the ends of the VR space, its modeling is modified. The original drawing was cut out, and points were placed on these sections and pushed in and out, representing spatial points in the original rock structure. This allows for a very low polygon count, always a goal when working in a real-time VR situation like the CAVE. Caligari trueSpace 4.2 was used for both Vocals and Woodwinds because it permits polygons with more than 3 points, a feature not common to all 3D programs and one which supports low polygon numbers. Some of the rock towers in Woodwinds were modeled by themselves out of cylinders, unwrapped into flat versions of the wire frames, and sent to me. I printed out 12-15 inch high frames, over which I made an unwrapped texture map on my drawing table, then scanned it back into the computer, where it was wrapped around the model. Rock groups were made into planar models, mirrored on the backside and with a modified modeling as in the Vocals.
In all of the musical visualizations that I have made during the last 20 years there has been a very close relationship between the images, through which the structure of the music is viewed, and the music itself. My work has always used landscape and/or architectural images which related structurally, chronologically, or, in the case of my visual performance of Kurt Schwitters' Ursonate, through the large-scale installation art works of Schwitters himself. In the case of Mizelle's music, he had created his composition using the text of Teilhard de Chardin's poem. Therefore I too went to Hymn to Matter for direction in creating a meaningful context in which to perform Quanta and Hymn to Matter. It turns out that this is a far more flexible world, containing elements which directly connect to other music, by other composers. I decided to keep all of the landscapes in gray scale so that the only color you see takes your eyes directly to the structure above the landscape, which is created by and as the music plays. This is a world about different lines and shapes, and a flexible musical environment. The individual landscape units are each connected to a family of instruments, so compositions using instruments belonging to these groups can be performed here, deriving meaning from the images embedded in their structure and in the environment from which they came. For instance, I could see music by Edgard Varèse in this world.
Problems with Quanta; Solutions with Q
At this point Charlie Morrow, who composes in MIDI and sound files in a state-of-the-art sound studio, entered the project. He played for me a dynamic 8-minute piece, including kinetic sound, which is both heterophonic and timbre-based. Morrow originally composed it to be played in collaboration with Jerry Rothenberg jubilantly reciting his poem Paris Elegies. This work has literally been plugged into and played in the desert environment originally created for Q&HM, and in this new incarnation is called Q. It has 41 tracks of sound, including woodwinds, strings, brass and a very large representation of different percussion instruments. These include bamboo wind chimes, wood blocks, xylophone, snare drums, timpani, bass drum, triangle, tubular chimes, thunder sheet, rocks, sandpaper blocks, glass harmonica, glass wind chimes, piano, celeste, and kalimba.
The music is represented by a computer file of MIDI data. Each note, as it is played, creates a rectangular strip located in a specific place over the landscape, determined by time and pitch. Pitch is the y axis and determines how high the strip is placed in space. Time is the x axis, going from left to right, so time determines where the strip begins and ends in space. The initial attack of the note, which is how loud it begins, determines the width, or z axis, of the rectangular strip, which remains constant until the note ends. Further changes in loudness or softness are reflected in the saturation of the timbre color which is part of the surface of the strip. Each strip is also embedded with a texture map taken from the original pencil drawings of the landscapes. For instance, all string strips receive images taken from an endless line of mirrored desert floor hills and washes called Strings. The texture is taken from higher and lower positions in the drawing based on pitch information, as in my 2D wall visualizations. The new element coming from virtual reality is that the strips hang in 3D space according to pitch, with chords appearing as stacks of strips. These geometrical forms are created as the music plays, synchronizing the sound and visual representations. Many paintings are created which together form a sculpture existing in 3D space instead of one flat image.
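The note-to-strip mapping just described can be sketched as a small data transformation. This is a hedged sketch only: the NoteStrip fields, the axis scale factors, and the linear velocity mapping are illustrative assumptions, not the project's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class NoteStrip:
    x_start: float     # time axis: where the note begins
    x_end: float       # time axis: where the note ends
    y: float           # pitch axis: height above the landscape
    width: float       # z axis: set by the initial attack velocity
    saturation: float  # follows the note's dynamic level

def strip_from_note(start_s, dur_s, pitch, velocity,
                    time_scale=1.0, pitch_scale=0.1):
    """Map one MIDI note (pitch 0-127, velocity 0-127) to a strip.

    time_scale and pitch_scale are hypothetical world-space factors.
    """
    return NoteStrip(
        x_start=start_s * time_scale,
        x_end=(start_s + dur_s) * time_scale,
        y=pitch * pitch_scale,
        width=velocity / 127.0,       # louder attack -> wider strip
        saturation=velocity / 127.0,  # initial dynamic level
    )
```

A chord then becomes several strips sharing the same x interval at different y heights, which is exactly the "stack of strips" the text describes.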