I collaborated with sound designer and composer Matt Otto to create the visual component of his thesis work, AlgoRhythm.

From his website:

Inspired by the work of composers like Steve Reich, Lukas Ligeti, Brian Eno, and Conlon Nancarrow, as well as mathematicians like Wacław Sierpiński and Benoit Mandelbrot, I composed a half hour of original music based on the fractal algorithms of Sierpinski's Triangle, Node Counter Sequencing, and a waveform-to-MIDI interpolation that triggered a modern player piano live in the concert hall. I also explored various software programs, including Ableton Live, MaxMSP, and Isadora.

My work is a combination of playback of found imagery and live-generated visuals driven by the audio and MIDI systems. While everything is algorithmically driven, there are elements of randomness that make each performance unique. It was created and programmed in Isadora, which gave me the flexibility to make changes quickly during development and to generate effects live during performance. It is shown on a two-projector system, projecting onto a pair of scrims set upstage and downstage of each other.

See his website for more information, including a PDF description of the project. My own description of the system follows below the video.


The Isadora programming for AlgoRhythm broke each piece of music out into its own scene. The triggering from one scene to the next, and the fades in between, was handled by MIDI control; in fact, the majority of the effects were controlled by MIDI.
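
Isadora is node-based, so there is no code to show directly, but the shape of the scene-triggering logic can be sketched in a few lines of Python. This is only an illustration of the idea, not the actual patch; the scene names, trigger note, and fade length below are placeholders.

```python
# Illustrative sketch only: MIDI notes advance through a list of scenes with a
# crossfade. Scene names, the trigger note, and the fade time are placeholders.

SCENES = ["Piece 1", "Piece 2", "Piece 3", "Piece 4"]   # hypothetical scene names
ADVANCE_NOTE = 60          # hypothetical MIDI note that jumps to the next scene
CROSSFADE_SECONDS = 2.0    # hypothetical fade length

class SceneSequencer:
    def __init__(self, scenes):
        self.scenes = scenes
        self.index = 0

    def handle_note(self, note, velocity):
        """Advance to the next scene when the trigger note arrives."""
        if note == ADVANCE_NOTE and velocity > 0 and self.index < len(self.scenes) - 1:
            outgoing = self.scenes[self.index]
            self.index += 1
            incoming = self.scenes[self.index]
            print(f"Crossfade {outgoing} -> {incoming} over {CROSSFADE_SECONDS}s")

# Simulated note stream standing in for a live MIDI input.
sequencer = SceneSequencer(SCENES)
for note, velocity in [(60, 100), (62, 90), (60, 110)]:
    sequencer.handle_note(note, velocity)
```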

The first piece was the most basic (Fig 1). It was a simple playback cue of a video that was designed and edited to the music.

 

The second piece was the most complicated program-wise, consisting of a main scene with two subsections. The main scene (Fig 2a) controlled the opening sequence of fades, as well as the triggering of the later sections. The first subsection, labeled Part 1, sets the tone for the style of the video design for this piece (Fig 2b). It takes a single file, compiled from many clips of old computer-generated graphics, and randomly chooses a play position within it. The position is updated every time a new note is played over MIDI, and this happens on three layers at once: one channel on the cyc surface and two layers on the US scrim.
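
As a rough illustration of that random play-position idea (again, not the actual Isadora actors), the logic looks something like the sketch below; the clip length and layer names are assumptions made for the example.

```python
# Illustrative sketch: every incoming MIDI note picks a new random position in
# the compiled clip, independently for each of the three video layers.
import random

CLIP_LENGTH_SECONDS = 300.0                               # assumed clip length
LAYERS = ["cyc", "US scrim layer 1", "US scrim layer 2"]  # layers described above

def on_midi_note(note, velocity):
    """Jump each layer to a new random position whenever a note is played."""
    if velocity == 0:          # ignore note-off style messages
        return
    for layer in LAYERS:
        position = random.uniform(0.0, CLIP_LENGTH_SECONDS)
        print(f"{layer}: jump to {position:6.1f}s (note {note})")

# A handful of simulated notes standing in for the live MIDI stream.
for note in [64, 67, 71]:
    on_midi_note(note, velocity=100)
```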

At the next part of the music, Part 2 activates and crossfades in (Fig 2c). This section runs the second and third parts of the piece. In the second part, the random jumping of play position continues, but on an altered clip that adds rotation and zoom changes. Throughout the second part there are three pauses in the music where we wanted a specific image to be shown. Since the play position is otherwise random, this is achieved with three dedicated actors (Specific Position A, B, and C) that force a play position, each triggered by a specific MIDI note.
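
The Specific Position override amounts to a small lookup ahead of the random choice. Sketched in Python purely for illustration, with placeholder note numbers and positions rather than the values used in the show:

```python
# Illustrative sketch: most notes keep the random jumping, but three reserved
# notes force an exact play position so a chosen image lands on the musical
# pauses. Note numbers and positions are placeholders.
import random

CLIP_LENGTH_SECONDS = 300.0
FORCED_POSITIONS = {           # hypothetical note -> position mapping
    36: 12.5,                  # Specific Position A
    38: 148.0,                 # Specific Position B
    40: 261.3,                 # Specific Position C
}

def play_position_for(note):
    """Return a forced position for reserved notes, otherwise a random one."""
    if note in FORCED_POSITIONS:
        return FORCED_POSITIONS[note]
    return random.uniform(0.0, CLIP_LENGTH_SECONDS)

for note in [64, 36, 67, 40]:
    print(f"note {note}: play position {play_position_for(note):6.1f}s")
```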

At the third part, the Colorizer is activated, and random RGB values (again triggered by MIDI note hits) are assigned to the video channels.
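
The Colorizer logic is just as simple in outline: each note hit rolls new RGB values for each channel. A rough sketch, with assumed channel names:

```python
# Illustrative sketch: each MIDI note hit assigns fresh random RGB values to
# every video channel. Channel names are placeholders.
import random

CHANNELS = ["cyc", "US scrim A", "US scrim B"]   # assumed channel list

def on_note_hit(note):
    """Pick a new random colour for every channel on each note."""
    for channel in CHANNELS:
        r, g, b = (random.random() for _ in range(3))
        print(f"note {note:3d} -> {channel}: R={r:.2f} G={g:.2f} B={b:.2f}")

for note in [60, 63, 67]:
    on_note_hit(note)
```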

The third piece is controlled by a single scene, with a variety of Envelope Generators controlling the fade and activation times of various values (Figs 3a and 3b). Again, everything is triggered by MIDI notes. The video transitions from one base geometric clip to another and back again, with a Kaleidoscope effect applied throughout to create the triangular shape. The X and Y positions of the video, as well as the variables of the Kaleidoscope effect, are controlled by MIDI notes on various channels, run through a custom value smoother. A Time Blur effect is also applied at some points.
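
The custom value smoother is essentially an easing stage between the raw MIDI-derived values, which jump abruptly, and the effect parameters they drive. One way to do that is simple exponential smoothing, sketched below; the smoothing factor shown is illustrative, not the setting used in the show.

```python
# Illustrative sketch of a value smoother: each new target value is eased
# toward rather than jumped to, so MIDI-driven parameters move smoothly.

class ValueSmoother:
    def __init__(self, smoothing=0.2, initial=0.0):
        self.smoothing = smoothing     # 0..1, higher = faster response
        self.value = initial

    def update(self, target):
        """Move a fraction of the way toward the new target on each tick."""
        self.value += self.smoothing * (target - self.value)
        return self.value

smoother = ValueSmoother()
for raw in [0.0, 1.0, 1.0, 0.2, 0.2, 0.2]:      # e.g. note values scaled to 0..1
    print(f"raw {raw:.2f} -> smoothed {smoother.update(raw):.3f}")
```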

The final piece is similar to the first, in that it is mostly playback of pre-rendered content (Fig 4). However, the mixing, crossfading, and play control are all timed by Envelope Generators. In addition, a small counter was added to display the current lighting cue (Lx Cue), since the last cues were timed to the music and video position.
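
An Envelope Generator of this kind is essentially a set of breakpoints interpolated over time. For readers unfamiliar with the actor, here is a rough Python sketch of that idea driving an opacity fade; the breakpoint times and values are illustrative only, not the show's actual timings.

```python
# Illustrative sketch of an envelope generator: a list of (time, value)
# breakpoints is linearly interpolated to produce a smoothly timed fade.

def envelope(breakpoints, t):
    """Linearly interpolate a value from (time, value) breakpoints at time t."""
    if t <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return breakpoints[-1][1]

# Fade in over 5s, hold, then fade out between 20s and 30s (placeholder values).
FADE = [(0.0, 0.0), (5.0, 1.0), (20.0, 1.0), (30.0, 0.0)]
for t in [0, 2.5, 10, 25, 30]:
    print(f"t={t:4.1f}s  opacity={envelope(FADE, t):.2f}")
```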