Tue 02 November 2010

An object oriented approach to light show sequencing

There is a large hobbyist community of people putting together light shows synchronized to music, mostly built around Christmas string lights during the holidays.

The lights are controlled by a hardware board that takes signals from a computer and operates multiple dimmers to control brightness. These dimmers vary in channel count, but can be chained together to add capacity; it is not unusual to have hundreds of channels in a residential-size show.

The sequencing of these dimmer channels is generally done with software that divides the show into time chunks represented as columns, with each channel as a row, resulting in a grid. To sequence a show, a light intensity level must be set for each chunk of time, usually many per second, across many channels. This grid can be huge and time-consuming to fill, and it isn't an intuitive canvas for creative experimentation in response to the music. I thought there should be a better way.

I'm not alone. The problems with the grid, and the hope for a better way, are well outlined in a piece called "[The Death of the Grid](http://www.planetchristmas.com/magazine/Spring2010/index.htm#/18/)", which I did not discover until after I started down this path.

Other approaches that try to abstract away the grid include the in-development software [Prancer](http://diylightanimation.com/index.php?topic=3290.0) and the commercial package LightShow Pro, which takes an interesting approach of mapping video images onto a light layout, called [layer transitions](http://vimeo.com/11969010).

Being on a Mac and a Python programmer, I wanted to come up with an easy way to create expressive sequences on my own gear without too much work.

There is an existing protocol for capturing musical expressiveness via computer: MIDI, which was designed to turn the act of playing a musical instrument into a digital signal. It felt like a good input protocol to adopt, since there is plenty of hardware supporting it, and it was built to capture an expressive range of actions. That input would need to be converted into DMX, the protocol that controls the lights (there are other lighting protocols, but DMX seems to be the main non-proprietary standard, and the smartest to work with).

Each protocol has its own features, and they are not a perfect match, but they are good enough. MIDI has 16 channels, each with 128 notes, plus a whole mess of controller messages (see [here](http://www.midi.org/techspecs/midimessages.php)), all with a data value range of 0-127. DMX consists of a universe of 512 channels, each with values of 0-255.
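
To make the mismatch concrete, here's a minimal sketch of scaling a 7-bit MIDI data value onto the 8-bit DMX range. The function name is my own, not from any existing library:

```python
def midi_to_dmx(value: int) -> int:
    """Map a 7-bit MIDI data byte (0-127) onto the DMX range (0-255)."""
    if not 0 <= value <= 127:
        raise ValueError("MIDI data values are 7-bit (0-127)")
    # Integer scaling so that full MIDI (127) maps to full DMX (255)
    return value * 255 // 127

print(midi_to_dmx(0))    # 0
print(midi_to_dmx(64))   # 128
print(midi_to_dmx(127))  # 255
```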

What I wanted to write was something that abstracts away the DMX channels involved in a set of lights, creating light objects (each of which may consist of one or more physical light sets) that can be controlled expressively through MIDI.

![Program Structure](https://ptone.com/dablog/wp-content/uploads/2010/11/program_structure.png)
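
As a rough illustration of that structure, here is a hedged sketch of what a light object might look like; the class and method names are hypothetical, not the actual project code:

```python
class Light:
    """One logical light, possibly spanning several DMX channels."""

    def __init__(self, name, channels):
        self.name = name
        self.channels = channels  # list of DMX channel numbers (1-512)
        self.intensity = 0.0      # normalized 0.0-1.0

    def set_intensity(self, level: float):
        """Clamp and store a normalized intensity."""
        self.intensity = max(0.0, min(1.0, level))

    def dmx_values(self):
        """Render this light into (channel, value) pairs for a DMX frame."""
        value = round(self.intensity * 255)
        return [(ch, value) for ch in self.channels]

# Two physical light strings driven as one logical light:
porch = Light("porch", channels=[1, 2])
porch.set_intensity(0.5)
print(porch.dmx_values())  # [(1, 128), (2, 128)]
```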

I'm not the first to consider combining MIDI and DMX; the most comprehensive review is [this paper](http://www.innovateshowcontrols.com/support/downloads/midi-dmx.pdf) (but also see [1](http://response-box.com/gear/decabox-midi-to-dmx-bridge/), [2](http://www.anyma.ch/2007/research/udmx-midi-interface-10/), [3](http://www.chromakinetics.com/DMX/miniStageConsoleMac.html), and I'm sure others). I felt there was still plenty of room for a higher level of abstraction above the basic mapping of MIDI notes to DMX channels.

Here are two quick screen captures showing the start of this project in action (if you have QuickTime, you can copy the URLs and open them directly in QuickTime Player).

The first shows just the basics of a simple light setup:

[A single light](http://www.ptone.com/lights_demo/single%20light.mov)

The second shows the addition of a couple more lights, and a couple of methods for creating simple groups:

[Three Lights](http://www.ptone.com/lights_demo/three%20lights.mov)

Here is more information about the [Attack/Decay/Sustain/Release (ADSR) envelope](http://en.wikiaudio.org/ADSR_envelope).
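
To show how an envelope like this could drive light intensity, here is a minimal sketch of an ADSR function evaluated at time t. Parameter names and defaults are my own assumptions; times are in seconds and levels are normalized 0.0-1.0:

```python
def adsr(t, attack=0.1, decay=0.2, sustain=0.6, release=0.5, note_off=1.0):
    """Envelope level at time t for a note released at note_off."""
    if t < 0:
        return 0.0
    if t < attack:                      # ramp 0 -> 1
        return t / attack
    if t < attack + decay:              # fall 1 -> sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_off:                    # hold at sustain level
        return sustain
    if t < note_off + release:          # fall sustain -> 0
        return sustain * (1.0 - (t - note_off) / release)
    return 0.0

# Sampling at 20 Hz yields per-frame intensities for a light,
# which can then be scaled to DMX values:
levels = [adsr(frame / 20.0) for frame in range(40)]
```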

The remaining features I have implemented or planned are:

* combining lights or light groups into a "sequence" (not to be confused with a show sequence; more of a chase sequence). These sequences have parameters like duration, speed, and loop patterns.

* Calculating RGB values from hue, and blending between hues over time or space (see the sketch after this list).

* Adding physics-style tween motion, such as acceleration, bounce, or elastic easing, to sequence groups.
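
As a taste of the hue-blending and tween ideas above, here is a small sketch using only Python's standard library. The function names are mine, and a real implementation would support many more easing curves:

```python
import colorsys

def blend_hues(hue_a, hue_b, t):
    """Linearly interpolate hue (0.0-1.0) at t and return 8-bit RGB."""
    hue = hue_a + (hue_b - hue_a) * t
    r, g, b = colorsys.hsv_to_rgb(hue % 1.0, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

def ease_in_quad(t):
    """Quadratic acceleration: slow start, fast finish, for t in 0.0-1.0."""
    return t * t

# Fade red (hue 0.0) toward blue (hue 2/3), accelerating over 5 steps:
for step in range(5):
    print(blend_hues(0.0, 2 / 3, ease_in_quad(step / 4)))
```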

The idea will be to record multiple "live sessions" and then merge them. The complexity may prevent realtime processing of the MIDI track into DMX, so a MIDI file would instead be compiled against a show setup to produce either a Vixen sequence or some other similar DMX "player file" (really just binary chunks of 512-byte DMX data).
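
Here is a sketch of that player-file idea, assuming the simplest possible layout of one fixed 512-byte universe snapshot per time chunk (this layout is my own illustration, not Vixen's actual format):

```python
FRAME_SIZE = 512  # one full DMX universe snapshot per frame

def write_show(path, frames):
    """frames: iterable of 512-byte bytes objects, one per time chunk."""
    with open(path, "wb") as f:
        for frame in frames:
            assert len(frame) == FRAME_SIZE
            f.write(frame)

def read_frame(path, index):
    """Fixed-size frames mean playback is just seeking to an offset."""
    with open(path, "rb") as f:
        f.seek(index * FRAME_SIZE)
        return f.read(FRAME_SIZE)

# A 2-second fade on channel 1 at 20 frames per second:
frames = [bytes([round(i / 39 * 255)] + [0] * 511) for i in range(40)]
write_show("show.dmx", frames)
```

Because every frame is the same size, a player only needs to stream the bytes out at the show's frame rate.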

This will eventually all be open sourced. I'm not planning any GUI at the moment; I hope to come up with a clean API that makes it pretty easy to write the scripts.

