In May and June I organised the symposium strand of the Cybersonica festival of electronic and interactive music at the Institute of Contemporary Arts in London. I also managed the editing of the proceedings, along with John Eacott and Richard Barbrook.
You can download the abstracts of the papers as a 208 KB PDF file.
Some Loose Notes and Thoughts from the Cybersonica Symposium, 19-21 June 2003, ICA
© David Jennings 02003, licensed under a Creative Commons licence.
Embodiment
David Toop poses (implicitly) the question: what if our hearing were different? What if we could hear other parts of the spectrum? This is part of his grand project, suggesting that we can use sound to rethink our position in space (think of whales). And not just space, but time too. Toop is intrigued, while re-mastering New and Rediscovered Musical Instruments, to discover the sounds of a band or orchestra from the next-door studio captured in the recording he made of his own voice refracted through the inside of a piano. Spaces hold traces of previous times. Space holds time. Toop won’t use words like ‘haunted’ but he repeatedly hints at them. He asks (explicitly): what if we could hear all the sounds we ever heard in our lives all at once? What kind of memory would that be?
I’m wondering: our bodies are diaphragms, membranes. So by vibrating they can act as speakers, microphones, devices for recording and playback (though this vocabulary is not meant to convey any tacky cyborg intent). The membrane translates between sound and movement.
Toop and Eastley’s session began almost as performance art, with Toop reading prepared text, slow and grave, over Eastley’s ambient manipulations. Fitting, then, that it should leave its own echoes later in the symposium, with one of the speakers (Anthony Moore? I can’t remember) reminding us of the phase of sound waves, how one can cancel another out, so the sum of all the sound you’ve ever heard could be quite quiet. Perhaps even the sound of your blood circulating and your nervous system in Cage’s famous anechoic chamber could be cancelled out. Pure silence. And Paul Miller notes the ‘echoes’ of classical civilisation in the fluted columns of the ICA’s Nash Room — home to the symposium — commenting how some things stick around longer than perhaps we realise.
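As an aside, the cancellation idea is easy to demonstrate numerically. The little sketch below is mine, not anything shown at the symposium: it mixes two identical sine waves half a cycle out of phase, and the sum comes out as (near) silence.

```python
# Two identical sine waves, half a cycle (180 degrees) out of phase, sum to silence.
import math

sample_rate = 44100                      # samples per second
freq = 440.0                             # Hz
n_samples = int(sample_rate * 0.01)      # a 10 ms snippet is enough

wave_a = [math.sin(2 * math.pi * freq * n / sample_rate) for n in range(n_samples)]
wave_b = [math.sin(2 * math.pi * freq * n / sample_rate + math.pi) for n in range(n_samples)]

mixed = [a + b for a, b in zip(wave_a, wave_b)]
print(max(abs(x) for x in mixed))        # effectively zero: the waves cancel
```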
Michel Waisvisz’s Hands are the opposite of the traditional laptop performance: his presence as a body playing these hands is unavoidable, and much of the music directly expresses his expansive gestures. Michel speaks of working with puppeteers. Because of the way they are taught to pay close attention to the animation of inanimate materials, they are excellent at drawing out the dramatic potential of instruments such as those created at STEIM (a subset of these is listed at http://www.steim.org/steim/products.html, though not The Hands or the Cracklebox — I bought a cracklebox from Michel for £30: he told me it comes with a lifetime guarantee). But Michel makes it clear he has no argument with laptop performance. As far as he is concerned, anything goes. He makes no claims to privilege his very physical way of generating music and sound over any other.
At the Unfoldings sleeping bag party on midsummer’s night, I felt the physical presence of sound in a different way, with a speaker (embedded in the cushions on the gallery floor) pulsing noises directly against my stomach, noises that were programmed partly by my own movements and those of others on the cushions, and partly by the artists observing what we were doing.
Environment
The other side of the coin from bodies is environment — the space in which our bodies move. Cybersonica’s artists are animating the environment with sound and creating works that draw their animation from the environment. It’s hard not to relate this back to Brian Eno’s 1980s prescriptions for musics situated in the big here and the long now — using art as a means of enriching your awareness of your self in space and time.
Max Eastley uses kinetic sculptures driven by wind, water and electricity, resulting in a number of large outdoor installations, including permanent ones at Capel Manor Gardens, Enfield, and at the Devil’s Glen, Wicklow, Ireland.
Sonic City by Ramia Mazé and Lalya Gaye centres on the interaction between body and environment. You don a ‘wearable’ computer (currently a chunky smock with a Sony Vaio strapped between your shoulder blades, but these things all shrink radically when fully developed), and this plays music back through your half-ear headphones. The music you hear is ‘driven’ partly by sensing the environment around you (buildings, cars, other people, movement) and partly by sensing your own actions (walking, running etc). Presumably if you mimic John Travolta’s walk correctly, you hear Stayin’ Alive.
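I don’t know how Sonic City’s software is actually built, but the core idea of mapping sensor readings onto musical parameters can be sketched very simply. Everything in the sketch below, the sensor names, ranges and mappings, is invented by me for illustration.

```python
# Speculative sketch of a sensor-to-music mapping in the spirit of Sonic City.
# The sensor names, ranges and mappings are invented, not taken from the project.
def map_range(value, in_low, in_high, out_low, out_high):
    """Linearly rescale a sensor reading into a musical parameter range."""
    value = min(max(value, in_low), in_high)
    return out_low + (value - in_low) * (out_high - out_low) / (in_high - in_low)

def music_parameters(sensors):
    """Turn a dictionary of (hypothetical) sensor readings into playback settings."""
    return {
        # faster walking -> faster tempo
        "tempo_bpm": map_range(sensors["step_rate_hz"], 0.5, 3.0, 70, 160),
        # noisier streets -> denser texture
        "layer_count": int(map_range(sensors["ambient_noise_db"], 40, 90, 1, 6)),
        # more nearby movement (cars, people) -> brighter filter
        "filter_cutoff_hz": map_range(sensors["nearby_motion"], 0.0, 1.0, 400, 8000),
    }

# Example: a brisk walk down a busy street
print(music_parameters({"step_rate_hz": 2.2, "ambient_noise_db": 75, "nearby_motion": 0.8}))
```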
Tim Cole gave a short presentation, the larger part of which I missed (thanks to David Toop arriving). Tim developed the original Koan algorithmic composition software, which Brian Eno trumpeted as the ‘next big thing’ in 1996. But Tim didn’t make much money out of it (and neither did Brian, I guess): it’s too marginal and specialist. So it brings you down to earth to see Tim’s company now bought out by http://tao-group.com, and him hawking ways in which his clever software can do dumb things like polyphonic ringtones. But there may still be scope for something more imaginative: Tim’s parent company is involved in the latest mobile device that senses its orientation to the user and adapts its screen accordingly (see press release). Put this orientation-sensing together with John Eacott’s Intelligent Street research and Howard Rheingold’s Smart Mobs multi-user wireless interaction, and there ought to be potential for some interesting interactions between mobile ‘band members’ moving around in social spaces.
We need to think about the location and orientation in space of the listeners as well as the performers. Towards the end of the 1970s Robert Fripp described his production of the Roches’ albums as undermining the ‘tyranny of the centre’ imposed by normal stereo separation. His production switches voices and instruments between left and right channels, so that there is no fixed location where the music sounds ‘as the producers intended it’. Sven Mann and Thom Kubli’s notes for their performance extend this idea with surround sound: “This work offers a situation with no signalled listener alignment, in contrast to the standard cinematic set-up, which assumes an aligned viewer, facing the screen. The listener is encompassed with the acoustic scenery, confronted with an auditory environment, where his/her perception is connected to the sensation of physical space. The sensation of sound being closely related to space gives the listener an involuntary closeness of the auditory situation (immersion).”
Programming
Alex McLean started by reading out a quote, which sadly I didn’t quite catch, that he intended to illustrate the link between programming and movement. He seems to have a real sense of the aesthetics — as well as the politics — of programming. He’s dissatisfied with industry-standard sequencing software, because he wants to get ‘under the bonnet’ — if I understand him correctly — of music making. If he has a musical idea, he prefers to write a program from scratch to embody that idea. And anyway, he says, such a program only takes hackers like him about 30 minutes to write!
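To give a flavour of what writing a program to embody a musical idea might look like (this is my own toy illustration, nothing to do with Alex’s software), here is a small rhythmic idea, spreading a handful of kick-drum hits as evenly as possible across a bar, expressed directly as a few lines of code rather than as notes in a sequencer.

```python
# A toy 'musical idea' written as code (my illustration, not Alex McLean's software):
# spread a given number of hits as evenly as possible across a bar of steps.
def spread_hits(hits, steps):
    pattern = []
    bucket = 0
    for _ in range(steps):
        bucket += hits
        if bucket >= steps:
            bucket -= steps
            pattern.append("x")   # hit
        else:
            pattern.append(".")   # rest
    return "".join(pattern)

kick = spread_hits(3, 8) * 2      # 3-against-8 kick pattern, repeated over 16 steps
hats = "x.x." * 4                 # steady off-beat hi-hats
for step, (k, h) in enumerate(zip(kick, hats)):
    print(f"{step:2d}  kick:{k}  hat:{h}")
```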
While Alex says he doesn’t like to use too much randomness in his programs, John Burton (a.k.a. Leafcutter John) has written a program that generates purely random remixes of tracks (inspired by the problem of spending long periods working on remixes for little payment). If you can make it random, presumably you could, with a little more analysis, derive a number of heuristics to apply for different styles of remix.
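I have no idea what John’s program does internally, but the naive version of the idea, as I imagine it, is just slicing and shuffling; a ‘heuristic’ for a particular remix style could then be nothing more than a constraint on the shuffle. The function names and parameters below are mine, purely for illustration.

```python
# Naive sketch of a random remixer (my guess at the idea, not John Burton's code).
import random

def random_remix(samples, sample_rate, slice_seconds=0.5, seed=None):
    """Cut the audio into equal slices and reassemble them in random order."""
    rng = random.Random(seed)
    slice_len = int(sample_rate * slice_seconds)
    slices = [samples[i:i + slice_len] for i in range(0, len(samples), slice_len)]
    rng.shuffle(slices)
    return [s for piece in slices for s in piece]

def gentle_remix(samples, sample_rate, slice_seconds=0.5, max_shift=2, seed=None):
    """A 'heuristic' variant: only let slices drift a little from their original spot."""
    rng = random.Random(seed)
    slice_len = int(sample_rate * slice_seconds)
    slices = [samples[i:i + slice_len] for i in range(0, len(samples), slice_len)]
    order = sorted(range(len(slices)), key=lambda i: i + rng.uniform(-max_shift, max_shift))
    return [s for i in order for s in slices[i]]

fake_track = list(range(44100 * 4))               # stand-in for four seconds of audio
remixed = random_remix(fake_track, 44100, seed=1)
```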
Using very cheap and basic sensors, John has also created little musical instruments based on switches that he can arrange on the desk next to his PowerBook and operate by passing and/or holding his hand over them. As the switches are so crude, it’s difficult to predict exactly what movements they will sense and when changes in the sounds will be triggered. This introduces a different kind of randomness, or indeterminacy. The indeterminacy of someone who lacks virtuosity on his instrument, has a weak grasp of the beat, or can’t read music!?
In line with their incredibly lucid presentations, Alex, John and Tom Betts (a.k.a. nullpointer) discuss the practice of showing the audience what they themselves see on the screen, getting away from the nagging doubt in some laptop performances that all the music is pre-programmed and the performers are just replying to their email. In Alex’s case the apparent transparency of the practice is only partial unless you are a programmer, for Alex is not moving sliders, tapping out new rhythms or adding new melodies; he is writing lines of code. Writing code still seems to me an essentially unmusical form of performance: the movement of fingers on a piano keyboard embodies and substantiates the music; the fingers on a computer keyboard have no rhythm or musical fluency. Timing is a different matter on a musical instrument: you can’t hit backspace on a piano or guitar. But these concerns are swept aside by the three well-spoken young men. As with Michel Waisvisz, anything goes.
Politics
There’s an implicit politics in the work of Alex, John and Tom. They’re open source boys, closely involved with the London branch of the dorkbot movement, and they have an ambivalent relationship with money. (Dorkbot events and performances have to be free, and Alex’s promotion of his Cybersonica slot to the dorkbot mailing list promises to find ways to help the impecunious side-step the ICA ‘door tax’.)
The work of Alex and his colleagues displaces the normal distinctions between composition, performance and recording. All three activities take place at the level of code. Consequently Alex says he finds it hard to make recordings because he’s not sure when a piece of his work is ‘finished’. Anyway, he’s not sure why you’d want to produce a recording. His work is not available at your favourite indie record store.
Paul Miller’s (DJ Spooky’s) presentation touches on politics a little more directly (as a black American, he’s currently remixing D. W. Griffith’s Birth of a Nation). ‘Empires inspire some form of standardisation,’ he says, citing the standardisation of longitude and time in relation to the Greenwich standard under the British Empire. But these standards are overwritten many times in many ways: ‘as soon as there’s a grid, there’s an impulse to scramble the grid’.
Tying this back to his work, Paul says that DJ-ing is about the language of memory. In the archetypal folk tradition, performers took the songs that their listeners often already knew and re-worked and developed them. With global media, anyone can get the same records anywhere: as a DJ, he takes these songs, re-works and develops them. In his collage style, rhythm is what holds things together.
And finally he kind of explained how he came to take the DJ Spooky name originally, when his identity remained hidden: the Spooky bit is the idea that ‘you press play and there’s no one there.’