2010-11-12

Sound-as-Space

The Spatial Sound with SuperCollider workshop has come to a close. Thank you to the folks at CCRMA and GAFFTA for hosting!

Before expanding on some of the techniques that came up in the workshop, I think it would be useful to take a few steps backward and explain what it means to spatialize sound.

We can think of sound as a process that occurs in time, e.g., I listened to the Allegro molto of Rachmaninoff's Symphony No. 2 in E minor, Op. 27 for nine minutes and 38 seconds on my iPod. Yet even when listening on headphones, there is a spatiality. Sound is always resonating in a space, even if it is the tiny space between the headphones and the inner ear.

We can also think of sound as an object--a source--that produces vibrations in space, e.g., I can hear a garbage truck outside my house and up the street. A great deal of sound spatialization is about decoding the cues we use to construct the location of a sound's source.

Sound will reach the ears at slightly different times.
The sound is then perceived to have a directionality.

Just as we need two eyes to perceive depth, we need two ears to locate sound. The phase difference that arises when a sound reaches one ear slightly before the other (the same wave arriving out of sync at each ear) is one method our brain uses to locate a sound. Other techniques, which relate to binaural hearing, depend on the particular form of our outer ear and the acoustic shadows cast by our head when sounds are to the left, to the right, or at some elevation above the horizontal plane.
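
To hear this time-difference cue in isolation, here is a minimal SuperCollider sketch of my own (not workshop code): the same click is sent to both ears, with one side delayed by up to roughly 0.6 milliseconds, a figure I am assuming as a typical head-width delay. On headphones, the click drifts left and right as the mouse moves.

    (
    {
        var click = Impulse.ar(2);                  // a repeating test click
        var itd = MouseX.kr(-0.0006, 0.0006);       // inter-ear offset in seconds
        [
            DelayC.ar(click, 0.001, itd.max(0)),    // left ear, delayed when itd > 0
            DelayC.ar(click, 0.001, itd.neg.max(0)) // right ear, delayed when itd < 0
        ]
    }.play;
    )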

So far these spatialization techniques refer to a source-listener relationship. A sound--a pressure wave--is emitted from a source, and after a period of time and a certain reduction in signal strength it reaches our ears, where it is perceived to have a distance and a direction relative to the listener.
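
That relationship is easy to caricature in code. The sketch below, assuming a simple 1/r amplitude falloff and a 343 m/s speed of sound, delays and attenuates a click according to a mouse-controlled distance; all the numbers are illustrative.

    (
    {
        var dist = MouseY.kr(1, 50);                    // source distance in meters
        var src = Decay2.ar(Impulse.ar(1), 0.01, 0.2) * PinkNoise.ar;
        var arrived = DelayC.ar(src, 0.2, dist / 343);  // time-of-flight delay
        Pan2.ar(arrived / dist, 0)                      // 1/r reduction in strength
    }.play;
    )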

Yet there is another, more interesting (or at least more architectural) way to think about spatializing sound, and that is to think of sound as the space between source and listener.  So rather than thinking about what emits the sound (the object) or what the sound is perceived as (by the listener), we simply think about sound as the vibration of something within a space.  That 'something' is usually air, but anything which can vibrate--that is, all matter--can transmit a sound.  There is space inside of a chunk of lead.  But I digress: the important point here is that to become aware of sound as a field condition, one must divorce oneself from the position of an observer listening to a source of sound.

There is a set of questions situated around how this spatial field is constructed and inhabited. If we are talking about producing this space virtually--about capturing this space in some kind of digital format--how do we go about doing that? And once it is captured, how is it reproduced? And finally, how do we understand and navigate a virtual space projected into a real space?

The 8-channel speaker setup at the GAFFTA workshop.
The dashed line indicates a virtual space reconstructed by the listener.
In the case of our workshop, we had an eight-speaker array on which we listened to some pretty awesome pieces of sound. (Some were composed by one of the workshop's co-teachers, Fernando Lopez-Lezcano, for even larger arrays of speakers.) I think a lot of the stuff we listened to can be called soundscapes. These sonic landscapes were projected into our space and moved around the room via the speakers, though they did not necessarily construct a geometrically specific virtual space.

Listening in this 8-channel environment is ideal at the sweet spot. Located at the center of the array, this point provided the most accurate spatialization, because the speaker feeds are timed so that sound from every speaker reaches this infinitely small point in space at the same moment.

One thing I am curious about is how to move that sweet spot. Suppose you have a crowd of people within the array, and you want to project, say, a claustrophobic space onto certain individuals within the crowd. I posed this possibility to the workshop leaders, and we decided that it is possible to move the sweet spot around by adjusting the time delays of the speakers (a rough calculation of these delays is sketched below). As long as the array is calibrated precisely, there could be a moving location inside the array that receives sound from all the sources at the same time. Imagine running around with your ears out, trying to follow this invisible point in space!

A virtual space is shuffled around inside of the speaker array.
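
Here is a back-of-the-envelope version of that delay calculation: given eight speakers on a circle, compute the extra delay each feed needs so that sound from every speaker arrives at a chosen point at the same instant. The two-meter radius and the target point are my own assumptions, not the GAFFTA setup.

    (
    var radius = 2.0;             // assumed array radius in meters
    var c = 343.0;                // speed of sound in m/s
    var target = [0.5, -0.3];     // desired sweet spot, meters from center
    var positions = 8.collect { |i|
        var angle = 2pi * i / 8;
        [radius * cos(angle), radius * sin(angle)]
    };
    var distances = positions.collect { |p|
        ((p[0] - target[0]).squared + (p[1] - target[1]).squared).sqrt
    };
    // delay the nearer speakers so every arrival lines up with the farthest one
    var delays = (distances.maxItem - distances) / c;
    delays.postln;                // per-speaker delays in seconds
    )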


It gets much more complicated, apparently, when you try to move this 'claustrophobic space' around along with the sweet spot. This would require decoding the space inside the loudspeaker array. I'm still not sure what that means, but it sounds like fun to me. The workshop coordinators, however, didn't think it was possible. The problem, I believe, is that we are dealing with simple trigonometry when we project a sound into a point in space (the location of the listener). An architectural fragment of spatial sound, with its three dimensions, is not a point in space--it is a field condition. And a single speaker is not capable of mapping this space; it depends upon the whole array of speakers to do so.

So, how can we project a spatial field into another spatial field?  Overlapping soundscrapers.  I'm working on it.

2010-11-09

A Machine for Slicing Rooms

I am almost finished with a four-day workshop hosted by the Gray Area Foundation for the Arts (GAFFTA) in the Tenderloin district of San Francisco. The title is "Spatial Sound with SuperCollider". In this workshop we have concerned ourselves with the manifold possibilities for taking sound and expanding it beyond the standard stereo listening environment. This spatialization of sound is distinct from widely available standards such as Dolby Surround in that we are digging into the components that reproduce spatial effects. The program, SuperCollider, allows an incredible amount of control over how a recorded sound, even a mono recording, can be reproduced for the listener with rich three-dimensionality.
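
For a flavor of that control, here is a minimal sketch of my own (not workshop code) that takes a mono signal and sweeps it around an eight-channel ring with SuperCollider's PanAz panner; the server must be configured for eight outputs, and pink noise stands in for a recording.

    (
    {
        var src = PinkNoise.ar(0.2);    // stand-in for a mono recording
        var angle = LFSaw.kr(0.1);      // slow sweep around the circle, -1 to 1
        PanAz.ar(8, src, angle)         // equal-power pan across the 8-channel ring
    }.play;
    )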

In the workshop we have ooh'd and aah'd over the complexity involved in producing something like third-order ambisonics, where a sound source appears to cease emerging from the speakers and instead has a presence of its own in the room.

(red represents the positive phase and green represents the negative phase of a sound)

In the above image, 16 different spherical dispersion models--the spherical-harmonic components of third-order ambisonics, of which there are (3+1)^2 = 16--are used to reproduce a single three-dimensional space. The math to assemble these models and then disperse them into a multi-channel speaker array is heavy though not advanced, being based entirely upon sines and cosines. These models for the dispersion of sound can be seen as the atomization of a space. Each globe represents a directionality that, when combined with the other 'atoms' and decoded to a speaker array of your design, will produce a precise virtual environment. Sound is not merely a temporal phenomenon; we are hearing space when we listen to a spatialized recording.
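
Those sines and cosines are easiest to see at first order, where the sound field reduces to four components; the hedged sketch below encodes a mono source into classic B-format (third order extends the same recipe to all 16 components). The 0.707 weighting on W follows the usual B-format convention.

    (
    {
        var src = PinkNoise.ar(0.2);            // mono source to be encoded
        var azim = MouseX.kr(-180, 180).degrad; // azimuth in radians
        var elev = MouseY.kr(-90, 90).degrad;   // elevation in radians
        var w = src * 0.707;                    // omnidirectional component
        var x = src * cos(azim) * cos(elev);    // front-back figure-of-eight
        var y = src * sin(azim) * cos(elev);    // left-right figure-of-eight
        var z = src * sin(elev);                // up-down figure-of-eight
        [w, x, y, z]    // B-format signal, ready to decode to a speaker array
    }.play;
    )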

I find the analog approaches to spatializing sound--reverb machines in particular--just as interesting as the digital models. And for those sound engineers and musicians who are nostalgic for the sound these analog reverb machines once produced, there are digital filters that simulate all the distortion and 'badness' these setups are known for.

Plate reverb is particularly interesting because it is a two-dimensional machine that can produce the reverberation characteristics of a room, deceitfully producing a three-dimensional space. It works like this: an electro-mechanical transducer produces vibrations in a large metal plate held in tension; a pickup (or two pickups, for stereo) then records the new sound, which has taken on the reverberation properties of the sheet of metal. Check out this video, which demonstrates some plate reverb effects.

The EMT 240 uses a 12" square gold foil sheet in tension to simulate the reverb in a room.

The plate is like an architectural slice of a room, where introduced sound produces wave-effects in the tensioned metal sheet just as it would in a real space.  Imagine then a whole building section represented inside a plate reverb machine.
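
The digital simulations mentioned earlier model that physics in earnest. As a much cruder stand-in, here is a minimal Schroeder-style reverb sketch: parallel comb filters for the dense reflections, serial allpasses for diffusion. It is not a plate model, and all the constants are my own assumptions.

    (
    {
        var dry = Decay2.ar(Impulse.ar(0.5), 0.01, 0.1) * PinkNoise.ar;
        // four parallel combs with mutually prime delay times (seconds)
        var wet = Mix.fill(4, { |i|
            CombC.ar(dry, 0.05, [0.029, 0.037, 0.041, 0.043][i], 2.0)
        });
        // two serial allpasses smear the comb output into a diffuse tail
        2.do { |i| wet = AllpassC.ar(wet, 0.01, [0.005, 0.0017][i], 0.5) };
        Pan2.ar(dry + (wet * 0.3), 0)
    }.play;
    )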



Scientifically, we might be able to produce precise reverb filters of building sections, just as the physicist Wallace Sabine did for the New Theatre, New York, in 1913. These plans and sections, modeled using the Schlieren method, describe the complex behavior of reverberant sound waves.
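
Sabine's reverberation formula itself is simple enough to compute by hand: T60 = 0.161 V / A, where V is the room volume in cubic meters and A is the total absorption in metric sabins. A quick sketch with made-up room numbers:

    (
    var volume = 12 * 8 * 4;      // a hypothetical 12 x 8 x 4 m room, in m^3
    var absorption = 60;          // assumed total absorption in metric sabins
    var t60 = 0.161 * volume / absorption;
    t60.postln;                   // -> roughly 1.03 seconds
    )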

We are covering lots of fascinating techniques, including HRTF (Head-Related Transfer Function) rendering. Each of these systems requires the decoding of a mechanism that tells the brain where to find a sound in space. It's a highly architectural exercise.
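
One common route to HRTF rendering in SuperCollider is convolution: filter a mono source through a measured left-ear and right-ear impulse response. A hedged sketch follows; the HRIR file paths are placeholders, not files from the workshop.

    // load the impulse responses first (placeholder paths)
    ~hrirL = Buffer.read(s, "hrir_left.wav");
    ~hrirR = Buffer.read(s, "hrir_right.wav");

    // then convolve a mono source with each ear's response
    (
    {
        var src = PinkNoise.ar(0.1);    // stand-in mono source
        [
            Convolution2.ar(src, ~hrirL, framesize: 512),
            Convolution2.ar(src, ~hrirR, framesize: 512)
        ]
    }.play;
    )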