The Spatial Sound with SuperCollider workshop has come to a close.  Thank you to the folks at CCRMA and GAFFTA for hosting!

Before expanding on some of the techniques that came up in the workshop, I think it would be useful to take a few steps backward and explain what it means to spatialize sound.

We can think of sound as a process which occurs in time, e.g. I listened to Rachmaninoff's Symphony No. 2 in E Minor, Op. 27, Allegro molto, for nine minutes and 38 seconds on my iPod.  Even when listening on headphones, there is a spatiality.  Sound is always resonating in a space, even if it is the tiny space between the headphones and the inner ear.

We can also think of sound as an object--a source--which produces vibrations in space, e.g. I can hear a garbage truck outside my house and up the street.  A great deal of sound spatialization is about decoding the ways that we construct the location of a sound's source.

Sound reaches the two ears at slightly different times, and is then perceived to have a directionality.

Just as we need two eyes to perceive depth, we need two ears to locate sound.  The time and phase difference between a sound's arrival at one ear and then the other is one cue our brains use to locate it.  Other binaural cues depend on the particular form of our outer ears and the acoustic shadows cast by our heads when sounds are to the left, right, or at some elevation above the horizontal plane.
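That interaural time difference can be sketched numerically.  A minimal Python example using Woodworth's classic approximation (the head radius and speed of sound below are assumed typical values, not anything specific to the workshop):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, air at roughly 20 degrees C (assumed)
HEAD_RADIUS = 0.0875     # m, an average head radius (assumed)

def interaural_time_difference(azimuth_deg):
    """Woodworth's approximation of the interaural time difference (ITD)
    for a distant source at a given azimuth (0 = straight ahead,
    90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# A source directly to the side (90 degrees) gives the maximum ITD,
# on the order of two thirds of a millisecond:
max_itd_ms = interaural_time_difference(90) * 1000
```

A difference of well under a millisecond is enough for the brain to assign a direction, which is why precise speaker timing matters so much later in this post.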

So far these spatialization techniques refer to a source-listener relationship.  A sound--a pressure wave--is emitted from a source, and after a period of time and a certain reduction in signal strength, it reaches our ears and is perceived to have a distance and direction from the listener.
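The two quantities in that source-listener relationship--the travel time and the loss of signal strength--fall out of simple physics.  A small Python sketch, assuming a point source in a free field and the usual inverse-distance (1/d) amplitude law:

```python
SPEED_OF_SOUND = 343.0  # m/s (assumed)

def propagation(distance_m, ref_distance_m=1.0):
    """Return (delay in seconds, amplitude gain) for a point source
    heard from distance_m, relative to a reference distance.
    Gain follows the 1/d law, clamped inside the reference distance."""
    delay = distance_m / SPEED_OF_SOUND
    gain = ref_distance_m / max(distance_m, ref_distance_m)
    return delay, gain

# A garbage truck 10 m up the street: roughly 29 ms of delay,
# and the signal at a tenth of its 1 m reference amplitude.
delay, gain = propagation(10.0)
```

Real rooms add reflections and air absorption on top of this, but delay and attenuation are the two cues a spatializer manipulates first.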

Yet there is another, more interesting (or at least more architectural) way to think about spatializing sound, and that is to think of sound as the space between source and listener.  So rather than thinking about what emits the sound (the object) or what the sound is perceived as (by the listener), we simply think about sound as the vibration of something within a space.  That 'something' is usually air, but anything which can vibrate--that is, all matter--can transmit a sound.  There is space inside of a chunk of lead.  But I digress: the important point here is that to become aware of sound as a field condition, one must divorce oneself from the position of an observer listening to a source of sound.

A set of questions arises around how this spatial field is constructed and inhabited.  If we are talking about producing this space virtually, about capturing this space in some kind of digital format, how do we go about doing that?  And once it is captured, how is it reproduced?  And finally, how do we understand and navigate through a virtual space projected into a real space?

The 8-channel speaker setup at the GAFFTA workshop.
The dashed-line indicates a virtual space reconstructed by the listener.  
In the case of our workshop, we had an eight-speaker array on which we listened to some pretty awesome pieces of sound.  (Some were composed by one of the co-teachers in the workshop, Fernando Lopez-Lezcano, for even larger arrays of speakers.)  I think a lot of the stuff we listened to can be called soundscapes.  These sonic landscapes were projected into our space and moved around the room via the speakers, but they did not necessarily construct a geometrically specific virtual space.

Listening in this 8-channel environment is ideal at the sweet spot.  Located at the center of the array, this point provided the most accurate spatialization, as the speaker delays are calibrated so that sound from every speaker arrives at this infinitesimally small point in space at the same time.

One thing I am curious about is how to move that sweet spot.  Let's say you have a crowd of people within the array, and you want to project, say, a claustrophobic space on certain individuals within the crowd.  I posed this possibility to the workshop leaders, and we concluded that it is possible to move the sweet spot around by adjusting the time delays of the speakers.  As long as the array is calibrated precisely, there could be a moving location inside the array that receives sound from all the sources at the same time.  Imagine running around with your ears out trying to follow this invisible point in space!
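The delay adjustment described above can be sketched: delay each speaker by the difference between its distance to the target point and the farthest speaker's distance, divided by the speed of sound, so every wavefront arrives at the target simultaneously.  A hypothetical example in Python (the ring geometry and radius are invented for illustration, not the actual GAFFTA setup):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed)

def sweet_spot_delays(speaker_positions, target):
    """Per-speaker delays (seconds) so that sound from every speaker
    arrives at `target` at the same instant.  The farthest speaker
    gets zero delay; nearer speakers wait to compensate."""
    dists = [math.dist(p, target) for p in speaker_positions]
    farthest = max(dists)
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# A hypothetical ring of 8 speakers, radius 3 m, centered on the origin,
# with the sweet spot shifted off-center to (1, 0):
ring = [(3 * math.cos(2 * math.pi * k / 8), 3 * math.sin(2 * math.pi * k / 8))
        for k in range(8)]
delays = sweet_spot_delays(ring, (1.0, 0.0))
```

Animating the target point over time would move the sweet spot through the room, which is the "invisible point in space" game imagined above.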

A virtual space is shuffled around inside of the speaker array.

It gets much more complicated, apparently, when you try to move this 'claustrophobic space' around along with the sweet spot.  This would require decoding the space inside the loudspeaker array.  I'm still not sure what that means, but it sounds like fun to me.  The workshop coordinators, however, didn't think it was possible.  The problem, I believe, is that we are dealing with simple trigonometry when we project a sound into a point in space (the location of the listener).  However, an architectural fragment of spatial sound, with three dimensions, is not a point in space--it is a field condition.  And a single speaker is not capable of mapping this space--it depends upon the array of speakers to do this.

So, how can we project a spatial field into another spatial field?  Overlapping soundscrapers.  I'm working on it.