Bridging the Gap
While technologies for visual immersion and real-time telepresence in virtual environments have continued to advance, the creation of auditory immersion has not progressed at the same pace. Efforts to build a usable tool for creating and shaping the auditory component of a virtual environment have been hampered by changing audio hardware and APIs, as well as by changing virtual reality applications.
A Multi-Layer Approach
To address these issues, and to allow virtual reality artists to spend their time creating rather than adapting old code, the tools for creating spatialized sound have been divided into three areas that can leverage current development and standards in digital audio.
The first area is the use of a standard messaging format, rather than a proprietary one, to control the sound server:
Open Sound Control (OSC) messages are being implemented in yg and, through readily available C/C++ libraries, can be implemented in any virtual reality application.
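To make the messaging layer concrete, the following is a minimal sketch of the OSC 1.0 wire format: an address pattern, a type-tag string, and big-endian arguments, each padded to four-byte boundaries. A real application would normally use an existing C/C++ or Python OSC library; the address and arguments shown are illustrative only.

```python
import struct

def pad4(data: bytes) -> bytes:
    # OSC strings are NUL-terminated and padded to a multiple of 4 bytes.
    return data + b"\x00" * ((-len(data)) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC address pattern plus int/float/string arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags, payload = tags + "i", payload + struct.pack(">i", a)
        elif isinstance(a, float):
            tags, payload = tags + "f", payload + struct.pack(">f", a)
        elif isinstance(a, str):
            tags, payload = tags + "s", payload + pad4(a.encode() + b"\x00")
        else:
            raise TypeError(f"unsupported OSC argument type: {type(a)}")
    return (pad4(address.encode() + b"\x00")
            + pad4(tags.encode() + b"\x00")
            + payload)
```

Because every field is padded to four bytes, any encoded message length is a multiple of four, which makes the format easy to parse in C/C++ on the virtual reality side.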
The second area is a middle-layer manager/interpreter that abstracts the server interface for the creation of sound. It is written in Python for its wealth of available libraries (OSC, MySQL, threading, etc.) and its fast development cycle.
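The abstraction the middle layer provides can be sketched as a dispatcher that maps incoming OSC addresses to handler functions, so the virtual reality application never needs to know the backend's command set. The class, addresses, and handler below are hypothetical names, not part of the actual server.

```python
class SoundManager:
    """Hypothetical middle-layer dispatcher for decoded OSC messages."""

    def __init__(self):
        self._handlers = {}  # OSC address pattern -> handler callable

    def handle(self, address):
        """Decorator registering a handler for one OSC address."""
        def register(fn):
            self._handlers[address] = fn
            return fn
        return register

    def dispatch(self, address, *args):
        """Route a decoded OSC message to its handler, if any."""
        fn = self._handlers.get(address)
        if fn is None:
            return None  # unknown address: ignore or log
        return fn(*args)

manager = SoundManager()

@manager.handle("/sound/play")
def play(name, gain):
    # In the real server this would emit a command to the audio backend.
    return f"play {name} at gain {gain}"
```

Keeping the address-to-handler table in one place means a change of audio backend only touches the handler bodies, not the virtual reality application or the messaging format.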
Using the same OSC messaging as the virtual reality application, the Python server communicates with an audio server backend built from SuperCollider (and eventually other audio server environments). By driving these audio servers through a common messaging system, the server can benefit from the support and development of these native packages as they progress through new hardware and software cycles.
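As a sketch of the backend link: SuperCollider's scsynth listens for OSC over UDP (port 57110 by default), and /s_new is its server command for creating a synth node. The synthdef name "spatial_voice" and the argument values below are illustrative assumptions, not part of the actual system.

```python
import socket
import struct

def pad4(data: bytes) -> bytes:
    # OSC strings are NUL-terminated and padded to a multiple of 4 bytes.
    return data + b"\x00" * ((-len(data)) % 4)

def osc(address, *args):
    """Encode a minimal OSC message with int and string arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags, payload = tags + "i", payload + struct.pack(">i", a)
        else:
            tags, payload = tags + "s", payload + pad4(a.encode() + b"\x00")
    return (pad4(address.encode() + b"\x00")
            + pad4(tags.encode() + b"\x00")
            + payload)

# /s_new arguments: synthdef name, node ID, add action, target node.
msg = osc("/s_new", "spatial_voice", 1000, 0, 0)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 57110))  # scsynth's default UDP port
```

Since the same encoding serves any OSC-speaking backend, swapping SuperCollider for another audio server changes only the address patterns and arguments, not the transport.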