Making the Inaudible Audible: Strategies and Disagreements

Research —
‘Making the Inaudible Audible: Strategies and Disagreements’, Proceedings of the International Symposium on Electronic Arts, ISEA 2010 Ruhr, Dortmund.
Excerpt: The study of environmental sound highlights the limitations of human perception. Sonification and audification predominantly use scientific methods that favour transforming sound into the ‘sweet spot’ in the middle of our hearing range. This approach overlooks the differing perceptual effects of high and low, loud and soft, fast and slow sounds. It is my contention that in interpreting how to make inaudible sound audible, we must consider the strengths and limits of human hearing and listening.
The work of acoustic ecology focuses on listening to emphasise an awareness of the overall soundscape (Schafer 1977). This is usually limited to areas that directly affect human presence, and it is largely because underwater and ultrasonic sounds are inaudible to us that we remain unaware of the dominance of anthropogenic over biotic and abiotic sounds. Acoustic levels underwater are unregulated, and given that sound is essential to marine life, this additional sound is having considerable consequences (Stocker 2002; Slabbekoorn 2010).
Sounds can be inaudible or imperceptible to us in different ways. The basic parameters are sounds that lie outside our frequency range (above 20,000 Hz or below 20 Hz), beyond our amplitude sensitivity (either too quiet or too loud), or on a time scale we cannot perceive (too fast or too slow). Comparing scientific with musical terminology: frequency/pitch, amplitude/volume, and time/rhythm or form. In what ways can these sounds be folded into our relatively narrow perceptual bandwidth?
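These parameters can be made concrete in a minimal sketch. The numeric thresholds and the function `why_inaudible` are illustrative assumptions introduced here, not part of the original text; actual audibility varies with the listener and with context.

```python
# Illustrative only: approximate reference limits of human hearing.
# Real thresholds vary between listeners and listening situations.
AUDIBLE_FREQ_HZ = (20.0, 20_000.0)    # frequency / pitch
AUDIBLE_LEVEL_DB_SPL = (0.0, 120.0)   # amplitude / volume (rough range)

def why_inaudible(freq_hz: float, level_db_spl: float) -> list[str]:
    """Return the perceptual parameters on which a sound falls
    outside typical human hearing (empty list = audible)."""
    reasons = []
    if not AUDIBLE_FREQ_HZ[0] <= freq_hz <= AUDIBLE_FREQ_HZ[1]:
        reasons.append("frequency")
    if not AUDIBLE_LEVEL_DB_SPL[0] <= level_db_spl <= AUDIBLE_LEVEL_DB_SPL[1]:
        reasons.append("amplitude")
    return reasons

print(why_inaudible(40_000.0, 60.0))  # ultrasonic, e.g. a bat call
print(why_inaudible(440.0, -10.0))    # audible pitch, below hearing threshold
```

Time, the third parameter named above, is deliberately omitted here: whether an event is too fast or too slow to perceive depends on its structure, not on a single number.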
Scientists and composers, limited to discipline-specific methodologies, are driven by different motivations and priorities in the analysis and use of sound. As a result, approaches to making the inaudible audible generally fall into two camps: analytically strict systems or more intuitive translations. Can scientists, musicians and artists learn from each other in this relatively new area? To what extent do we question the “ostensible neutrality of these listening technologies” (Kahn 1999: 200), given that listening is both personal and contextual (LaBelle 2007)? When making the inaudible audible, what happens if we consider not simply what we hear, but how we listen?
I identify two distinct but overlapping approaches to making the inaudible audible: audification, which scales existing vibratory signals into the human hearing range; and sonification, which translates and maps data onto a choice of sounds. Audification uses the existing signal as its basis, while sonification requires compositional strategies for mapping data (non-vibratory information) onto sounds. Another common strategy is visualisation, in which sound is represented graphically, depicting the parameters of frequency and amplitude over time.
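The distinction between the two approaches can be sketched informally in code. Everything below is a hypothetical illustration: the names `audify` and `sonify`, the crude decimation used to shift a signal into hearing range, and the linear pitch mapping are stand-ins for the far more careful choices actual practitioners make.

```python
import math

SAMPLE_RATE = 44_100  # assumed output sample rate in Hz

def audify(signal: list[float], scale_factor: int) -> list[float]:
    """Audification: treat an existing vibratory signal as audio and
    shift it into hearing range by playing it back faster, here done
    crudely by keeping every scale_factor-th sample (decimation)."""
    step = max(1, scale_factor)
    return signal[::step]

def sonify(data: list[float], low_hz: float = 220.0,
           high_hz: float = 880.0, note_sec: float = 0.25) -> list[float]:
    """Sonification: map non-vibratory data values onto chosen pitches,
    one sine tone per datum. The pitch range and note length are
    compositional decisions, not properties of the data."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    n = int(SAMPLE_RATE * note_sec)
    samples: list[float] = []
    for value in data:
        freq = low_hz + (value - lo) / span * (high_hz - low_hz)
        samples.extend(math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                       for i in range(n))
    return samples
```

The asymmetry the excerpt describes is visible in the signatures: `audify` takes only the signal and a scaling choice, whereas `sonify` also demands decisions about which sounds to map the data onto.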