But there is an emerging area of data analysis and communication that performance improvement professionals should keep on their radar: the use of sound to present and explore data. Called sonification, this relatively new field (although many of its concepts date back decades) involves the “display” of data using sound. A simple example is the clicking of a Geiger counter.
One reason for the interest in using sound to present data is that human hearing offers strengths that complement our sense of sight. According to The Economist:
…hearing presents a different set of strengths from that of vision. Humans can hear frequencies across three orders of magnitude, from about 20 to 20,000 hertz, and can discern tiny differences in those frequencies. They can deal with volumes from that of a pin dropping to a rock concert. Perhaps hearing’s greatest strength is its temporal resolution, 100 times finer than that of vision: you will know if your favourite band’s drummer is as little as a few thousandths of a second late on the beat.
…hearing can discern much more than just pitch and pace: it can pick up multiple instruments with differing timbres, keeping track of each note’s pitch, and even its attack and decay—how fast it rises and falls in volume. That is where another kind of sonification called parameter mapping comes in. This can turn scads of data sources into one stream of sound. The rise and fall of one variable is mapped to, say, the volume of a synthesized violin, while the shift of another is mapped to its pitch. Many different timbres of sound can be added, resulting in a data-rich soundscape.
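The parameter mapping described above can be sketched in a few lines of code. In this minimal Python sketch, every function name, the 200–800 Hz pitch band, and the toy data are illustrative assumptions, not details from the article: one data stream is linearly rescaled to drive pitch while a second drives volume, yielding one (frequency, amplitude) pair per time step.

```python
# Minimal parameter-mapping sketch (all names and ranges are invented
# for illustration). One variable drives pitch, a second drives volume.

def linmap(x, lo, hi, out_lo, out_hi):
    """Linearly rescale x from [lo, hi] to [out_lo, out_hi]."""
    return out_lo + (x - lo) * (out_hi - out_lo) / (hi - lo)

def parameter_map(series_a, series_b):
    """Map variable A to pitch (200-800 Hz) and variable B to volume (0-1)."""
    a_lo, a_hi = min(series_a), max(series_a)
    b_lo, b_hi = min(series_b), max(series_b)
    return [
        (linmap(a, a_lo, a_hi, 200.0, 800.0),   # pitch follows variable A
         linmap(b, b_lo, b_hi, 0.0, 1.0))       # volume follows variable B
        for a, b in zip(series_a, series_b)
    ]

# Example with two toy data streams:
tones = parameter_map([10, 20, 30], [5, 0, 10])
```

Adding more streams, each mapped to a different timbre, is what builds the “data-rich soundscape” the quotation describes.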
Human hearing is extremely good at dealing with such noisy input. Making out what your interlocutor is saying at a crowded cocktail party, or noticing the approach of a predator among the cacophony of birds and bugs in a rain forest, can be done. “Where there are subtle changes of noise, or patterns in the change, that’s where the ear really takes over,” says Andy Hunt, of the University of York, in Britain.
If you’re interested, there is a free resource available online, The Sonification Handbook, that includes theory and sound examples to illustrate concepts. The site Earthzine also has examples, as does the Experiential Music Lab, which researches new ways to generate music, such as from the movement and location of people in a room, or its “Twitter Radio,” which converts hashtags into sounds based on their emotional meaning and frequency of use on Twitter.
Rather than visually scanning data sets, such as medical or financial records, for patterns or anomalies, one could listen for “discordant” notes that jump out and signal an area requiring further investigation. A potential advantage of this method is that it frees the other senses, such as sight, for other tasks while the ears listen for unusual disruptions in an otherwise predictable, smooth musical stream in a certain key. For example:
David Worrall of the Australian National University believes parameter-mapping sonifications are particularly suited to monitoring of complex systems. When he began to work with the Fraunhofer Institute, in Germany, on a project to analyse network activity as a proxy for how much different departments were co-operating, those in the institute’s IT department were sceptical. But through regular listening to his sonification, he says “they started to notice patterns in the flow of the network—this would happen, then that, and then suddenly the printer drivers would go down.” What began as a managerial query has become a network-monitoring tool. Dr Worrall has also developed a sonification of the stockmarket for an Australian government research group, in order to spot insider trading. (Source: The Economist, March 19, 2016)
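The “discordant note” idea can be sketched as follows. This toy Python example is not Dr Worrall’s method; the expected band, the one-octave mapping, and the fixed tritone alarm note are all invented assumptions. Readings inside the normal band are quantized to a C-major scale, so routine data sounds consonant, while out-of-band readings trigger an out-of-key note that jumps out at the listener.

```python
# Toy sketch of anomaly detection via musical dissonance (all parameters
# are invented for illustration, not taken from the article).

C_MAJOR_DEGREES = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the C-major scale
DISSONANT = 66  # MIDI F#4: a tritone above C, always out of key

def snap_to_scale(note):
    """Snap a MIDI note number to the nearest pitch in C major."""
    octave, pc = divmod(note, 12)
    nearest = min(C_MAJOR_DEGREES, key=lambda d: abs(d - pc))
    return octave * 12 + nearest

def sonify(readings, lo, hi):
    """Map readings to (midi_note, is_discordant) pairs.

    In-band values [lo, hi] span one octave above middle C and are
    quantized to the key; out-of-band values trigger the fixed
    out-of-key alarm note.
    """
    notes = []
    for r in readings:
        if lo <= r <= hi:
            raw = round(60 + (r - lo) * 12 / (hi - lo))
            notes.append((snap_to_scale(raw), False))
        else:
            notes.append((DISSONANT, True))
    return notes
```

Because normal traffic always lands on scale tones, a listener can attend to other work and still notice the moment a sour note interrupts the stream.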