As the International Conference on Auditory Display (ICAD) was, metaphorically, just up the rail line in Newcastle, it felt like a must-attend conference.
The Monday kicked off with the session on Assisting with Everyday Life. Christoph Urbanietz presented continuing work on aiding navigation for people with visual impairments. The approach, built on a Raspberry Pi, focuses on extracting meaningful data from the surroundings and leaving the interpretation to the user. Nida Aziz followed with work on creating auditory routes for visually impaired users; the design considerations are substantial in terms of timing and synchronising the various types of audio. The Sonification of Solar Harmonics presented a toolkit for turning solar data into sonifications, and was followed by a paper on using audio for positional guidance in brain stem surgery. I had not thought about designing colours as sounds for the visually impaired. There appear to be some issues with remembering sounds and with translating RGB colours into audio. That is something to consider for my own work, where aspects of data are being translated.
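To make the colour-to-sound idea concrete for myself, here is a minimal sketch, entirely my own rather than the presenters' method, of one way an RGB colour could be translated into audio parameters: hue mapped to pitch and lightness to amplitude. The frequency range and mapping choices are illustrative assumptions.

```python
# A toy RGB-to-audio mapping (my own illustrative sketch, not the
# method from the talk): hue drives pitch, lightness drives amplitude.
import colorsys

def colour_to_tone(r, g, b, f_low=220.0, f_high=880.0):
    """Map an RGB colour (0-255 channels) to a (frequency_hz, amplitude) pair."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    frequency = f_low + h * (f_high - f_low)  # hue spans the pitch range
    amplitude = l                             # lighter colours sound louder
    return frequency, amplitude

# Pure red: hue 0 gives the lowest pitch; mid lightness gives mid amplitude.
freq, amp = colour_to_tone(255, 0, 0)
print(round(freq), amp)  # 220 0.5
```

Even this trivial version raises the remembered-sounds problem from the talk: two perceptually similar colours can land on tones that are hard to tell apart, let alone recall.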
Alexandra Supper’s keynote took an ethnographic view of the field. Identifying goals such as legitimacy and objectivity, along with wider questions of representation in practice, she explored conferences, practices, and negotiation processes through discourse. She began with the knotty problem of sonification versus visualisation before focusing on techniques. Data karaoke was one practice she highlighted, mixing the sonic and the physical body in a way that illustrates, embodies, and lends authority. I think this is something to be tried; it refocuses one on what needs attention in the practice. Reviewing the standards and how one defines the sonification community, she looked at the way its history is written and how its divergences show the groundwork of creating a new field. The talk raised a few avenues that need exploring.
After lunch, we had a talk on sonification and Tourette’s syndrome, with sound design linked to both Schaeffer’s and Chion’s notions of the sound object. Visual arts were the focus of the next talk: making art accessible through sound, using machine learning and sentiment analysis of the artwork. Audio graphs were the focus of the talk on consumption, embedding sound as part of the presentation. Sound’s phenomenology was raised as a way to engage with the projected sound and the underlying data, hinting at underlying connections. Whilst I do like this, part of me is concerned about dark patterns and directing the listener, though this appears in visualisation as well. Nees’s Eight Components of a Design Theory of Sonification was a rich paper on design but also on the sustainability of the sonification artefact, something that is close to my heart.
The final session was on perception. The first project looked at the use of sound within a physical space for immersion, which provoked questions about the virtual and the physical. Considering the design of the mapping, previous experience had an effect on how people interacted with the sonification. (This came up when I chatted to the colour mapping team as well.) My takeaway from the following talks was that the choice of mapping, in terms of both the parameters and the actual emitted sounds, will affect the interaction, so it needs testing.
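The point about mapping choice is easy to demonstrate. Here is a small sketch of my own (not from any of the talks): the same data series mapped to pitch linearly versus logarithmically produces quite different perceptual spacing, which is exactly the sort of design decision that needs testing with listeners. The frequency range is an arbitrary assumption.

```python
# Two alternative data-to-pitch mappings (my own toy example): linear
# gives equal steps in Hz, logarithmic gives equal musical intervals.
def linear_map(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Map a data value onto frequency with equal Hz spacing."""
    t = (value - v_min) / (v_max - v_min)
    return f_min + t * (f_max - f_min)

def log_map(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Map a data value onto frequency with equal interval spacing."""
    t = (value - v_min) / (v_max - v_min)
    return f_min * (f_max / f_min) ** t

data = [0, 25, 50, 75, 100]
print([round(linear_map(v, 0, 100)) for v in data])  # [220, 385, 550, 715, 880]
print([round(log_map(v, 0, 100)) for v in data])     # [220, 311, 440, 622, 880]
```

The numbers are evenly spaced in both cases, but the logarithmic version sounds evenly spaced, since pitch perception is roughly logarithmic in frequency; which one listeners actually find clearer is an empirical question.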
I’m not going to describe the algorave, but it was interesting. I do find them to be slightly mixed bags in terms of performativity, music, and style. Nick Collins’s set was perhaps the highlight for me.