In an ongoing experiment, I am playing around with representing sonification in HTML so that it can be shared within a webpage, focusing on the schema.org Event and AudioObject types. Following some currently unpublished thesis work, I was curious to model the sonification I was writing about and embed that model in the mark-up, so that machines can extract the data and then either replicate the model or alter it.
As an initial version, I drew upon the microdata representation, which can be parsed with Python’s microdata package.
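To illustrate the idea, here is a minimal sketch: an HTML fragment marked up with schema.org Event and AudioObject types, and a small stdlib parser that pulls out the itemtype values. The Event and AudioObject types are real schema.org vocabulary, but the nesting, the `workFeatured` property, and the example values are my own assumptions about how such a mapping might look; the microdata package extracts the full item tree rather than just the types collected here.

```python
from html.parser import HTMLParser

# A hypothetical sonification fragment: an Event with a nested AudioObject.
# The property names and values are illustrative, not a settled mapping.
SNIPPET = """
<div itemscope itemtype="http://schema.org/Event">
  <span itemprop="name">Temperature rises above threshold</span>
  <div itemprop="workFeatured" itemscope itemtype="http://schema.org/AudioObject">
    <span itemprop="name">Sine tone, 440 Hz</span>
  </div>
</div>
"""

class ItemtypeCollector(HTMLParser):
    """Collect itemtype values from elements that declare itemscope,
    roughly the entry point that microdata.get_items() also uses."""
    def __init__(self):
        super().__init__()
        self.itemtypes = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)  # boolean attributes like itemscope map to None
        if "itemscope" in attrs and "itemtype" in attrs:
            self.itemtypes.append(attrs["itemtype"])

parser = ItemtypeCollector()
parser.feed(SNIPPET)
print(parser.itemtypes)
# → ['http://schema.org/Event', 'http://schema.org/AudioObject']
```

Even this crude extraction shows the appeal: the sound description travels with the page and is machine-readable, but nothing here yet links the event to the sound in a semantically explicit way.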
I am not convinced by the results so far. I really need an explicit link between the event(s) and the sound(s), so for my purpose RDFa may be a better option for the mark-up. I also need to describe the model that I am using.