1 – Anthropocene in C Major (Excerpts) [12 min] – Jamie Perera
https://vimeo.com/721061525/8d10667338
‘Anthropocene In C Major’ is an experience of human impact on earth, felt through an immersive AV installation that turns data into sound.
From 12,000 years ago to the present, participants will hear breakthroughs like the invention of the wheel and the Industrial Revolution, but also data trends that show the exploitation of people and the planet. At what now seems like a breaking point for our species, what can we learn from listening to the past, and what meaning can this bring to our present? Anthropocene In C Major provokes a response to climate change for a species paralysed by its own extractive structures. It invites us to confront and understand our own ecological and systemic grief, through the form of sonification, and within the scale of our modern existence on Earth.
2 – COVID-19 Genomic Navigator [5 min] – Ka Hei Cheng
COVID-19 Genomic Navigator is a motion-sensing glove that sonifies protein genomic data of COVID-19, extracted from the National Center for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). The PDB files describe the structure of the proteins at the atomic and amino-acid scale, which contributes to the multi-dimensional parameter mapping of the synthesized sounds. Three data sets were selected for the project, 7B3C, 7D4F, and 7KRP, all related to RNA-dependent RNA polymerase (RdRp). The data were formatted as prn or csv files and read into Max patches, which send OSC messages to Kyma (Symbolic Sound); Kyma performs the audio signal processing in multiple synthesizers through parameter mapping. The three protein data sets are distributed evenly in space, while the motion-sensing glove navigates and sonifies the data in both physical space and time synchronously. The glove captures hand-gesture and movement data, which is sent from the Arduino to the Max patch through OSC messages; this controls a graphical user interface (GUI) in Max that manages the parameters of the Kyma synthesizers, again via OSC. The project also incorporates machine learning to interpret the data from the glove.
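As a rough illustration of the parameter-mapping stage described above (and not the project’s actual Max/Kyma patch), a Python sketch might read a csv export of one PDB entry and forward scaled values as OSC messages; the file name, column names, OSC address and port are all hypothetical.

```python
# Illustrative sketch only: the csv columns, OSC address and port are
# hypothetical stand-ins for the Max/Kyma parameter-mapping patch described above.
import csv
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)   # assumed address of the synthesis engine

def scale(value, lo, hi):
    """Map a raw value into the 0..1 range expected by a synth parameter."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

with open("7B3C_atoms.csv") as f:             # hypothetical csv export of PDB entry 7B3C
    for row in csv.DictReader(f):
        # Atomic coordinates feed the multi-dimensional parameter mapping.
        params = [
            scale(float(row["x"]), -100.0, 100.0),
            scale(float(row["y"]), -100.0, 100.0),
            scale(float(row["z"]), -100.0, 100.0),
        ]
        client.send_message("/protein/7B3C/params", params)
```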
Gesture recognition and dynamic time warping (DTW) in Wekinator, applied to the motion-sensing glove, are used to perform machine learning and to catalyze the sonification of the interactions between the different protein data sets. The motion-sensor data is sent from Max to Wekinator for analysis. The gesture-recognition output from Wekinator is sent back to Max, where it triggers programming events that manifest the musical characteristics of the proteins through the control parameters of the synthesizers in Max. Both the protein genomic data and the gestural data are reformatted and routed into several synthesizers for audification and sonification. The data for five categorized glove gestures were labelled, recorded, analyzed, and trained into the machine learning model before the performance. When running in real time, some gestures are not categorized by the machine learning system; these non-categorized gestures are used for musical expression alongside the five categorized gestures to build the growth and development of the piece.
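The glove-to-Wekinator round trip could be approximated with a few lines of OSC plumbing, sketched below. It assumes Wekinator’s usual defaults (inputs on port 6448 at /wek/inputs, outputs on port 12000 at /wek/outputs), and trigger_event() is a hypothetical stand-in for the Max-side event logic, not part of the actual project.

```python
# Sketch of the glove -> Wekinator -> event-trigger loop described above.
# Ports and addresses assume Wekinator defaults; trigger_event() is hypothetical.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

wek_inputs = SimpleUDPClient("127.0.0.1", 6448)    # Wekinator listens for inputs here

def send_glove_frame(sensor_values):
    """Forward one frame of glove sensor data for gesture classification."""
    wek_inputs.send_message("/wek/inputs", [float(v) for v in sensor_values])

def trigger_event(gesture_class):
    """Hypothetical stand-in for the Max-side logic that turns a recognised
    gesture into synthesizer control changes."""
    print("recognised gesture", gesture_class)

def on_wek_output(address, *values):
    """Handle Wekinator's classification output."""
    gesture = int(values[0])
    if gesture in (1, 2, 3, 4, 5):                 # the five trained gestures
        trigger_event(gesture)
    # anything else is treated as a non-categorised gesture and used freely

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_wek_output)
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
```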
3 – Sound of the social – sonification of Experience Society [2 min 30 sec] – Joachim Allhoff
https://soundcloud.com/joachim-allhoff/social-milieus-sonification-1
In ’Experience Society’ (Erlebnisgesellschaft), the German social scientist Gerhard Schulze describes five social milieus. They are characterized and named through leisure activities and chosen lifestyle: the high-class milieu, self-realization milieu, integration milieu, harmony milieu and entertainment milieu. For this sonification project (a concert piece), data from the ’Experience Society’ has been processed and converted using the Sonification Sandbox software. The data are based on typical preferences for leisure activities and chosen lifestyles for each of these five milieus.
4 – Adrok Sonification Concert: Three Improvisations [10 min 26 sec] – Sijin Chen et al.
- https://vimeo.com/702167534
- https://vimeo.com/702167497
- https://vimeo.com/702167705
Artistic team: Make Li, Sijin Chen, James Harkins, Lin Zhang
Scientific team: Kees van den Doel, Colin Stove, Gordon Stove
Introduction
This project is the result of an interdisciplinary, cross-cultural and multi-locational collaboration between Edinburgh, Vancouver, Berlin, Guangzhou and Shanghai. Scientists and sound artists teamed up to produce a recorded concert of three works. Our sonic raw materials are radar and microwave data provided by Adrok Ltd., normally obtained when imaging either the subsurface of the earth or, in one example, the human heart. The scanning and imaging technology used by Adrok is somewhat similar to seismic probing, except that radar and microwaves are used instead of sound waves. Recently at Adrok we have introduced a listening-based method for equipment quality testing. The current collaboration between artists and scientists pursues two goals. First, by giving musicians a free hand to create music from our data, we hope to discover new methods of extracting useful information from it. Second, by sharing the scientific context of our data with the artists, we hope to achieve a fusion of the different senses of beauty found in the organized mathematical structures of nature and in free human creation. Three distinctive works were created from the same set of ADR data.
STATEMENTS OF THE ARTISTS
Improvisation 1 (Make Li, Sijin Chen)
Using the piano material as the signal source for the rhythm part and as the trigger for a sampling synthesizer, the dynamics and pitch of the piano are used to encapsulate the original Adrok signal (R7WOODEN Wreck-X1 in Capillary-firth of Forth-ALL). The original signal is output to a channel and sent to an IR reverb; the impulse response of the reverb resamples the “resonance” information of the Adrok signal. The signal is divided into two tracks that are improvised at different times, to control the balance between the direct sound and the feedback effect. For the rhythm part, the work uses the (GCS-Sonic (4) GCS’s HEART Music-Spiral) material to form fluid variations through extreme LFO and pitch variation.
This work attempts to explore how non-“natural” sound can imitate natural environmental noise in timbre creation. The improvised IR part, triggered by the piano, becomes the ambient sound. The heartbeat data material is used to evoke “sinking or diving” and an “inner rhythm.” We try to bring out the poetics of data from rock and heart scanning and thereby enable a “travel” in time, space and culture. The heartbeat data was collected from a scientist (Dr. Colin Stove) in Edinburgh. A personal meditation is merged into the depersonalised “sound of rock from Shetland.” The piano material is an improvisation recorded in Berlin on a piano built there more than 80 years ago. The piano improvisation is a reaction to the sonifications and is mixed back into the final work, which was produced in a studio in Guangzhou. Information about locations and personal and cultural backgrounds is embedded in the acoustic result, but left open for listeners to re-interpret and re-situate.
Improvisation 2 (James Harkins)
The aim of the work was twofold: first, to explore analytic sonification strategies for these data sets, and second, to apply these in a free, artistic context.
I was struck by the observation that a set of scans would show broad similarities with minute differences between them, where the differences indicate features of interest. It would be easier to subtract out the similarities in the frequency domain. For my demonstration, I use the heart scans (“GCS-SONIC(4) GCS’s HEART Music-ALL”), where the data had already been assembled into one audio file. For analysis, these need to be split into frames. Because the shapes are mostly similar, autocorrelation is a good technique to identify the period of repetition. Then an automated process, coded in the SuperCollider programming language, reads each frame in turn, resamples up to the next power of two size, performs a Fast Fourier Transform and appends the spectral data in polar format to a new data file to use for sonification.
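A rough transcription of that analysis pass into Python/NumPy (the original is SuperCollider, and here each frame is zero-padded rather than resampled to the next power of two) might look like this; the file name and the minimum autocorrelation lag are assumptions.

```python
# NumPy sketch of the analysis stage described above (the original is SuperCollider).
# The file name and the minimum autocorrelation lag are assumptions.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("heart_scans.wav")          # hypothetical input file
x = x.astype(np.float64)
if x.ndim > 1:
    x = x.mean(axis=1)                             # fold to mono

# Estimate the period of repetition via autocorrelation of a short excerpt.
excerpt = x[:rate * 10]
ac = np.correlate(excerpt, excerpt, mode="full")[len(excerpt) - 1:]
min_lag = 256                                      # skip the zero-lag peak region
period = int(min_lag + np.argmax(ac[min_lag:]))

# Slice into frames, zero-pad to the next power of two, FFT, keep polar spectra.
nfft = 1 << (period - 1).bit_length()
spectra = []
for start in range(0, len(x) - period, period):
    frame = np.pad(x[start:start + period], (0, nfft - period))
    spec = np.fft.rfft(frame)
    spectra.append(np.stack([np.abs(spec), np.angle(spec)]))   # magnitude, phase

np.save("heart_polar_frames.npy", np.array(spectra))           # data file for sonification
```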
SuperCollider’s extension plugins include a phase-vocoder buffer reader, for which the data were prepared in the previous step. To differentiate figure from ground, I read two adjacent frames, one at the requested position and the one immediately preceding, and subtract their magnitudes, leaving behind partials that differ. Moving slowly through the data set reveals obvious points of interest. (It is perhaps not enough to compare only two adjacent frames; this technique could be improved by estimating the stable components over a wider segment of the file.)
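The figure-from-ground idea, subtracting the magnitudes of adjacent frames so that shared partials cancel, can be sketched as follows, reading the polar-frame file produced in the previous sketch; the original uses SuperCollider’s phase-vocoder buffer reader rather than this offline NumPy analogue.

```python
# Sketch of the magnitude-difference idea: partials shared by adjacent frames
# cancel out, leaving only what changed (the "figure" rather than the "ground").
import numpy as np

spectra = np.load("heart_polar_frames.npy")        # (frames, 2, bins) from the sketch above

def figure_frame(position):
    """Return the residual magnitudes at a requested frame position."""
    mag_cur, phase_cur = spectra[position]
    mag_prev = spectra[position - 1][0]
    residual = np.clip(mag_cur - mag_prev, 0.0, None)   # keep only partials that differ
    return residual, phase_cur

# Moving slowly through the data set: report the most prominent changing partial.
for pos in range(1, len(spectra)):
    residual, _ = figure_frame(pos)
    print(pos, int(residual.argmax()), float(residual.max()))
```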
Playing further with the sound, a phase vocoder spectral enhancer plugin helps to create dynamic shapes within notes, or accentuate percussive attacks. I then reformatted this synthesis code to be compatible with my live-coding programming dialect, ddwChucklib-livecode (which also runs within SuperCollider, and is released as a Quark extension package) and prepared a short performance with an ambient layer of long tones (which is later cut up into rhythm) and a percussive layer with irregular rhythms.
Improvisation 3 (Lin Zhang)
Listening through the provided Adrok material, I discovered some samples that already had intriguing timbral qualities. Even though the documented signals have different physical meanings, the psychological sonic impressions they create can trigger alternative cultural and musical memories or imaginings. The aim of this composition is to use the original signal wave files as intact as possible, and to construct the piece from fundamental musical principles from around the world: roots rhythmical signatures and a tranquil soundscape, mixed with a dark drone-metal compression.
5 – Sounding Out Pollution [7 min] – Robert Jarvis
https://www.youtube.com/embed/YpBIpTcNXbk
The ‘Sounding Out Pollution’ composition was created using data supplied by WM-Air at the University of Birmingham, and is in three sections, as follows:
1 LOCATION MATTERS (The importance of place)
https://www.youtube.com/embed/4Rv7j2Bkm0Q
The yearly average air quality readings for a range of urban and rural locations are presented. As one might expect, these vary enormously between settings, and so presented me with a wide numeric range. I therefore needed a sound that could not only comfortably reflect that range, but also connect with both the pastoral and urban settings, as well as convey a sense of increasing tension or urgency as locations with higher pollutant levels were introduced. For these reasons, I decided to present this data through the sound of the string family, using the deep-toned double bass for the lowest readings, through to the cello, viola and violin for the highest readings.
The data readings were converted directly into frequency and assigned to the closest musical notes. These were then synchronised with the displayed air quality readings, with the nitric oxide (NO) readings sounding in the left speaker and the nitrogen dioxide levels in the right speaker. The map on the left-hand side of the screen displays the whereabouts of the chosen locations.
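The ‘reading to frequency, then nearest note’ step could be modelled as below; the scaling factor from reading to frequency is an assumption, since the piece’s actual ranges are not given here.

```python
# Sketch of the mapping described above: reading -> frequency -> nearest note.
# The reading-to-frequency scale factor is a placeholder, not the piece's value.
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def reading_to_note(reading, hz_per_unit=10.0):
    freq = reading * hz_per_unit                       # direct conversion to frequency
    midi = round(69 + 12 * math.log2(freq / 440.0))    # snap to the nearest tempered note
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

print(reading_to_note(5.5))    # placeholder rural reading  -> double-bass register
print(reading_to_note(48.0))   # placeholder urban reading  -> violin register
```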
2 PICK YOUR MOMENT (The difference the time of day makes)
https://www.youtube.com/embed/Z5C2p509thc
Nitrogen dioxide levels across the West Midlands region of the UK are presented both visually and aurally. The map displays the seven different counties of the West Midlands and the changing levels of nitrogen dioxide, hour by hour, throughout an average day. I have mapped these as follows.
First of all, each of the counties has its own place within the stereo spectrum (west to east being represented as left to right). Then the different levels of pollution, as indicated by the map, are assigned a musical tone, with higher levels mapped to higher pitches. Finally, the volume of each of these levels is mapped to its spread: the amount of the county’s area that a particular level of pollution covers, as indicated on the map. So, if a pollution level extends halfway across its county, it is given a volume setting of 0.5; if a third, then 0.33, and so on.
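Putting the three mappings together (county to pan position, pollution band to pitch, areal coverage to volume), a minimal sketch might look like this; the pan positions, the semitone-per-band pitch rule and the example coverage figures are illustrative, not the piece’s actual values.

```python
# Minimal sketch of the three mappings described above; the pan positions,
# semitone-per-band pitch rule and coverage figures are illustrative only.
COUNTY_PAN = {                      # west -> east rendered as left (-1) -> right (+1)
    "Wolverhampton": -1.0, "Dudley": -0.7, "Sandwell": -0.4, "Walsall": -0.1,
    "Birmingham": 0.2, "Solihull": 0.6, "Coventry": 1.0,
}

def band_to_pitch(band_index, base_midi=48):
    """Higher pollution bands map to higher pitches (one semitone per band here)."""
    return base_midi + band_index

def coverage_to_volume(covered_fraction):
    """Volume equals the fraction of the county the band covers (0.5, 0.33, ...)."""
    return max(0.0, min(1.0, covered_fraction))

# One hour of the animation for one county, with made-up coverage figures:
for band, fraction in enumerate([0.5, 0.33, 0.1]):
    print("Birmingham", COUNTY_PAN["Birmingham"],
          band_to_pitch(band), coverage_to_volume(fraction))
```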
3 CHOOSE YOUR PATH CAREFULLY (Taking the scenic route)
https://www.youtube.com/embed/X0nMgM8QgkA
Nitrogen Dioxide and particulate matter levels are presented for a series of locations on an imagined route through the centre of the city of Birmingham from its rural outskirts. The journey (from Lickey Hills to Sutton Park) is traced on the accompanying map and synchronised with the sound of the rising and falling particulate pollutants PM2.5 and PM10 (in the left-hand and right-hand loudspeakers respectively). At each location, the nitrogen dioxide levels are presented as three-note chords representing the minimum, mean and maximum levels of modelled air quality data for each site. According to Google Maps, it is possible to cycle this route in 1 hour 59 minutes. With this composition the journey is made in just under two minutes!
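At each location, then, five values sound at once: PM2.5 in the left channel, PM10 in the right, and a nitrogen dioxide min/mean/max chord. A compact sketch of that per-location mapping, with a hypothetical reading-to-frequency scaling, is:

```python
# Sketch of the per-location mapping described above; the reading-to-frequency
# scaling is hypothetical, not taken from the composition.
import math

def to_midi(value, hz_per_unit=10.0):
    """Convert a reading to a frequency and snap it to the nearest MIDI note."""
    return round(69 + 12 * math.log2((value * hz_per_unit) / 440.0))

def render_location(pm25, pm10, no2_min, no2_mean, no2_max):
    return {
        "left_channel_pm25": to_midi(pm25),
        "right_channel_pm10": to_midi(pm10),
        "no2_chord": [to_midi(no2_min), to_midi(no2_mean), to_midi(no2_max)],
    }

# Placeholder readings for one point on the Lickey Hills -> Sutton Park route:
print(render_location(pm25=8.0, pm10=14.0, no2_min=12.0, no2_mean=25.0, no2_max=41.0))
```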
6 – Sonora_V19 [4 min] – Robert Jarvis
https://www.youtube.com/embed/XVG9eAcTLXA
A sonic interpretation of the reported daily active cases of COVID-19 from nineteen countries, beginning on 22nd January 2020. The work is updated on a monthly basis, and this latest published version covers the period extending to 10th June 2022.
The data-driven artwork focuses on nineteen of the earliest countries to report the SARS-CoV-2 virus, beginning with China, followed by Thailand, Japan, South Korea, USA, Singapore, France, Malaysia, Germany, Italy, Sweden, UK, Iran, Spain, Switzerland, Austria, Norway, Netherlands and Belgium (in that order). Each country has been given its own harmonic relative to the low ‘C’ assigned to China (whose tone you can hear beginning first). As the number of active cases rises and falls, so too does the volume of each country’s tone, and the result is a slowly changing timbre depicting the reported transmission of the disease. Each day is attributed one quarter of a second.
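As a rough model of that mapping (not the author’s Kyma implementation): nineteen harmonics above a low C, each with a per-day amplitude proportional to its country’s active cases, stepped at a quarter of a second per day. The octave of the low C and the normalisation are assumptions.

```python
# Rough NumPy model of the mapping described above, not the Kyma implementation.
# The case counts below are placeholders; the octave of the low C is assumed.
import numpy as np

RATE = 44100
LOW_C = 65.41                        # C2, the low 'C' assigned to China
SECONDS_PER_DAY = 0.25               # each day is attributed a quarter of a second

def render(active_cases):
    """active_cases: array of shape (days, countries), country 0 = China."""
    days, countries = active_cases.shape
    t = np.arange(int(days * SECONDS_PER_DAY * RATE)) / RATE
    day = np.minimum((t / SECONDS_PER_DAY).astype(int), days - 1)
    out = np.zeros_like(t)
    for k in range(countries):
        level = active_cases[:, k] / (active_cases.max() + 1e-9)     # 0..1 volume
        out += level[day] * np.sin(2 * np.pi * LOW_C * (k + 1) * t)  # k-th harmonic
    return out / countries

# Two placeholder days, three placeholder countries:
audio = render(np.array([[10.0, 0.0, 0.0], [20.0, 5.0, 1.0]]))
```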
The data attributed to each country is shown as part of the piece’s visual element – in a graph format plotting the number of reported active cases per 1000 people for each of the nineteen countries, and this is gradually revealed to the listener in synchronisation with the piece’s development.
The data is drawn from the daily updated COVID-19 Data Repository maintained by the Center for Systems Science and Engineering at Johns Hopkins University. This information is then processed with Symbolic Sound’s Kyma software and plotted using JavaScript to convert the numbers into sound and image, and the result is updated accordingly.
‘SonoraV19’ was first presented on the 24th March 2020, and since then has been featured by Sound Art Radio (Apr’20), Radio New Zealand (Apr’20), the digiArtFest (May’20), ‘Art The Science’ (May’20), In Toto Virtual (Jun’20), the Confinement Chronicle (Jul’20), the NewArtFest (Aug’20), Lisbon’s Museu Nacional de História e da Ciência (Jul’21), Auckland’s World of Light exhibition and now the International Conference on Auditory Display (Jun’22).
7 – How Many More [2 min 21 sec] – Robert Jarvis
http://alturl.com/ubo3q
An audio-visual work that takes its inspiration from statistics relating to the almost daily mass shootings in the United States of America. When complete, the artwork will incorporate the entire mass shooting index for the year 2019.
The composition is scored for trombone, piano, electronics and projected text, and will probably work best as a ‘video’ piece, with the screen displaying the text whilst the recorded audio is presented.
As there is no standard definition of a mass shooting, my proposed artwork defines it as an incident in which at least four people are shot. The piece begins, and the events of the year unfold… The trombone plays one note for every person killed, and the piano one for every person wounded. The first piano note for each displayed slide also corresponds to the latitude of the location of the shooting. The electronics carry the ‘memory’ of the fatally shot and come to the fore on the days when there are no displayed incidents. As the piece develops, the electronics take on a more prominent role as the killings accumulate.
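The latitude-to-pitch relationship for the first piano note of each slide could be modelled as a simple linear mapping; the latitude bounds and piano register used below are assumptions, not the piece’s documented values.

```python
# Sketch of a latitude -> pitch mapping for the first piano note of each slide.
# The latitude bounds and MIDI range are assumptions, not documented values.
def latitude_to_midi(latitude_deg, lat_lo=24.0, lat_hi=49.0, midi_lo=36, midi_hi=96):
    """Map a contiguous-US latitude onto a piano register."""
    frac = (latitude_deg - lat_lo) / (lat_hi - lat_lo)
    return round(midi_lo + frac * (midi_hi - midi_lo))

print(latitude_to_midi(41.88))   # a placeholder northern latitude -> upper-middle register
```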
8 – Signal to Noise Loops v4 [5 min 11 sec] – Roddy
Signal to Noise Loops v4 is a data-driven audiovisual piece. It is informed by principles from the fields of IoT, Sonification, Generative Music, and Cybernetics. The piece maps data from noise sensors placed around Dublin City to control a generative algorithm that creates the music. Data is mapped to control the sound synthesis algorithms that define the timbre of individual musical voices and data is also mapped to control post-processing effects applied in the piece.
The first movement uses data recorded from noise level sensors around Dublin in March 2019, before the COVID-19 pandemic, when the bustling nature of the city is well represented. The second movement uses data recorded in March 2020, when restrictions and social distancing measures were introduced, culminating in a full lockdown on March 27th. This section is notably more sedate.
The piece was created with Python, Ableton Live, and Processing.
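The kind of mapping described above, noise-level data shaping both voice timbre and post-processing, could be sketched as follows; the parameter names and ranges are illustrative and not the piece’s actual Python/Ableton setup.

```python
# Illustrative mapping of a noise-sensor reading to synthesis and effect parameters.
# The parameter names and ranges are assumptions, not the piece's actual setup.
def map_noise_to_parameters(noise_db, db_lo=35.0, db_hi=90.0):
    frac = min(1.0, max(0.0, (noise_db - db_lo) / (db_hi - db_lo)))
    return {
        "voice_brightness": frac,             # timbre of an individual musical voice
        "filter_cutoff_hz": 200 + frac * 8000,
        "reverb_wet": 0.8 - 0.6 * frac,       # post-processing: a quieter city gets more space
    }

# Placeholder readings: a pre-pandemic evening versus a lockdown evening.
print(map_noise_to_parameters(78.0))
print(map_noise_to_parameters(52.0))
```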
9 – Jingle, Pluck, and Hum: Sonifications of Space Imagery Data [2 min 37 sec] – Matt Russo et al.
Much of our Universe is too distant for anyone to visit in person, but we can still explore it. Telescopes give us a chance to understand what objects in our Universe are like in different types of light. By translating the inherently digital data (in the form of ones and zeroes) captured by telescopes in space into images, astronomers can create visual representations of what would otherwise be invisible to us.
But what about experiencing these data with other senses, like hearing? Sonification is the process that translates data into sound. Our new project brings parts of our Milky Way galaxy, and of the greater Universe beyond it, to listeners for the first time.
10 – 5000 Exoplanets: Listen to the Sounds of Discovery [1 min 17 sec] – Matt Russo and Andrew Santaguida
https://www.system-sounds.com/5000exoplanets/
On March 21, 2022, the official count of known exoplanets passed 5000! To celebrate, NASA asked us to animate the planet discoveries in time and convert them into music. A circle appears at the position of each exoplanet as it is discovered, with a colour that indicates which method was used to find it (see below). The size of the circle indicates the relative size of the planet’s orbit, and the pitch of the note indicates the relative orbital period of the planet. Planets with longer orbital periods (lower orbital frequencies) are heard as lower notes and planets with shorter orbital periods (higher orbital frequencies) are heard as higher notes. The volume and intensity of the note depend on how many planets with similar orbital periods were announced at the same time. The discovery of a single planet sounds quiet and soft, while the discovery of many planets with similar periods is loud and intense. You can also experience the animation as a 360° video.
Radial Velocity (Pink), Transit (Purple), Imaging (Orange), Microlensing (Green), Timing Variations (pulsar, transit, eclipse, pulsation) (Red), Orbital Brightness Modulation (Yellow), Astrometry (Grey), Disk Kinematics (Blue)
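A compact model of the pitch and volume rules described above, one note per discovery with pitch derived from orbital period and volume from how many similar-period planets arrive together; the frequency range and the width of a ‘similar period’ bin are assumptions.

```python
# Sketch of the mapping described above; the frequency range and the width of a
# "similar orbital period" bin are assumptions, not the project's actual values.
import math
from collections import Counter

def period_to_freq(period_days, p_lo=0.1, p_hi=100000.0, f_lo=55.0, f_hi=1760.0):
    """Longer orbital periods (lower orbital frequencies) map to lower pitches."""
    frac = (math.log10(p_hi) - math.log10(period_days)) / (math.log10(p_hi) - math.log10(p_lo))
    return f_lo * (f_hi / f_lo) ** frac

def batch_to_notes(periods_announced_together):
    """Volume grows with how many similar-period planets are announced at once."""
    bins = Counter(round(math.log10(p), 1) for p in periods_announced_together)
    return [(period_to_freq(p), min(1.0, 0.2 * bins[round(math.log10(p), 1)]))
            for p in periods_announced_together]

print(batch_to_notes([365.25]))                # a single discovery: quiet and soft
print(batch_to_notes([3.1, 3.3, 3.0, 2.9]))    # many similar short periods: loud, high
```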
The earlier version, created for the discovery of the first 4000 exoplanets, was featured as the Astronomy Picture of the Day on July 10, 2019 and has been viewed by over 1 million people on YouTube and Instagram! Check out the fantastic article about it by Phil Plait for Bad Astronomy and an excellent video explanation by Anton Petrov!