Sonification – Algorithmic Composition

Today’s algorithmic composition tutorial uses sonification as a composition tool. Sonification takes data that is typically non-musical and remaps it to musical parameters to create a composition.

Sonification can be used to hear information in a set of data that might otherwise be difficult to perceive; common examples include Geiger counters, sonar and medical monitoring (ECG). When creating sonification algorithmic compositions we are interested in creating interesting or aesthetically pleasing sounds and music by mapping non-musical data directly to musical parameters.

Here is some example output in PureData; the MaxMSP version works in the same way:
[video: example output of the finished patch]
You can build the patch yourself (recommended) in Max or Pd or download a version at the end of this post.

The Sonification Process:
There are four simple steps involved in creating a sonification composition.
1. Find some interesting data
2. Decide which musical parameters you want to map the data to
3. Fit the input data to the correct range for your chosen musical parameters (normalise)
4. Output the remapped data to MIDI synths, audio devices etc.
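As a quick preview of these four steps, here is a minimal sketch in Python; the Max and Pd patches below do the same job graphically, and the data values and pitch range here are invented placeholders:

data = [3.1, 4.7, 2.2, 6.8, 5.0, 7.9]            # step 1: some source data

def scale(x, in_lo, in_hi, out_lo, out_hi):
    # step 3: linearly normalise x from the input range to the output range
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

lo, hi = min(data), max(data)
for x in data:                                    # step 2: we map the data to pitch
    note = 60 + round(scale(x, lo, hi, 0, 12))    # one chromatic octave up from middle C
    print(note)                                   # step 4: send to a MIDI synth instead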
Step 1 – Find an interesting set of data
First of all we need to source a data set. You can use any data you like: stock markets, global temperatures, population changes, census data, economic information, record sales, server activity records, sports data – any set of numbers you can lay your hands on. Here are some possible sources: Research portal, JASA data, UK govt data.
 
It’s important to select your source data carefully. Data that has discernible patterns or interesting contours works particularly well for sonifications. Data that is largely static and unchanging will not be interesting; similarly, data that is noise-like is usually of limited use.

Here we’ve loaded some example data into a coll object in PureData; the nstore message stores the data with an index number and description, e.g. 1 temp.

 And similarly in Max:
[screenshot: the example data stored in a coll in Max]

Step 2 – Map Data to Musical Parameters

Now that the data is stored in a coll object, we can recall it at will. Any coll object with the same name will access the same set of data. Here we have a metro connected to a counter, which reads through the first series of data stored in our coll.

[screenshot: a metro and counter reading through the coll in Pd]

And in Max:
[screenshot: a metro and counter reading through the coll in Max]
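Outside the patching environments, you can think of the coll as an indexed store and the metro/counter pair as a loop that steps through one entry’s values. Here’s a rough Python analogue (the weather numbers are invented placeholders):

# A rough analogue of a coll: each entry has an index, a name and a list of values.
coll = {
    1: ("temp",     [4.1, 5.3, 7.8, 10.2, 13.5, 16.1]),
    2: ("sunshine", [54.0, 73.0, 110.0, 151.0, 190.0, 181.0]),
}

name, values = coll[1]        # recall the first series, like sending 1 to the coll
for v in values:              # like a metro driving a counter through the list
    print(name, v)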
The second step is the most creative part of the sonification process. It involves making creative decisions about which musical parameters to map the data to, e.g. pitch, rhythm, timbre and so on.

To start with we will map our data to pitch. However, this is a more involved question than it first appears: if we choose to map our data to pitch, we also have to choose how those pitches will be represented:

Frequency (20Hz – 20kHz): useful for direct control of synths, filter cutoffs etc., but not intuitively musical; it typically needs conversion if working with common musical scales.

MIDI notes (0 – 127): assumes a 12-note division of the octave; no distinction between C# and Db.

MIDIcents: OpenMusic uses a MIDIcents representation, equivalent to MIDI note numbers * 100, so middle C is 6000. This enables microtonal music and alternative divisions of the octave, for example dividing the octave into 24.

Pitchclass (0-11): each octave is equivalent, as are enharmonic notes, e.g. there is no separate representation of C# and Db.

Scale degree (1-7 for a diatonic scale): gives the scale degree; all octaves are equivalent. This is useful as we can deal with uneven steps in the scale easily.
Although the notes will sound the same, a C major arpeggio (C, E, G) from middle C could be represented:
Frequencies: 261.626Hz, 329.628Hz, 391.995Hz
MIDI notes: 60, 64, 67
MIDIcents: 6000, 6400, 6700
Pitches: C, E, G
Pitchclass: 0, 4, 7
Scale degree: 1, 3, 5
All of these are different ways of representing the same thing. How we choose to represent pitches influences how we think about the music we are creating.
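Here is a small Python sketch (my own illustration, not part of the patch) that converts the same arpeggio between several of these representations:

MAJOR = [0, 2, 4, 5, 7, 9, 11]             # major-scale pitchclasses

def midi_to_freq(note):
    # Equal-tempered frequency; A4 (MIDI note 69) = 440 Hz.
    return 440.0 * 2 ** ((note - 69) / 12)

for note in (60, 64, 67):                  # C, E, G from middle C
    midicents = note * 100                 # OpenMusic-style MIDIcents
    pitchclass = note % 12                 # octave-equivalent, 0-11
    degree = MAJOR.index(pitchclass) + 1   # scale degree in C major, 1-7
    print(f"{note}: {midi_to_freq(note):.3f} Hz, {midicents} mc, "
          f"pc {pitchclass}, degree {degree}")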
Step 3 – normalise data to fit musical parameters
In Max we can use the scale object. Scale maps an input range of float or integer values to an output range. In the simple form used here, with just the input and output ranges as arguments, the mapping is linear; with an additional exponent argument the number is converted according to the expression

y = b e^(-ac) e^(xc)

where x is the input, y is the output, a, b, and c are the three typed-in arguments, and e is the base of the natural logarithm (approximately 2.718282).
In PureData we can use an expression to achieve the same normalisation effect.
expr $f4 + ($f1 - $f2) * ($f5 - $f4) / ($f3 - $f2)

$f1 = number to convert
$f2 = input min
$f3 = input max
$f4 = output min
$f5 = output max
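The same linear mapping is easy to check outside the patch. Here’s a minimal Python sketch of it (the ranges are made up for illustration):

def scale(x, in_lo, in_hi, out_lo, out_hi):
    # Linear normalisation, like the Max scale object or the Pd expr above.
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

print(scale(10.0, 0.0, 30.0, 0, 12))   # -> 4.0, four semitones into a one-octave range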

Step 4 – Output

Adding in some MIDI objects allows us to hear our data sonified in a simple way:

[screenshot: MIDI output objects added in Max]

And in PureData, adding makenote and noteout objects and normalising our output data to one octave of a chromatic scale looks like this:

[screenshot: makenote and noteout added in Pd]

You should now have some basic musical output to your MIDI synth. Now the patch is set up, it’s easy to experiment with mapping the data to different pitch ranges. For example:

  • Try adjusting the normalisation (the scale object in Max or expr object in PureData) to map the data across two octaves instead of one by changing the output range from 12 to 24 – or any other pitch range.
  • The + 60 object sets our lowest pitch as MIDI note 60, this can be modified easily to set a different pitch range.
  • Invert the data range by having a high output minimum and a lower maximum, so ascending data creates descending melodic lines.
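With the scale function sketched in step 3, those three tweaks look like this in Python (made-up input range of 0-30):

def scale(x, in_lo, in_hi, out_lo, out_hi):
    # Linear normalisation, as in step 3.
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

x = 15.0                                   # a mid-range data value
print(60 + round(scale(x, 0, 30, 0, 24)))  # two octaves above middle C -> 72
print(48 + round(scale(x, 0, 30, 0, 12)))  # lower base pitch (48 = C3) -> 54
print(60 + round(scale(x, 0, 30, 12, 0)))  # inverted contour -> 66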
Mapping to Scales
As an alternative to mapping our data to chromatic pitches, we can use a different pitch representation and map our data to the notes of a diatonic scale.
First we need to define a few scales in a table in Max:
[screenshot: scale tables defined in Max]
The contents of tables are defined slightly differently in PureData, where the same scales look like this:
[screenshot: scale tables defined in Pd]
The above screenshots show the scale intervals for a Major, Harmonic Minor, Melodic Minor and Natural Minor scale, each stored in a separate table. These are stored as pitchclasses; if we need to transpose or modulate our composition to another key we can add to or subtract from these scale notes before the makenote object, e.g. to transpose up a tone to D add a + 2 object. Now rather than mapping to chromatic pitches we’ll map to scale pitches, so we’ll need to modify our normalisation to reflect this.
[screenshot: mapping data to scale degrees in Pd]

Here we’ve changed the output range to be 0 to 6 to reflect the seven different scale degrees of the major scale. Similarly in Max:
[screenshot: mapping data to scale degrees in Max]
We are now mapping our octave to a major scale. As we have already stored a number of scales, you could try changing the name of the table being looked up to map to an alternative scale, e.g. table harmonic-minor-scale.
We can also map the data to a scale over more than one octave.
[screenshot: mapping across several octaves of a scale in Pd]
Here we’ve changed the output range to be 0 to 20, using % (modulo) to give us the individual scale degrees and / (divide) to give us the octave. The process is the same in Max. Although this example maps across 3 octaves of the scale, there’s no requirement to map to full octaves; you could map your data to 2 1/2 octaves or any other pitch range by changing the output values of the scale object (expr in PureData):
[screenshot: mapping across several octaves of a scale in Max]
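In Python terms, the modulo/divide trick looks something like this (a sketch; the intervals match the major-scale table stored above):

MAJOR = [0, 2, 4, 5, 7, 9, 11]   # major-scale pitchclasses, as stored in the table
BASE = 60                        # lowest pitch: middle C

def step_to_midi(step):
    # Map a scale step (0-20 covers 3 octaves) to a MIDI note:
    # / gives the octave, % gives the scale degree.
    octave, degree = divmod(step, 7)
    return BASE + 12 * octave + MAJOR[degree]

print([step_to_midi(s) for s in (0, 6, 7, 14, 20)])   # -> [60, 71, 72, 84, 95]

Transposing to another key is then just an offset on the result, like the + 2 object mentioned above for D.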
So far we have mapped our data to MIDI pitches. As an alternative to MIDI output, the next example maps our wind data to frequencies and uses these to control the cutoff of a band-pass filter fed with noise, giving a sweeping wind-like effect. As we’re now working with frequencies rather than MIDI notes, we’ve changed the output of the scale object to remap any incoming data to between 200Hz and 1200Hz:
[screenshot: data driving a reson~ band-pass filter in Max]
The patch is set up in a very similar way in PureData, with a bp~ object as our band-pass filter rather than the reson~ object found in MaxMSP.
[screenshot: data driving a bp~ band-pass filter in Pd]
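The mapping itself is just another normalisation, now targeting a frequency range rather than note numbers. A Python sketch with invented wind-speed values:

def scale(x, in_lo, in_hi, out_lo, out_hi):
    # Linear normalisation, as in step 3.
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

wind = [3.0, 7.5, 12.0, 22.5, 30.0]   # invented wind-speed readings
cutoffs = [scale(w, 0.0, 30.0, 200.0, 1200.0) for w in wind]
print(cutoffs)   # -> [300.0, 450.0, 600.0, 950.0, 1200.0] Hz, driving the filter cutoff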
Although all of the notes are created by mapping the data directly to pitches, we have made a number of creative decisions along the way, so there are many ways of realising different sonifications of the same source data. So far we have only mapped to pitch; however, we still have a number of variables we can alter:
  • scale – chromatic, major, melodic minor, natural minor, harmonic minor (any other scales can be defined easily)
  • base pitch (the lowest pitch)
  • pitch range (range above our lowest pitch, e.g. 2 octaves)
  • the data set used to map to pitches
As with any composition we also have to make musical decisions concerning which instrument plays when, timbre and instrumentation, dynamics, tempo etc. In another post we’ll look at sonifying these elements from data too, but for now we’ll make these choices aesthetically.
Summary
In the YouTube example we have added several copies of the sonification patch, one for each of our data sets (temperature, sunshine, rainfall and windspeed). The interesting thing about using weather data is that we should hear some relationship between the four sets of data.
We’ve also added a score subpatch with sends and receives to turn on and off each section and control the variables mentioned above (min pitch, pitch range etc).

After getting the patch to work, play around with your own settings and modifications, and check out part two of this sonification algorithmic composition tutorial. Future posts will continue the idea of mapping and explore sonifying rhythm, timbre and other musical parameters. Have fun, and feel free to post links to your sonification compositions below.

Post a comment if you’ve any questions on this patch.

More algorithmic composition tutorials soon!

 

15 thoughts on “Sonification – Algorithmic Composition”

  1. Larry Bounds

    Is there any way to get a working copy of this program? I’m new to PD and this one is beyond my abilities. I did get the serial comp program to work! Thanks, LB

  2. fol5

    well, thanks for this, it’s great!
got a question: how do you load external data (from the cited site) into the message in Max or Pd for real-time processing?

thanks…

    1. Algorithmic Composer Post author

thanks for the message, in this example I just copied and pasted the data into a message, but you could load the data from a text file if it’s formatted correctly. I’ll look at this again in another post.

  3. Graham Dunne

    Thanks, as ever – very inspiring stuff. Just one question…in the youtube version, what exactly do you have going on in the velocity patch? I think it’s the only missing link…

    1. Algorithmic Composer Post author

      Thanks for the quote and well spotted! The velocity is a bit of a cheat as it’s not mapped from data, the patch chooses a random velocity between a set minimum and maximum value – this velocity range changes throughout the piece. The Pd patch is available for download at the end of the post, Max patch to follow soon!

  4. Adri

What does this message mean? warning (coll): no coll file ‘C:/Users/AA/Desktop/data’. Thanks a lot, I’m a beginner with pd.

    1. Algorithmic Composer Post author

      Hi
thanks for the message, you can save the contents of the coll object to a separate file to avoid loading it into the coll each time. You can do this by adding a ‘write’ message to the coll. However, you don’t need to do this and can just ignore the error – the patch should work fine. Have a look at the coll help file for more info.

  5. Lara P. Wilder

    AeSonToolkit is a Max/MSP framework for aesthetic sonification. It includes objects for importing data, for formatting and synchronising realtime data, for transforming data, for mapping data to sound and musical parameters and for synthesizing sound. It requires the free FTM extensions to Max/MSP ( ftm.ircam.fr ).

  6. Pingback: The ITP Sonic Lab
