Music is a physical phenomenon: we can hear and sometimes feel sound waves, and we can look at printed scores and chord charts and hold CDs, but these contain only representations of musical information. How we represent music, and each of its many characteristics, is an important decision for the algorithmic composer. When creating algorithmic music we have to choose how to represent musical information; this in turn shapes how we think about that information and affects what we can and cannot do with it.
Today’s algorithmic composition tutorial explores some of these issues, and our algorithmic composition again uses sonification: a mapping of non-musical data to musical parameters to create an algorithmic piece of music. The key to sonification is how the data is mapped to musical parameters, so in this post we’re using the same data with a more flexible interface that lets you experiment with that mapping.
Here’s a quick video demo of the Algorithmic Composition Sonification tool in action:
Here’s a breakdown of each of the sections; you can also download the patch at the end of the post.
In this post we’ll create an automatic breakbeat cutter in Max that plays randomised selections from a sampled drum loop. We’ll also use this together with a Markov melody generator. You can hear some sample algorithmic composition output in this example and download the patch at the end of the post:
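The Markov melody generator mentioned above can be sketched outside Max as well. Here is a minimal first-order Markov chain over MIDI note numbers; the transition table below is a made-up example for illustration, not the one used in the patch:

```python
import random

# Hypothetical transition table: for each MIDI note, the notes that
# may follow it. A real patch would derive these from source material.
transitions = {
    60: [62, 64, 67],
    62: [60, 64],
    64: [62, 65, 67],
    65: [64, 67],
    67: [60, 64, 65],
}

def markov_melody(start=60, length=16, seed=None):
    """Generate a melody by repeatedly choosing a next note from the
    current note's transition list (uniform choice, for simplicity)."""
    rng = random.Random(seed)
    note = start
    melody = [note]
    for _ in range(length - 1):
        note = rng.choice(transitions[note])
        melody.append(note)
    return melody

print(markov_melody(seed=1))
```

A weighted version would replace the uniform `rng.choice` with probabilities per transition, which is closer to how Markov chains are usually used for melody.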
Today’s algorithmic composition tutorial uses sonification as a composition tool. Sonification takes data that is typically not musical and remaps it to musical parameters to create a composition.
Sonification can be used to hear information in a set of data that might otherwise be difficult to perceive; common examples include Geiger counters, sonar and medical monitoring (ECG). When creating sonification-based algorithmic compositions we are interested in creating interesting or aesthetically pleasing sounds and music by mapping non-musical data directly to musical parameters.
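To make the mapping idea concrete, here is a minimal sketch of one possible mapping: scaling an arbitrary data series linearly onto a range of MIDI pitches. The data values and pitch range are placeholders, not the ones used in the patch:

```python
# Hypothetical data series (e.g. sensor readings) to be sonified.
data = [12.4, 15.1, 9.8, 21.3, 18.7, 14.2, 25.0, 11.5]

def sonify(values, low_note=48, high_note=84):
    """Map each data value linearly onto a MIDI note number.

    Linear-to-pitch is only one of many possible mappings; the choice
    of mapping is what shapes the resulting music.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant data
    notes = []
    for v in values:
        # Scale the value into the chosen MIDI pitch range.
        notes.append(low_note + round((v - lo) / span * (high_note - low_note)))
    return notes

print(sonify(data))
```

The same data could just as easily drive velocity, duration or timbre instead of pitch, which is exactly the kind of experimentation the patch's interface allows.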
Here is some example output in PureData; the MaxMSP version works in the same way:
You can build the patch yourself (recommended) in Max or Pd or download a version at the end of this post.
Today’s algorithmic composition tutorial looks at manipulating a tone row in Max and PureData to generate musical material. We’ll also look at one technique that’s useful for generating more fully formed compositions in Pd and Max than the musical sketches we’ve generated so far.
You can hear some sample output from the patch here:
You can download example patches at the end of the post.