Music Generation by Learning and Improvising in a Particular Style: A Few Examples

All the artificial examples on this page have been generated by the IP (Incremental Parsing and Generating) algorithm.

A few examples comparing the IP algorithm with the PST (Prediction Suffix Tree) algorithm can be found at:

http://www.ircam.fr/equipes/repmus/MachineImpro/IPPST
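
To give a rough idea of the principle behind the learning and generation, here is a minimal Python sketch. It is only a toy illustration, assuming an LZ-style incremental parse and simple continuation sampling over symbol sequences; the function names, the back-off scheme and the example melody are made up for this page, and this is not the Ircam/OpenMusic implementation.

import random
from collections import defaultdict

def incremental_parse(sequence):
    """Split the sequence into phrases, each phrase being the shortest
    prefix not already stored in the phrase dictionary (LZ78 style)."""
    seen, phrases, current = set(), [], ()
    for symbol in sequence:
        current += (symbol,)
        if current not in seen:      # a new motif: store it and start again
            seen.add(current)
            phrases.append(current)
            current = ()
    if current:                      # leftover suffix at the end of the input
        phrases.append(current)
    return phrases

def continuation_table(phrases):
    """For every proper prefix of a parsed phrase, count which symbols
    followed it: this plays the role of the style database."""
    table = defaultdict(lambda: defaultdict(int))
    for phrase in phrases:
        for i in range(len(phrase)):
            table[phrase[:i]][phrase[i]] += 1
    return table

def improvise(table, length, max_context=4):
    """Generate new material by sampling a continuation of the longest
    already-known context, backing off to shorter contexts if needed."""
    out = []
    while len(out) < length:
        for k in range(max_context, -1, -1):
            context = tuple(out[-k:]) if k else ()
            if context in table:
                symbols, weights = zip(*table[context].items())
                out.append(random.choices(symbols, weights=weights)[0])
                break
    return out

# Toy usage: learn from a short pitch sequence, then "improvise" on it.
melody = ["C4", "E4", "G4", "E4", "C4", "E4", "G4", "C5", "G4", "E4", "C4"]
table = continuation_table(incremental_parse(melody))
print(improvise(table, 16))

The real system naturally works on a much richer musical representation than these toy pitch names, but the learn-then-sample-continuations loop gives the flavour of the approach.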

This work has been carried out by S. Dubnov (Ben Gurion University, Israel), G. Assayag (Ircam-CNRS, France), O. Lartillot (Ircam-CNRS, France), and G. Bejerano (Hebrew University, Israel).

email: lartillo@ircam.fr, dubnov@bgumail.bgu.ac.il, assayag@ircam.fr, jill@cs.huji.ac.il


Example 5 was realized in collaboration with Marc Chemillier (University of Caen, France), who wrote the real-time generation and interaction environment into which the IP artificial improviser has been plugged.

Example 1

1.1 Original improvisation by Chick Corea

Listen to Corea (mp3)

1.2 Three machine improvisations generated after learning 1.1

Listen to Impro 1 (mp3)

Listen to Impro 2 (mp3)

Listen to Impro 3 (mp3)

Example 2

One machine improvisation generated after learning "Donna Lee" by Charlie Parker

Listen to Impro (mp3)

Comment:

The material comes from a MIDI file containing an arrangement of this standard (theme exposition plus chorus); only the sax and bass channels were used. The strange rhythmic behaviour of the bass is due to a bug in the quantization algorithm. We kept it because the somewhat free style that results is an interesting reminder of certain jazz tendencies of the sixties.

The machine impro begins with a recombinant variant of the theme, then dives into a bop-style chorus.

Example 3

One machine improvisation generated after learning J.S. Bach's Ricercar

Listen to Impro (mp3)

Comment:

Bach's ricercar is a six-voice fugue. The information is extremely constrained, so the analysis/generation algorithm has very few choices for continuations and tends to reproduce the original. But if you listen carefully, you'll hear that there are discrete bifurcation points where it recombines differently from the original.
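
This "few choices" intuition can be quantified with a toy measure (a sketch for illustration only, not part of the original system): for every context of a given length, count how many distinct symbols ever follow it. A value close to 1 means the generator has almost no choice but to replay the original.

from collections import defaultdict

def branching_factor(sequence, order=3):
    """Average number of distinct continuations observed after each
    context of the given length; 1.0 means fully deterministic."""
    followers = defaultdict(set)
    for i in range(order, len(sequence)):
        followers[tuple(sequence[i - order:i])].add(sequence[i])
    return sum(len(s) for s in followers.values()) / len(followers)

# A strictly repeating line leaves no choice points at all:
line = ["C", "D", "E", "F", "E", "D"] * 3
print(branching_factor(line))   # 1.0 here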

Example 4

A study in the style of jazz guitarist Pat Martino. Here's an idea of the original style (Blue Bossa):

Listen to Pat Martino (mp3)

The learning process was based on a MIDI file containing a transcription of Martino chorusing on Blue Bossa. After generating a few machine choruses and carefully choosing one that would fit, we mixed it back into Martino's audio recording, at a place where only the rhythm section was playing (plus some piano). The machine impro is played with an (ugly) synthetic MIDI sax sound.

Listen to Mix (mp3)

Comment:

That experiment was done in order to evaluate whether the techniques used could make sense in a performance situation, with a musician playing with his clone. The result is encouraging, but in a real-time experiment we would have to extract the beat and the harmony in order to control what is happening. In this case, we simply inserted the machine impro by hand, tuning the tempo so that it would fit the audio.

Example 5

A real-time performance experiment.

In this experiment, two systems are connected. On one side, there is a real-time environment that generates music (in this case, a funk-blues grid) and listens to what's coming through a MIDI input. A performer plays on a MIDI device, improvising voicings and choruses on the grid. When he stops playing, the recorded MIDI data is sent to the second environment (OpenMusic running our learning/improvising algorithms). The sequence is incrementally learned, continuously enriching the current style database. Because the rhythm section is generated, we know the beat/harmony segmentation, so what's really learned is more than in the previous examples: the machine learns the correlation between the beat structure, the harmonic structure, and what's played by the performer. When the machine improvisation occurs, it's aligned on the grid, which wouldn't happen so easily in the "normal" experiment.
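
As an illustration of what learning this correlation can mean in practice, here is a minimal Python sketch. The class, the function, the grid and the data layout are hypothetical, not the actual OpenMusic code: each incoming note is simply tagged with the chord of the grid playing under it and with its position in the bar, and these enriched symbols are what the style database would store.

from dataclasses import dataclass

@dataclass
class NoteEvent:
    onset: float      # onset time, in beats from the start of the grid
    pitch: int        # MIDI pitch number
    duration: float   # duration in beats

def annotate(events, grid, beats_per_bar=4):
    """Tag every note with the chord playing underneath it and with its
    position in the bar, so that the learner sees (chord, beat, pitch)
    symbols instead of bare pitches and can align its output on the grid."""
    symbols = []
    for ev in events:
        bar = int(ev.onset // beats_per_bar)
        beat_in_bar = ev.onset % beats_per_bar
        symbols.append((grid[bar % len(grid)], round(beat_in_bar, 2), ev.pitch))
    return symbols

# Hypothetical 12-bar funk-blues grid and a two-note performer phrase.
grid = ["F7", "Bb7", "F7", "F7", "Bb7", "Bb7", "F7", "F7", "C7", "Bb7", "F7", "C7"]
phrase = [NoteEvent(0.0, 65, 0.5), NoteEvent(4.5, 70, 1.0)]
print(annotate(phrase, grid))

Such annotated symbols could then be fed to the same kind of incremental-parsing learner sketched above, so that each generated continuation comes with the beat and chord on which it should be placed.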

The two systems are synchronized so as to provide a smooth real-time performance situation. As soon as the style database allows it, the machine improviser begins playing, so the performer can play in interaction with it, his playing still being learned in parallel: the process is thus cumulative, and becomes rich and interesting after a while.

Sequence 5.1.

You'll hear the real-time system playing the grid (bass, drums, sound samples, and generated chords). The smooth generated chords in the background are algorithmically computed, so they are ever-changing, but they come along with a harmonic label so that the learning process can work.

After a few bars, the human performer begins playing (normal piano sound). What he plays is learned into the style database, but the machine improviser is turned off, so you won't hear it.

Listen to Sequence 5.1

Sequence 5.2.

The machine improviser begins to play. It doesn't know much at that point, so it's kind of dumb. The sound used is a "ducky" synthesizer sound, so you'll recognise it easily.

After a while, the human performer (piano sound) begins to interact with the artificial one, further enriching the database.

Listen to Sequence 5.2

Sequence 5.3.

The human performer introduces weird chords (slightly out of harmony/rhythm).

Listen to how the artificial improviser later inserts this new material into its playing, and how the performer gets into more complex interaction.

Listen to Sequence 5.3

Sequence 5.4.

In this sequence, the human performer went to get a cup of coffee, so the artificial one is playing on its own.

Listen to Sequence 5.4