We demonstrate a system that creates a real-time accompaniment for a live musician performing a non-improvisatory piece of music. The system listens to the live player by performing a hidden Markov model analysis of the player's acoustic signal. A belief network combines this information with a musical score and past rehearsals to create a sequence of evolving predictions for future note onsets in both the soloist and accompaniment. These predictions guide the time-stretched resynthesis of prerecorded orchestral audio using a phase vocoder.
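The final step above relies on phase-vocoder time stretching: changing an audio signal's duration without changing its pitch by interpolating STFT magnitudes while accumulating phase coherently across frames. The following is a minimal NumPy sketch of that general technique, not the paper's implementation; all function names, frame sizes, and the test signal are illustrative assumptions.

```python
import numpy as np

N_FFT, HOP = 1024, 256  # illustrative analysis parameters

def stft(x):
    """Windowed short-time Fourier transform (one row per frame)."""
    win = np.hanning(N_FFT)
    return np.array([np.fft.rfft(win * x[s:s + N_FFT])
                     for s in range(0, len(x) - N_FFT, HOP)])

def istft(S):
    """Inverse STFT by windowed overlap-add."""
    win = np.hanning(N_FFT)
    out = np.zeros(HOP * (len(S) - 1) + N_FFT)
    norm = np.zeros_like(out)
    for i, frame in enumerate(S):
        out[i * HOP:i * HOP + N_FFT] += win * np.fft.irfft(frame)
        norm[i * HOP:i * HOP + N_FFT] += win ** 2
    return out / np.maximum(norm, 1e-8)

def phase_vocoder(S, rate):
    """Stretch spectrogram S in time by 1/rate, keeping phase coherent."""
    # Expected per-hop phase advance of each frequency bin
    omega = 2 * np.pi * np.arange(S.shape[1]) * HOP / N_FFT
    steps = np.arange(0, len(S) - 1, rate)
    out = np.zeros((len(steps), S.shape[1]), dtype=complex)
    phase = np.angle(S[0])
    for t, step in enumerate(steps):
        i, frac = int(step), step - int(step)
        # Interpolate magnitude between neighbouring analysis frames
        mag = (1 - frac) * np.abs(S[i]) + frac * np.abs(S[i + 1])
        out[t] = mag * np.exp(1j * phase)
        # Measured frame-to-frame phase deviation, wrapped to [-pi, pi]
        dphase = np.angle(S[i + 1]) - np.angle(S[i]) - omega
        dphase -= 2 * np.pi * np.round(dphase / (2 * np.pi))
        phase += omega + dphase
    return out

sr = 8000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t)                    # one second of A440
y_slow = istft(phase_vocoder(stft(y), rate=0.5))   # roughly twice as long, same pitch
```

In the accompaniment setting, the stretch rate would be updated continually from the onset predictions rather than fixed, so the orchestral recording speeds up or slows down to track the soloist.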
Subjects: 1.1 Art and Music; 3.4 Probabilistic Reasoning