A human and a computer play together, on a grand piano, music for two pianists; for example, a Schubert piece. The computer “listens” to the human player: it recognises, follows, anticipates, and adapts to expressive nuances and timing details, adding its own playing decisions and thereby becoming a truly musical companion. This computational music accompanist permits, for the first time, naturally expressive co-performance between a human and a machine, based on learned computational models of expressivity.
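
To give a flavour of what “following” and “adapting” involve, here is a minimal, purely illustrative sketch: it estimates the human’s local tempo from recent note onsets and predicts when the accompaniment’s next note should sound. All names and the simple smoothing scheme are assumptions made for this sketch, not the ACCompanion’s actual tracking and prediction models.

```python
# Illustrative sketch only: follow the human's tempo and schedule the
# machine's next note. The real system is far more sophisticated.
from dataclasses import dataclass

@dataclass
class Onset:
    score_beat: float   # position of the note in the score, in beats
    perf_time: float    # when the human actually played it, in seconds

def estimate_tempo(onsets: list[Onset], smoothing: float = 0.5) -> float:
    """Exponentially smoothed seconds-per-beat over consecutive onsets."""
    spb = 0.5  # prior: 120 BPM
    for prev, cur in zip(onsets, onsets[1:]):
        beats = cur.score_beat - prev.score_beat
        if beats > 0:
            local = (cur.perf_time - prev.perf_time) / beats
            spb = smoothing * spb + (1.0 - smoothing) * local
    return spb

def schedule_next(onsets: list[Onset], next_beat: float) -> float:
    """Predict the clock time at which to play the accompaniment note
    written at score position `next_beat`."""
    last = onsets[-1]
    return last.perf_time + (next_beat - last.score_beat) * estimate_tempo(onsets)

# Example: the human is slowing down; the accompanist stretches with them.
played = [Onset(0.0, 0.0), Onset(1.0, 0.50), Onset(2.0, 1.05), Onset(3.0, 1.67)]
print(f"play beat 4 at ~{schedule_next(played, 4.0):.2f}s")
```

The smoothing factor is one simple way to trade responsiveness against stability; a real accompanist must additionally cope with wrong notes, ornaments, and jumps in the score.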

This is not an artwork but an outcome of long-term scientific research that aims to contribute, with computational and AI methods, to a deeper understanding of the phenomenon of expressivity in music. Expressive performance is at the heart of music: it makes music come alive, transforms it, and can deeply move us. Our research has produced many new insights; the ACCompanion is a by-product, but a beautiful and possibly useful one. It is arguably the first demonstration of a machine playing together with a human in a truly musical way.

The computer’s playing is based on performance principles that the machine discovered itself, by learning from a unique data resource, graciously given to us by Magaloff’s widow, Mme. Irene M.: Chopin’s complete solo piano works as played by the late Nikita Magaloff in Vienna (1989) on a computer-monitored Bösendorfer concert grand.
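
As a toy illustration of what “discovering a performance principle from data” can mean, and emphatically not the project’s actual models, the sketch below fits, by least squares, how strongly a pianist’s loudness grows with melodic pitch height. The data points are invented.

```python
# Toy illustration: learn one expressive "rule" (higher notes played louder)
# from performance data via ordinary least squares. Invented data below;
# the real models use much richer score features and measured performances.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least-squares fit y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# Invented example data: MIDI pitch of melody notes vs. played MIDI velocity.
pitches    = [60, 64, 67, 72, 76, 79]
velocities = [52, 58, 61, 70, 75, 80]

a, b = fit_line(pitches, velocities)
print(f"velocity ≈ {a:.2f} * pitch + ({b:.2f})")
```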

NOTE: The demo links show an early prototype; the version we will demonstrate will be more reactive, precise, and musical.

Tags: Artificial Intelligence, Music, Computational Music.
