
Steerable Music Generation which Satisfies Long-Range Dependency Constraints

By: Paul Bodily and Dan Ventura
Open Access
|Mar 2022

Abstract

Although music is full of repetitive motifs and themes, artificially intelligent temporal sequence models have yet to demonstrate the ability to model or generate musical compositions that satisfy the steerable, long-range constraints needed to evoke such repetitions. Markovian approaches inherently assume a strictly limited range of memory, while neural approaches—despite recent advances in evoking long-range dependencies—remain largely unsteerable. More recent models attempt to evoke repetitive motifs by imposing unary constraints at intervals or by collating copies of musical segments. Although the results of these methods satisfy long-range dependencies, they come with significant—potentially prohibitive—sacrifices in the musical coherence of the generated composition or in the breadth of satisfying compositions the model can create. We present regular non-homogeneous Markov models as a solution to the long-range dependency problem, using relational automata to enforce binary constraints and thereby compose music with repeating motifs. The solution we present preserves musical coherence (i.e., Markovian constraints) for the duration of the generated compositions and significantly increases the range of satisfying compositions that can be generated.
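To make the underlying problem concrete, the following is a minimal sketch of enforcing a binary "position j must repeat position i" constraint on a first-order Markov model via rejection sampling. This is an illustration of the constraint-satisfaction problem the abstract describes, not the paper's regular NHMM/relational-automaton construction; the pitch alphabet and transition probabilities are hypothetical.

```python
import random

# Toy first-order Markov model over pitches (hypothetical probabilities,
# not taken from the paper).
T = {
    "C": {"D": 0.5, "E": 0.3, "C": 0.2},
    "D": {"E": 0.5, "C": 0.5},
    "E": {"C": 0.6, "D": 0.4},
}

def sample_next(state, allowed=None, rng=random):
    """Sample a Markov successor of `state`; if `allowed` is given,
    restrict to those states and renormalize (a unary projection of a
    binary constraint). Returns None if no successor satisfies it."""
    dist = T[state]
    if allowed is not None:
        dist = {s: p for s, p in dist.items() if s in allowed}
        if not dist:
            return None  # constraint unsatisfiable from this state
    r = rng.random() * sum(dist.values())
    for s, p in dist.items():
        r -= p
        if r <= 0:
            break
    return s

def generate(length, binary_constraints, start="C", max_tries=1000, rng=random):
    """Generate a sequence in which each (i, j) pair in
    `binary_constraints` carries equal states, by restarting whenever a
    constrained position cannot be satisfied. Naive rejection like this
    is exactly what principled constrained models avoid."""
    for _ in range(max_tries):
        seq = [start]
        ok = True
        for pos in range(1, length):
            allowed = None
            for i, j in binary_constraints:
                if pos == j:
                    allowed = {seq[i]}  # force a repeat of position i
            nxt = sample_next(seq[-1], allowed, rng)
            if nxt is None:
                ok = False
                break
            seq.append(nxt)
        if ok:
            return seq
    raise RuntimeError("no satisfying sequence found")

# A repeated motif: position 4 must restate position 0.
melody = generate(8, [(0, 4)])
```

Every step of the result remains a valid Markov transition, so local coherence is preserved while the long-range repetition holds; the paper's contribution is achieving this without the wasted work and reduced solution coverage of rejection-style approaches.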

DOI: https://doi.org/10.5334/tismir.97 | Journal eISSN: 2514-3298
Language: English
Submitted on: Feb 28, 2021
Accepted on: Feb 11, 2022
Published on: Mar 25, 2022
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2022 Paul Bodily, Dan Ventura, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.