Abstract
Until recently, most approaches to music generation were based on deductive logic: generative rules were devised on the basis of musicians' preferences, subjective appreciation, and dominant music theories. Machine learning (ML) introduced a paradigm shift: vast datasets of existing music are used to train neural networks capable of generating new compositions, supposedly without embedding predefined musical rules. We first outline how rule-based systems depend on a series of reductionist processes and assumptions about music that constrain what can be generated. We then examine ML-based generative music systems and show that they remain unable to generate the full theoretical space of musical possibilities, that they are still grounded in reductionist processes, and that their soundness is still affected by unquestioned assumptions. We also identify the limitations of the semantic bridges used to form musical meaning and of the epistemic framework of cascading modules. Finally, we propose that the artistic potential of ML systems may lie beyond attempts to replicate human music-making methods.
