Abstract
Being able to accurately predict whether note combinations will be perceived as consonant or dissonant is important to the computational study of music. In this paper, we present and evaluate computational tools aimed at characterising the consonance of simultaneous symbolic chords. While many previous computational approaches to consonance aim to simulate perceptual or psychoacoustic processes, here we prioritise predictive performance and flexibility – aiming to provide accurate measures of consonance even when only limited information is available (such as that given by chord labels). We model consonance based on the individual contributions of all pairwise combinations of tones that make up any given chord. Each pair is assigned a weight, optimised using existing behavioural data on perceived consonance (Bowling et al., 2018). We compare two sets of weights: one based on the 12 intervals within an octave, and one based on interval classes (the smallest interval between pitch classes, ignoring the octave) – yielding measures that are invariant to different chord voicings. We also compare methods of combining weights, either summing or averaging (i.e. normalising the sum by the number of pairs) the values for all pairwise intervals/classes. Finally, we investigate the effects on performance of the conditional inclusion or exclusion of intervals within chords. These measures of consonance were used to predict ratings from three prior behavioural experiments with Western listeners. The optimised measures correlated strongly with ratings, whether based on intervals or interval classes, and consistently outperformed previous computational models of consonance.
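The pairwise approach described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight values here are arbitrary placeholders standing in for the optimised weights (which were fit to the Bowling et al., 2018 ratings and are not reproduced here), and the function names are hypothetical.

```python
from itertools import combinations

# Placeholder weights, one per interval class (0-6). These are illustrative
# values only; the actual optimised weights come from fitting to behavioural
# consonance ratings.
ILLUSTRATIVE_WEIGHTS = {0: 1.0, 1: -1.0, 2: -0.3, 3: 0.3,
                        4: 0.6, 5: 0.5, 6: -0.5}

def interval_class(a: int, b: int) -> int:
    """Smallest interval between two MIDI pitches, ignoring the octave (0-6)."""
    d = abs(a - b) % 12
    return min(d, 12 - d)

def consonance(pitches, average=True):
    """Combine the weights of all pairwise interval classes in a chord.

    With average=True the sum is normalised by the number of pairs,
    making chords of different sizes comparable.
    """
    weights = [ILLUSTRATIVE_WEIGHTS[interval_class(a, b)]
               for a, b in combinations(pitches, 2)]
    return sum(weights) / len(weights) if average else sum(weights)

# C major triad (C4, E4, G4) yields interval classes 4, 3 and 5;
# because interval classes ignore octave and voicing, any inversion
# or re-voicing of the triad gets the same score.
print(round(consonance([60, 64, 67]), 3))
```

Note that the interval-class variant is what makes the measure voicing-invariant: `consonance([60, 64, 67])` and `consonance([64, 67, 72])` (the first inversion) produce identical values.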
