
Nuanced Music Emotion Recognition via a Semi‑Supervised Multi‑Relational Graph Neural Network

Open Access | Jun 2025

Abstract

Music emotion recognition (MER) seeks to understand the complex emotional landscapes elicited by music, acknowledging music’s profound social and psychological roles beyond traditional tasks such as genre classification or content similarity. MER relies heavily on high‑quality emotional annotations, which serve as the foundation for training models to recognize emotions. However, collecting these annotations is complex and costly, so large‑scale MER datasets remain scarce. Recent MER approaches to automatic emotion extraction have focused on learning track representations in a supervised manner. However, these approaches mostly rely on simplified emotion models, whether due to limited datasets or the perceived lack of need for more sophisticated ones, and they ignore hidden inter‑track relations, which are beneficial in a semi‑supervised learning setting. This paper proposes a novel approach to MER that constructs a multi‑relational graph encapsulating different facets of music. We leverage graph neural networks to model intricate inter‑track relationships and to capture structurally induced representations from user data, such as listening histories, genres, and tags. Our model, the semi‑supervised multi‑relational graph neural network for emotion recognition (SRGNN‑Emo), combines graph‑based modeling with semi‑supervised learning, using rich user data to extract nuanced emotional profiles from music tracks. In extensive experiments, SRGNN‑Emo achieves significant improvements in R² and root mean squared error when predicting the intensity of nine continuous emotions (Geneva Emotional Music Scale), demonstrating its superior capability in capturing and predicting complex emotional expressions in music.
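
To make the setup described in the abstract concrete, the following is a minimal sketch of a semi‑supervised multi‑relational GNN regressor for nine GEMS emotion intensities, written with PyTorch Geometric. It is not the authors’ SRGNN‑Emo implementation: the class name, the choice of RGCNConv as the relational layer, the three assumed relation types (listening co‑occurrence, shared genre, shared tag), and all dimensions are illustrative assumptions based only on the abstract.

```python
# Hypothetical sketch, assuming PyTorch Geometric. A multi-relational GNN
# regresses nine continuous GEMS emotion intensities for track nodes;
# training is semi-supervised: every track participates in message passing,
# but the loss is computed on labeled tracks only.
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv

NUM_RELATIONS = 3   # assumed: listening co-occurrence, shared genre, shared tag
NUM_EMOTIONS = 9    # GEMS: nine continuous emotion dimensions

class SRGNNEmoSketch(torch.nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        # RGCNConv learns separate weights per relation type, so the
        # different inter-track relations are aggregated distinctly.
        self.conv1 = RGCNConv(in_dim, hidden_dim, num_relations=NUM_RELATIONS)
        self.conv2 = RGCNConv(hidden_dim, hidden_dim, num_relations=NUM_RELATIONS)
        self.head = torch.nn.Linear(hidden_dim, NUM_EMOTIONS)

    def forward(self, x, edge_index, edge_type):
        h = F.relu(self.conv1(x, edge_index, edge_type))
        h = F.relu(self.conv2(h, edge_index, edge_type))
        return self.head(h)  # one predicted intensity per GEMS dimension

def train_step(model, data, optimizer):
    """One semi-supervised step: unlabeled tracks still contribute
    structure via message passing, but only labeled tracks
    (data.labeled_mask) contribute to the regression loss."""
    model.train()
    optimizer.zero_grad()
    pred = model(data.x, data.edge_index, data.edge_type)
    loss = F.mse_loss(pred[data.labeled_mask], data.y[data.labeled_mask])
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, evaluation would compare `pred` against held‑out GEMS annotations per emotion dimension using R² and RMSE, the metrics reported in the abstract.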

DOI: https://doi.org/10.5334/tismir.235 | Journal eISSN: 2514-3298
Language: English
Submitted on: Oct 29, 2024
Accepted on: Apr 30, 2025
Published on: Jun 11, 2025
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2025 Andreas Peintner, Marta Moscati, Yu Kinoshita, Richard Vogl, Peter Knees, Markus Schedl, Hannah Strauss, Marcel Zentner, Eva Zangerle, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.