A Bimodal Deep Model to Capture Emotions from Music Tracks
Open Access | Mar 2025

Abstract

This work aims to develop a deep model for automatically labeling music tracks with the emotions they induce. The machine learning architecture consists of two components: one dedicated to lyric processing based on Natural Language Processing (NLP) and another devoted to music processing. The two components are combined at the decision level. To this end, a range of neural networks is explored for emotion extraction from both lyrics and music. For lyric classification, three architectures are compared: a four-layer neural network, FastText, and a transformer-based approach. For music classification, the architectures investigated include InceptionV3, a collection of models from the ResNet family, and a joint architecture combining Inception and ResNet. An SVM serves as a baseline in both branches. The study examines three datasets of songs accompanied by lyrics, with MoodyLyrics4Q selected and preprocessed for model training. The bimodal approach, incorporating both the lyrics and audio modules, achieves a classification accuracy of 60.7% in identifying emotions evoked by music pieces. The MoodyLyrics4Q dataset used in this study covers musical pieces from diverse genres, including rock, jazz, electronic, pop, blues, and country, and the algorithms perform reliably across all of them, indicating robustness to a wide variety of musical styles.
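The decision-level combination of the two branches can be pictured with a minimal sketch in Python. The class names, the weighted-average rule, and the late_fusion helper below are illustrative assumptions for a late-fusion scheme in general, not the authors' implementation, which the abstract does not detail.

import numpy as np

# Emotion classes assumed from a four-quadrant (4Q) labeling; illustrative only.
CLASSES = ["happy", "angry", "sad", "relaxed"]

def late_fusion(p_lyrics: np.ndarray, p_audio: np.ndarray, w_lyrics: float = 0.5) -> str:
    """Decision-level (late) fusion of two per-class probability vectors.

    p_lyrics, p_audio: softmax outputs of the lyrics and audio classifiers.
    w_lyrics: weight given to the lyrics branch (assumed; the exact fusion
    rule is not specified in the abstract).
    """
    fused = w_lyrics * p_lyrics + (1.0 - w_lyrics) * p_audio
    return CLASSES[int(np.argmax(fused))]

# Example: the lyrics model leans toward "sad", the audio model toward "relaxed".
print(late_fusion(np.array([0.1, 0.1, 0.5, 0.3]),
                  np.array([0.1, 0.1, 0.3, 0.5])))

In such a scheme, each branch is trained on its own modality and only the per-class probabilities are merged, which matches the abstract's description of combining the lyric and music components at the decision level.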

Language: English
Page range: 215 - 235
Submitted on: Nov 28, 2024
Accepted on: Feb 21, 2025
Published on: Mar 18, 2025
Published by: SAN University
In partnership with: Paradigm Publishing Services
Publication frequency: 4 times per year

© 2025 Jan Tobolewski, Michał Sakowicz, Jordi Turmo, Bożena Kostek, published by SAN University
This work is licensed under the Creative Commons Attribution 4.0 License.