
Depict or Discern? Fingerprinting Musical Taste from Explicit Preferences

Open Access | Jan 2024

Abstract

The notion of personal taste in general, and musical taste in particular, is pervasive in the literature on recommender systems, as well as in cultural sociology and psychology. However, definitions and measurement methods differ strongly from one study to another. In this paper, we question two different views on taste that can be retrieved from the literature: either something that is distinctive of an individual, or something that essentially captures the extent and diversity of their preferences. Relying upon a dataset that contains the complete list of musical items liked by individual users of a streaming service, as well as streaming logs, we propose two methods to compute fingerprints of their musical taste. The first one explicitly targets a uniqueness property, aiming at selecting items that uniquely identify a user in the crowd. The second approach focuses on a representativeness task that is fundamental in recommendation, i.e. building a summary depiction of the user's preferences that can be leveraged to propose other items of interest. We demonstrate that the two methods lead to conflicting solutions, hence highlighting the need to precisely acknowledge which point of view applies when addressing a computational question related to taste. We also raise the question of users' identifiability through their online activity on music streaming platforms, and beyond.

DOI: https://doi.org/10.5334/tismir.158 | Journal eISSN: 2514-3298
Language: English
Submitted on: Dec 23, 2022
Accepted on: Nov 20, 2023
Published on: Jan 22, 2024
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2024 Kristina Matrosova, Manuel Moussallam, Thomas Louail, Olivier Bodini, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.