PBSCR: The Piano Bootleg Score Composer Recognition Dataset
By: Arhan Jain, Alec Bunn, Austin Pham, and TJ Tsai
Open Access | Sep 2024

References

  1. Anan, Y., Hatano, K., Bannai, H., Takeda, M., and Satoh, K. (2012). Polyphonic music classification on symbolic data using dissimilarity functions. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 229–234).
  2. Brinkman, A., Shanahan, D., and Sapp, C. (2016). Musical stylometry, machine learning and attribution studies: A semi-supervised approach to the works of Josquin. In Proceedings of the Biennial International Conference on Music Perception and Cognition (pp. 91–97).
  3. Chou, Y.-H., Chen, I.-C., Chang, C.-J., Ching, J., and Yang, Y.-H. (2021). MidiBERT-piano: Large-scale pre-training for symbolic music understanding. arXiv preprint arXiv:2107.05223.
  4. Costa, L. F. P., and Salazar, A. E. C. (2019). Dodecaphonic composer identification based on complex networks. In 2019 8th Brazilian Conference on Intelligent Systems (BRACIS) (pp. 765–770).
  5. Deepaisarn, S., Buaruk, S., Chokphantavee, S., Prathipasen, P., and Sornlertlamvanich, V. (2022). Visual-based musical data representation for composer classification. In IEEE International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP) (pp. 1–5).
  6. Deepaisarn, S., Chokphantavee, S., Prathipasen, P., Buaruk, S., and Sornlertlamvanich, V. (2023). NLP-based music processing for composer classification. Scientific Reports, 13(1), 13228. https://doi.org/10.1038/s41598-023-40673-3
  7. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. (2021). An image is worth 16×16 words: Transformers for image recognition at scale. In International Conference on Learning Representations.
  8. Foscarin, F., Hoedt, K., Praher, V., Flexer, A., and Widmer, G. (2022). Concept-based techniques for “musicologist-friendly” explanations in a deep music classifier. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 876–883).
  9. Fradet, N., Gutowski, N., Chhel, F., and Briot, J.-P. (2023). Byte pair encoding for symbolic music. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (pp. 2001–2020).
  10. Gage, P. (1994). A new algorithm for data compression. C Users Journal, 12(2), 23–38.
  11. Goienetxea, I., Mendialdua, I., and Sierra, B. (2018). On the use of matrix based representation to deal with automatic composer recognition. In Australasian Joint Conference on Artificial Intelligence (pp. 531–536). Springer.
  12. Hajj, N., Filo, M., and Awad, M. (2018). Automated composer recognition for multi-voice piano compositions using rhythmic features, n-grams and modified cortical algorithms. Complex & Intelligent Systems, 4, 55–65.
  13. He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R. (2022). Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 16000–16009).
  14. Hedges, T., Roy, P., and Pachet, F. (2014). Predicting the composer and style of jazz chord progressions. Journal of New Music Research, 43(3), 276–290.
  15. Herlands, W., Der, R., Greenberg, Y., and Levin, S. (2014). A machine learning approach to musically meaningful homogeneous style classification. In Proceedings of the AAAI Conference on Artificial Intelligence.
  16. Herremans, D., Martens, D., and Sörensen, K. (2016). Composer classification models for music-theory building. In Computational Music Analysis (pp. 369–392).
  17. Herremans, D., Sörensen, K., and Martens, D. (2015). Classification and generation of composer-specific music using global feature models and variable neighborhood search. Computer Music Journal, 39(3), 71–91.
  18. Hontanilla, M., Pérez-Sancho, C., and Iñesta, J. M. (2013). Modeling musical style with language models for composer recognition. In Proceedings of the 6th Iberian Conference on Pattern Recognition and Image Analysis (pp. 740–748). Springer.
  19. Hsiao, W.-Y., Liu, J.-Y., Yeh, Y.-C., and Yang, Y.-H. (2021). Compound word transformer: Learning to compose full-song music over dynamic directed hypergraphs. In Proceedings of the AAAI Conference on Artificial Intelligence (pp. 178–186).
  20. Huang, Y.-S., and Yang, Y.-H. (2020). Pop music transformer: Beat-based modeling and generation of expressive pop piano compositions. In Proceedings of the 28th ACM International Conference on Multimedia (pp. 1180–1188).
  21. Kempfert, K. C., and Wong, S. W. (2020). Where does Haydn end and Mozart begin? Composer classification of string quartets. Journal of New Music Research, 49(5), 457–476.
  22. Kher, R. (2022). Music composer recognition from MIDI representation using deep learning and n-gram based methods (Master’s thesis). Dalhousie University.
  23. Kim, S., Lee, H., Park, S., Lee, J., and Choi, K. (2020). Deep composer classification using symbolic representation. In Late-Breaking Demo Session of the International Society for Music Information Retrieval Conference.
  24. Kong, Q., Choi, K., and Wang, Y. (2020). Large-scale MIDI-based composer classification. arXiv preprint arXiv:2010.14805. https://arxiv.org/abs/2010.14805
  25. Kong, Q., Li, B., Chen, J., and Wang, Y. (2022). GiantMIDI-Piano: A large-scale MIDI dataset for classical piano music. Transactions of the International Society for Music Information Retrieval, 5(1), 87–98.
  26. Kumar, A., Raghunathan, A., Jones, R., Ma, T., and Liang, P. (2022). Fine-tuning can distort pretrained features and underperform out-of-distribution. In International Conference on Learning Representations.
  27. Li, Z., Gong, R., Chen, Y., and Su, K. (2023). Fine-grained position helps memorizing more, a novel music compound transformer model with feature interaction fusion. In Proceedings of the AAAI Conference on Artificial Intelligence (pp. 5203–5212).
  28. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. https://arxiv.org/abs/1907.11692
  29. McKay, C., and Fujinaga, I. (2006). jSymbolic: A feature extractor for MIDI files. In Proceedings of the International Computer Music Conference.
  30. Micchi, G. (2018). A neural network for composer classification. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR).
  31. Pape, L., de Gruijl, J., and Wiering, M. (2008). Democratic liquid state machines for music recognition. In Speech, Audio, Image and Biomedical Signal Processing Using Neural Networks (pp. 191–215). Springer.
  32. Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., and Sutskever, I. (2021). Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (pp. 8748–8763).
  33. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9.
  34. Raffel, C. (2016). Learning-Based Methods for Comparing Sequences, With Applications to Audio-to-MIDI Alignment and Matching (Doctoral dissertation). Columbia University.
  35. Revathi, A., Vashista, D. V., Teja, K. S. S., and Nagakrishnan, R. (2020). A robust music composer identification system based on cepstral feature and models. In Advances in Communication Systems and Networks (pp. 35–44).
  36. Saboo, K. N. C. P., and Rajendran, B. (2015). Composer classification based on temporal coding in adaptive spiking neural networks. In International Joint Conference on Neural Networks (IJCNN) (pp. 1–8).
  37. Sadeghian, P., Wilson, C., Goeddel, S., and Olmsted, A. (2017). Classification of music by composer using fuzzy min–max neural networks. In Proceedings of the 12th International Conference for Internet Technology and Secured Transactions (ICITST) (pp. 189–192).
  38. Shuvaev, S., Giaffar, H., and Koulakov, A. A. (2017). Representations of sound in deep learning of audio features from music. arXiv preprint arXiv:1712.02898. https://arxiv.org/abs/1712.02898
  39. Simonetta, F., Llorens, A., Serrano, M., García-Portugués, E., and Torrente, Á. (2023). Optimizing feature extraction for symbolic music. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR).
  40. Takamoto, A., Yoshida, M., Umemura, K., and Ichikawa, Y. (2018). Feature selection for composer classification method using quantity of information. In IEEE International Conference on Knowledge and Smart Technology (KST) (pp. 30–33).
  41. Tsai, T., and Ji, K. (2020). Composer style classification of piano sheet music images using language model pretraining. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 176–183).
  42. Tsai, T., Yang, D., Shan, M., Tanprasert, T., and Jenrungrot, T. (2020). Using cell phone pictures of sheet music to retrieve MIDI passages. IEEE Transactions on Multimedia, 22(12), 3115–3127.
  43. Velarde, G., Chacón, C. C., Meredith, D., Weyde, T., and Grachten, M. (2018). Convolution-based classification of audio and symbolic representations of music. Journal of New Music Research, 47(3), 191–205.
  44. Velarde, G., Weyde, T., Chacón, C. E. C., Meredith, D., and Grachten, M. (2016). Composer recognition based on 2d-filtered piano-rolls. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 115–121).
  45. Verma, H., and Thickstun, J. (2019). Convolutional composer classification. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 549–556).
  46. Walwadkar, D., Shatri, E., Timms, B., and Fazekas, G. (2022). CompIdNet: Sheet music composer identification using deep neural network. In Proceedings of the 4th International Workshop on Reading Music Systems (pp. 9–14).
  47. Wołkowicz, J., and Kešelj, V. (2013). Evaluation of n-gram-based classification approaches on classical music corpora. In International Conference on Mathematics and Computation in Music (pp. 213–225).
  48. Wu, S., Yu, D., Tan, X., and Sun, M. (2023). CLaMP: Contrastive language-music pre-training for cross-modal symbolic music information retrieval. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 157–165).
  49. Yang, D., Goutam, A., Ji, K., and Tsai, T. (2022). Large-scale multimodal piano music identification using marketplace fingerprinting. Algorithms, 15(5), 146.
  50. Yang, D., Ji, K., and Tsai, T. (2021). A deeper look at sheet music composer classification using self-supervised pretraining. Applied Sciences, 11(4), 1387.
  51. Yang, D., Tanprasert, T., Jenrungrot, T., Shan, M., and Tsai, T. (2019). MIDI passage retrieval using cell phone pictures of sheet music. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 916–923).
  52. Yang, D., and Tsai, T. (2020). Camera-based piano sheet music identification. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 481–488).
  53. Yang, D., and Tsai, T. (2021a). Composer classification with cross-modal transfer learning and musically informed augmentation. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 802–809).
  54. Yang, D., and Tsai, T. (2021b). Piano sheet music identification using dynamic n-gram fingerprinting. Transactions of the International Society for Music Information Retrieval, 4(1), 42–51.
  55. Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. (2018). MixUp: Beyond empirical risk minimization. In International Conference on Learning Representations.
  56. Zhang, H., Karystinaios, E., Dixon, S., Widmer, G., and Cancino-Chacón, C. E. (2023). Symbolic music representations for classification tasks: A systematic evaluation. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 848–858).
DOI: https://doi.org/10.5334/tismir.185 | Journal eISSN: 2514-3298
Language: English
Submitted on: Feb 3, 2024
Accepted on: Jul 31, 2024
Published on: Sep 14, 2024
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2024 Arhan Jain, Alec Bunn, Austin Pham, TJ Tsai, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.