
Dagstuhl ChoirSet: A Multitrack Dataset for MIR Research on Choral Singing

Open Access
Jul 2020

References

  1. Alldahl, P.-G. (1990). Choral Intonation. Gehrmans Musikförlag.
  2. Benetos, E., Dixon, S., Duan, Z., & Ewert, S. (2019). Automatic music transcription: An overview. IEEE Signal Processing Magazine, 36(1), 20–30. DOI: 10.1109/MSP.2018.2869928
  3. Bittner, R. M. (2018). Data-driven fundamental frequency estimation. PhD thesis, New York University.
  4. Bittner, R. M., Fuentes, M., Rubinstein, D., Jansson, A., Choi, K., & Kell, T. (2019). mirdata: Software for reproducible usage of datasets. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 99–106. Delft, The Netherlands.
  5. Bittner, R. M., Humphrey, E., & Bello, J. P. (2016). PySox: Leveraging the audio signal processing power of SoX in Python. In Late Breaking and Demo Papers, International Society for Music Information Retrieval Conference (ISMIR).
  6. Bittner, R. M., McFee, B., Salamon, J., Li, P., & Bello, J. P. (2017). Deep salience representations for F0 tracking in polyphonic music. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 63–70. Suzhou, China.
  7. Bittner, R. M., Salamon, J., Tierney, M., Mauch, M., Cannam, C., & Bello, J. P. (2014). MedleyDB: A multitrack dataset for annotation-intensive MIR research. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 155–160. Taipei, Taiwan.
  8. Blaauw, M., & Bonada, J. (2017). A neural parametric singing synthesizer modeling timbre and expression from natural songs. Applied Sciences, 7(12), 1313. DOI: 10.3390/app7121313
  9. Böck, S., Davies, M. E. P., & Knees, P. (2019). Multitask learning of tempo and beat: Learning one to improve the other. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 486–493. Delft, The Netherlands.
  10. Cannam, C., Jewell, M. O., Rhodes, C., Sandler, M., & d’Inverno, M. (2010a). Linked data and you: Bringing music research software into the semantic web. Journal of New Music Research, 39(4), 313–325. DOI: 10.1080/09298215.2010.522715
  11. Cannam, C., Landone, C., & Sandler, M. B. (2010b). Sonic Visualiser: An open source application for viewing, analysing, and annotating music audio files. In Proceedings of the International Conference on Multimedia, pages 1467–1468. Florence, Italy. DOI: 10.1145/1873951.1874248
  12. Cano, E., FitzGerald, D., Liutkus, A., Plumbley, M. D., & Stöter, F. (2019). Musical source separation: An introduction. IEEE Signal Processing Magazine, 36(1), 31–40. DOI: 10.1109/MSP.2018.2874719
  13. Cano, E., Schuller, G., & Dittmar, C. (2014). Pitch-informed solo and accompaniment separation towards its use in music education applications. EURASIP Journal on Advances in Signal Processing, 2014(23). DOI: 10.1186/1687-6180-2014-23
  14. Chandna, P., Blaauw, M., Bonada, J., & Gómez, E. (2019). A vocoder based method for singing voice extraction. In Proceedings of the 44th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, UK. IEEE. DOI: 10.1109/ICASSP.2019.8683323
  15. Cuesta, H., Gómez, E., & Chandna, P. (2019). A framework for multi-f0 modeling in SATB choir recordings. In Proceedings of the Sound and Music Computing (SMC) Conference, pages 447–453.
  16. Cuesta, H., Gómez, E., Martorell, A., & Loáiciga, F. (2018). Analysis of intonation in unison choir singing. In Proceedings of the International Conference of Music Perception and Cognition (ICMPC), pages 125–130. Graz, Austria.
  17. Dai, J., & Dixon, S. (2017). Analysis of interactive intonation in unaccompanied SATB ensembles. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 599–605. Suzhou, China.
  18. Dai, J., & Dixon, S. (2019). Singing together: Pitch accuracy and interaction in unaccompanied unison and duet singing. The Journal of the Acoustical Society of America, 145(2), 663–675. DOI: 10.1121/1.5087817
  19. Devaney, J. (2011). An Empirical Study of the Influence of Musical Context on Intonation Practices in Solo Singers and SATB Ensembles. PhD thesis, McGill University, Montreal, Canada.
  20. Devaney, J., & Ellis, D. P. W. (2008). An empirical approach to studying intonation tendencies in polyphonic vocal performances. Journal of Interdisciplinary Music Studies, 2(1&2), 141–156.
  21. Ewert, S., Müller, M., & Grosche, P. (2009). High resolution audio synchronization using chroma onset features. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 1869–1872. Taipei, Taiwan. DOI: 10.1109/ICASSP.2009.4959972
  22. Gasser, M., Arzt, A., Gadermaier, T., Grachten, M., & Widmer, G. (2015). Classical music on the web – user interfaces and data representations. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), pages 571–577. Málaga, Spain.
  23. Gnann, V., Kitza, M., Becker, J., & Spiertz, M. (2011). Least-squares local tuning frequency estimation for choir music. In Proceedings of the Audio Engineering Society (AES) Convention, New York City, USA.
  24. Gómez, E., Gkiokas, A., Liem, C., Samiotis, I. P., Gutierrez, N., Santos, P., Crawford, T., Weigl, D. M., Goebl, W., Tilburg, M., Sarasua, Á., & Freiburg, B. (2020). Towards richer online public-domain archives of classical music. Submitted to Human Computation Journal.
  25. Goto, M., & Muraoka, Y. (1997). Issues in evaluating beat tracking systems. In Working Notes of the IJCAI-97 Workshop on Issues in AI and Music – Evaluation and Assessment, pages 9–16.
  26. Gouyon, F., Dixon, S., Pampalk, E., & Widmer, G. (2004). Evaluating rhythmic descriptors for musical genre classification. In Proceedings of the Audio Engineering Society (AES) International Conference, London, UK.
  27. Harte, C., Sandler, M. B., Abdallah, S., & Gómez, E. (2005). Symbolic representation of musical chords: A proposed syntax for text annotations. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), pages 66–71. London, UK.
  28. Howard, D. M. (2007). Intonation drift in a capella soprano, alto, tenor, bass quartet singing with key modulation. Journal of Voice, 21(3), 300–315. DOI: 10.1016/j.jvoice.2005.12.005
  29. Howard, D. M., Daffern, H., & Brereton, J. (2013). Four-part choral synthesis system for investigating intonation in a cappella choral singing. Logopedics Phoniatrics Vocology, 38(3), 135–142. DOI: 10.3109/14015439.2013.812143
  30. Jeong, D., Kwon, T., Park, C., & Nam, J. (2017). PerformScore: Toward performance visualization with the score on the web browser. In Demos and Late Breaking News of the International Society for Music Information Retrieval Conference (ISMIR), Suzhou, China.
  31. Kim, J. W., Salamon, J., Li, P., & Bello, J. P. (2018). Crepe: A convolutional representation for pitch estimation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 161–165. Calgary, Canada. DOI: 10.1109/ICASSP.2018.8461329
  32. Klapuri, A. P. (2006). Multiple fundamental frequency estimation by summing harmonic amplitudes. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), pages 216–221.
  33. Klapuri, A. P. (2008). Multipitch analysis of polyphonic music and speech signals using an auditory model. IEEE Transactions on Audio, Speech, and Language Processing, 16(2), 255–266. DOI: 10.1109/TASL.2007.908129
  34. Mauch, M., Cannam, C., Bittner, R., Fazekas, G., Salamon, J., Dai, J., Bello, J., & Dixon, S. (2015). Computer-aided melody note transcription using the Tony software: Accuracy and efficiency. In Proceedings of the International Conference on Technologies for Music Notation and Representation.
  35. Mauch, M., & Dixon, S. (2014). pYIN: A fundamental frequency estimator using probabilistic threshold distributions. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 659–663. Florence, Italy. DOI: 10.1109/ICASSP.2014.6853678
  36. Mauch, M., Frieler, K., & Dixon, S. (2014). Intonation in unaccompanied singing: Accuracy, drift, and a model of reference pitch memory. Journal of the Acoustical Society of America, 136(1), 401–411. DOI: 10.1121/1.4881915
  37. McLeod, A., Schramm, R., Steedman, M., & Benetos, E. (2017). Automatic transcription of polyphonic vocal music. Applied Sciences, 7(12). DOI: 10.3390/app7121285
  38. Müller, M., Gómez, E., & Yang, Y. (2019). Computational methods for melody and voice processing in music recordings (Dagstuhl seminar 19052). Dagstuhl Reports, 9(1), 125–177.
  39. Müller, M., Kurth, F., & Röder, T. (2004). Towards an efficient algorithm for automatic score-to-audio synchronization. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), pages 365–372. Barcelona, Spain.
  40. Panteli, M. (2018). Computational analysis of world music corpora. PhD thesis, Queen Mary University of London, UK.
  41. Poliner, G. E., Ellis, D. P., Ehmann, A. F., Gómez, E., Streich, S., & Ong, B. (2007). Melody transcription from music audio: Approaches and evaluation. IEEE Transactions on Audio, Speech, and Language Processing, 15(4), 1247–1256. DOI: 10.1109/TASL.2006.889797
  42. Raffel, C., & Ellis, D. P. W. (2014). Intuitive analysis, creation and manipulation of MIDI data with pretty_midi. In Demos and Late Breaking News of the International Society for Music Information Retrieval Conference (ISMIR), Taipei, Taiwan.
  43. Raffel, C., McFee, B., Humphrey, E. J., Salamon, J., Nieto, O., Liang, D., & Ellis, D. P. W. (2014). MIR_EVAL: A transparent implementation of common MIR metrics. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 367–372. Taipei, Taiwan.
  44. Robertson, A. (2012). Decoding tempo and timing variations in music recordings from beat annotations. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 475–480.
  45. Rosenzweig, S., Scherbaum, F., Shugliashvili, D., Arifi-Müller, V., & Müller, M. (2020). Erkomaishvili Dataset: A curated corpus of traditional Georgian vocal music for computational musicology. Transactions of the International Society for Music Information Retrieval (TISMIR), 3(1), 31–41. DOI: 10.5334/tismir.44
  46. Röwenstrunk, D., Prätzlich, T., Betzwieser, T., Müller, M., Szwillus, G., & Veit, J. (2015). Das Gesamtkunstwerk Oper aus Datensicht – Aspekte des Umgangs mit einer heterogenen Datenlage im BMBF-Projekt “Freischütz Digital”. Datenbank-Spektrum, 15(1), 65–72. DOI: 10.1007/s13222-015-0179-0
  47. Salamon, J., Gómez, E., Ellis, D. P. W., & Richard, G. (2014). Melody extraction from polyphonic music signals: Approaches, applications, and challenges. IEEE Signal Processing Magazine, 31(2), 118–134. DOI: 10.1109/MSP.2013.2271648
  48. Scherbaum, F., Loos, W., Kane, F., & Vollmer, D. (2015). Body vibrations as source of information for the analysis of polyphonic vocal music. In Proceedings of the International Workshop on Folk Music Analysis, pages 89–93. Paris, France.
  49. Scherbaum, F., Mzhavanadze, N., Rosenzweig, S., & Müller, M. (2019). Multi-media recordings of traditional Georgian vocal music for computational analysis. In Proceedings of the International Workshop on Folk Music Analysis, pages 1–6. Birmingham, UK.
  50. Scherbaum, F., Rosenzweig, S., Müller, M., Vollmer, D., & Mzhavanadze, N. (2018). Throat microphones for vocal music analysis. In Demos and Late Breaking News of the International Society for Music Information Retrieval Conference (ISMIR), Paris, France.
  51. Schramm, R., & Benetos, E. (2017). Automatic transcription of a cappella recordings from multiple singers. In AES International Conference on Semantic Audio. Audio Engineering Society.
  52. Schramm, R., McLeod, A., Steedman, M., & Benetos, E. (2017). Multi-pitch detection and voice assignment for a cappella recordings of multiple singers. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 552–559. Suzhou, China.
  53. Serra, X. (2014). Creating research corpora for the computational study of music: The case of the CompMusic project. In Proceedings of the AES International Conference on Semantic Audio, London, UK.
  54. Su, L., Chuang, T.-Y., & Yang, Y.-H. (2016). Exploiting frequency, periodicity and harmonicity using advanced time-frequency concentration techniques for multipitch estimation of choir and symphony. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 393–399. New York City, USA.
  55. Sundberg, J. (1987). The Science of the Singing Voice. Northern Illinois University Press.
  56. Thomas, V., Fremerey, C., Müller, M., & Clausen, M. (2012). Linking sheet music and audio – challenges and new approaches. In Müller, M., Goto, M., & Schedl, M., Editors, Multimodal Music Processing, volume 3 of Dagstuhl Follow-Ups, pages 1–22. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, Dagstuhl, Germany.
  57. van Kranenburg, P., de Bruin, M., & Volk, A. (2019). Documenting a song culture: The Dutch Song Database as a resource for musicological research. International Journal on Digital Libraries, 20(1), 13–23. DOI: 10.1007/s00799-017-0228-4
  58. Weiß, C., Schlecht, S. J., Rosenzweig, S., & Müller, M. (2019). Towards measuring intonation quality of choir recordings: A case study on Bruckner’s Locus Iste. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 276–283. Delft, The Netherlands.
  59. Werner, N., Balke, S., Stöter, F.-R., Müller, M., & Edler, B. (2017). trackswitch.js: A versatile web-based audio player for presenting scientific results. In Proceedings of the Web Audio Conference (WAC), London, UK.
  60. Zalkow, F., Rosenzweig, S., Graulich, J., Dietz, L., Lemnaouar, E. M., & Müller, M. (2018). A web-based interface for score following and track switching in choral music. In Demos and Late Breaking News of the International Society for Music Information Retrieval Conference (ISMIR), Paris, France.
  61. Zapata, J. R., Davies, M. E. P., & Gómez, E. (2014). Multi-feature beat tracking. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(4), 816–825. DOI: 10.1109/TASLP.2014.2305252
DOI: https://doi.org/10.5334/tismir.48 | Journal eISSN: 2514-3298
Language: English
Submitted on: Feb 14, 2020
Accepted on: Jun 10, 2020
Published on: Jul 29, 2020
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2020 Sebastian Rosenzweig, Helena Cuesta, Christof Weiß, Frank Scherbaum, Emilia Gómez, Meinard Müller, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.