References
- Alldahl, P.-G. (1990). Choral Intonation. Gehrmans Musikförlag.
- Benetos, E., Dixon, S., Duan, Z., & Ewert, S. (2019). Automatic music transcription: An overview. IEEE Signal Processing Magazine, 36(1), 20–30. DOI: 10.1109/MSP.2018.2869928
- Bittner, R. M. (2018). Data-driven fundamental frequency estimation. PhD thesis, New York University.
- Bittner, R. M., Fuentes, M., Rubinstein, D., Jansson, A., Choi, K., & Kell, T. (2019). mirdata: Software for reproducible usage of datasets. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 99–106. Delft, The Netherlands.
- Bittner, R. M., Humphrey, E., & Bello, J. P. (2016). PySox: Leveraging the audio signal processing power of SoX in Python. In Demos and Late Breaking News of the International Society for Music Information Retrieval Conference (ISMIR), New York City, USA.
- Bittner, R. M., McFee, B., Salamon, J., Li, P., & Bello, J. P. (2017). Deep salience representations for F0 tracking in polyphonic music. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 63–70. Suzhou, China.
- Bittner, R. M., Salamon, J., Tierney, M., Mauch, M., Cannam, C., & Bello, J. P. (2014). MedleyDB: A multitrack dataset for annotation-intensive MIR research. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 155–160. Taipei, Taiwan.
- Blaauw, M., & Bonada, J. (2017). A neural parametric singing synthesizer modeling timbre and expression from natural songs. Applied Sciences, 7(12), 1313. DOI: 10.3390/app7121313
- Böck, S., Davies, M. E. P., & Knees, P. (2019). Multitask learning of tempo and beat: Learning one to improve the other. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 486–493. Delft, The Netherlands.
- Cannam, C., Jewell, M. O., Rhodes, C., Sandler, M., & d’Inverno, M. (2010a). Linked data and you: Bringing music research software into the semantic web. Journal of New Music Research, 39(4), 313–325. DOI: 10.1080/09298215.2010.522715
- Cannam, C., Landone, C., & Sandler, M. B. (2010b). Sonic Visualiser: An open source application for viewing, analysing, and annotating music audio files. In Proceedings of the International Conference on Multimedia, pages 1467–1468. Florence, Italy. DOI: 10.1145/1873951.1874248
- Cano, E., FitzGerald, D., Liutkus, A., Plumbley, M. D., & Stöter, F.-R. (2019). Musical source separation: An introduction. IEEE Signal Processing Magazine, 36(1), 31–40. DOI: 10.1109/MSP.2018.2874719
- Cano, E., Schuller, G., & Dittmar, C. (2014). Pitch-informed solo and accompaniment separation towards its use in music education applications. EURASIP Journal on Advances in Signal Processing, 2014(23). DOI: 10.1186/1687-6180-2014-23
- Chandna, P., Blaauw, M., Bonada, J., & Gómez, E. (2019). A vocoder based method for singing voice extraction. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, UK. DOI: 10.1109/ICASSP.2019.8683323
- Cuesta, H., Gómez, E., & Chandna, P. (2019). A framework for multi-f0 modeling in SATB choir recordings. In Proceedings of the Sound and Music Computing (SMC) Conference, pages 447–453.
- Cuesta, H., Gómez, E., Martorell, A., & Loáiciga, F. (2018). Analysis of intonation in unison choir singing. In Proceedings of the International Conference on Music Perception and Cognition (ICMPC), pages 125–130. Graz, Austria.
- Dai, J., & Dixon, S. (2017). Analysis of interactive intonation in unaccompanied SATB ensembles. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 599–605. Suzhou, China.
- Dai, J., & Dixon, S. (2019). Singing together: Pitch accuracy and interaction in unaccompanied unison and duet singing. The Journal of the Acoustical Society of America, 145(2), 663–675. DOI: 10.1121/1.5087817
- Devaney, J. (2011). An Empirical Study of the Influence of Musical Context on Intonation Practices in Solo Singers and SATB Ensembles. PhD thesis, McGill University, Montreal, Canada.
- Devaney, J., & Ellis, D. P. W. (2008). An empirical approach to studying intonation tendencies in polyphonic vocal performances. Journal of Interdisciplinary Music Studies, 2(1&2), 141–156.
- Ewert, S., Müller, M., & Grosche, P. (2009). High resolution audio synchronization using chroma onset features. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 1869–1872. Taipei, Taiwan. DOI: 10.1109/ICASSP.2009.4959972
- Gasser, M., Arzt, A., Gadermaier, T., Grachten, M., & Widmer, G. (2015). Classical music on the web – user interfaces and data representations. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 571–577. Málaga, Spain.
- Gnann, V., Kitza, M., Becker, J., & Spiertz, M. (2011). Least-squares local tuning frequency estimation for choir music. In Proceedings of the Audio Engineering Society (AES) Convention, New York City, USA.
- Gómez, E., Gkiokas, A., Liem, C., Samiotis, I. P., Gutierrez, N., Santos, P., Crawford, T., Weigl, D. M., Goebl, W., Tilburg, M., Sarasua, Á., & Freiburg, B. (2020). Towards richer online public-domain archives of classical music. Submitted to Human Computation Journal.
- Goto, M., & Muraoka, Y. (1997). Issues in evaluating beat tracking systems. In Working Notes of the IJCAI-97 Workshop on Issues in AI and Music – Evaluation and Assessment, pages 9–16.
- Gouyon, F., Dixon, S., Pampalk, E., & Widmer, G. (2004). Evaluating rhythmic descriptors for musical genre classification. In Proceedings of the Audio Engineering Society (AES) International Conference, London, UK.
- Harte, C., Sandler, M. B., Abdallah, S., & Gómez, E. (2005). Symbolic representation of musical chords: A proposed syntax for text annotations. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), pages 66–71. London, UK.
- Howard, D. M. (2007). Intonation drift in a capella soprano, alto, tenor, bass quartet singing with key modulation. Journal of Voice, 21(3), 300–315. DOI: 10.1016/j.jvoice.2005.12.005
- Howard, D. M., Daffern, H., & Brereton, J. (2013). Four-part choral synthesis system for investigating intonation in a cappella choral singing. Logopedics Phoniatrics Vocology, 38(3), 135–142. DOI: 10.3109/14015439.2013.812143
- Jeong, D., Kwon, T., Park, C., & Nam, J. (2017). PerformScore: Toward performance visualization with the score on the web browser. In Demos and Late Breaking News of the International Society for Music Information Retrieval Conference (ISMIR), Suzhou, China.
- Kim, J. W., Salamon, J., Li, P., & Bello, J. P. (2018). CREPE: A convolutional representation for pitch estimation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 161–165. Calgary, Canada. DOI: 10.1109/ICASSP.2018.8461329
- Klapuri, A. P. (2006). Multiple fundamental frequency estimation by summing harmonic amplitudes. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), pages 216–221.
- Klapuri, A. P. (2008). Multipitch analysis of polyphonic music and speech signals using an auditory model. IEEE Transactions on Audio, Speech, and Language Processing, 16(2), 255–266. DOI: 10.1109/TASL.2007.908129
- Mauch, M., Cannam, C., Bittner, R., Fazekas, G., Salamon, J., Dai, J., Bello, J., & Dixon, S. (2015). Computer-aided melody note transcription using the Tony software: Accuracy and efficiency. In Proceedings of the International Conference on Technologies for Music Notation and Representation (TENOR).
- Mauch, M., & Dixon, S. (2014). pYIN: A fundamental frequency estimator using probabilistic threshold distributions. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 659–663. Florence, Italy. DOI: 10.1109/ICASSP.2014.6853678
- Mauch, M., Frieler, K., & Dixon, S. (2014). Intonation in unaccompanied singing: Accuracy, drift, and a model of reference pitch memory. The Journal of the Acoustical Society of America, 136(1), 401–411. DOI: 10.1121/1.4881915
- McLeod, A., Schramm, R., Steedman, M., & Benetos, E. (2017). Automatic transcription of polyphonic vocal music. Applied Sciences, 7(12), 1285. DOI: 10.3390/app7121285
- Müller, M., Gómez, E., & Yang, Y. (2019). Computational methods for melody and voice processing in music recordings (Dagstuhl Seminar 19052). Dagstuhl Reports, 9(1), 125–177.
- Müller, M., Kurth, F., & Röder, T. (2004). Towards an efficient algorithm for automatic score-to-audio synchronization. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), pages 365–372. Barcelona, Spain.
- Panteli, M. (2018). Computational analysis of world music corpora. PhD thesis, Queen Mary University of London, UK.
- Poliner, G. E., Ellis, D. P. W., Ehmann, A. F., Gómez, E., Streich, S., & Ong, B. (2007). Melody transcription from music audio: Approaches and evaluation. IEEE Transactions on Audio, Speech, and Language Processing, 15(4), 1247–1256. DOI: 10.1109/TASL.2006.889797
- Raffel, C., & Ellis, D. P. W. (2014). Intuitive analysis, creation and manipulation of MIDI data with pretty_midi. In Demos and Late Breaking News of the International Society for Music Information Retrieval Conference (ISMIR), Taipei, Taiwan.
- Raffel, C., McFee, B., Humphrey, E. J., Salamon, J., Nieto, O., Liang, D., & Ellis, D. P. W. (2014). mir_eval: A transparent implementation of common MIR metrics. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 367–372. Taipei, Taiwan.
- Robertson, A. (2012). Decoding tempo and timing variations in music recordings from beat annotations. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 475–480.
- Rosenzweig, S., Scherbaum, F., Shugliashvili, D., Arifi-Müller, V., & Müller, M. (2020). Erkomaishvili Dataset: A curated corpus of traditional Georgian vocal music for computational musicology. Transactions of the International Society for Music Information Retrieval (TISMIR), 3(1), 31–41. DOI: 10.5334/tismir.44
- Röwenstrunk, D., Prätzlich, T., Betzwieser, T., Müller, M., Szwillus, G., & Veit, J. (2015). Das Gesamtkunstwerk Oper aus Datensicht – Aspekte des Umgangs mit einer heterogenen Datenlage im BMBF-Projekt “Freischütz Digital” [The Gesamtkunstwerk of opera from a data perspective: Aspects of handling heterogeneous data in the BMBF project “Freischütz Digital”]. Datenbank-Spektrum, 15(1), 65–72. DOI: 10.1007/s13222-015-0179-0
- Salamon, J., Gómez, E., Ellis, D. P. W., & Richard, G. (2014). Melody extraction from polyphonic music signals: Approaches, applications, and challenges. IEEE Signal Processing Magazine, 31(2), 118–134. DOI: 10.1109/MSP.2013.2271648
- Scherbaum, F., Loos, W., Kane, F., & Vollmer, D. (2015). Body vibrations as source of information for the analysis of polyphonic vocal music. In Proceedings of the International Workshop on Folk Music Analysis, pages 89–93. Paris, France.
- Scherbaum, F., Mzhavanadze, N., Rosenzweig, S., & Müller, M. (2019). Multi-media recordings of traditional Georgian vocal music for computational analysis. In Proceedings of the International Workshop on Folk Music Analysis, pages 1–6. Birmingham, UK.
- Scherbaum, F., Rosenzweig, S., Müller, M., Vollmer, D., & Mzhavanadze, N. (2018). Throat microphones for vocal music analysis. In Demos and Late Breaking News of the International Society for Music Information Retrieval Conference (ISMIR), Paris, France.
- Schramm, R., & Benetos, E. (2017). Automatic transcription of a cappella recordings from multiple singers. In Proceedings of the AES International Conference on Semantic Audio.
- Schramm, R., McLeod, A., Steedman, M., & Benetos, E. (2017). Multi-pitch detection and voice assignment for a cappella recordings of multiple singers. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 552–559. Suzhou, China.
- Serra, X. (2014). Creating research corpora for the computational study of music: The case of the CompMusic project. In Proceedings of the AES International Conference on Semantic Audio, London, UK.
- Su, L., Chuang, T.-Y., & Yang, Y.-H. (2016). Exploiting frequency, periodicity and harmonicity using advanced time-frequency concentration techniques for multipitch estimation of choir and symphony. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 393–399. New York City, USA.
- Sundberg, J. (1987). The Science of the Singing Voice. Northern Illinois University Press.
- Thomas, V., Fremerey, C., Müller, M., & Clausen, M. (2012). Linking sheet music and audio – challenges and new approaches. In Müller, M., Goto, M., & Schedl, M., Editors, Multimodal Music Processing, volume 3 of Dagstuhl Follow-Ups, pages 1–22. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, Dagstuhl, Germany.
- van Kranenburg, P., de Bruin, M., & Volk, A. (2019). Documenting a song culture: The Dutch Song Database as a resource for musicological research. International Journal on Digital Libraries, 20(1), 13–23. DOI: 10.1007/s00799-017-0228-4
- Weiß, C., Schlecht, S. J., Rosenzweig, S., & Müller, M. (2019). Towards measuring intonation quality of choir recordings: A case study on Bruckner’s Locus Iste. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 276–283. Delft, The Netherlands.
- Werner, N., Balke, S., Stöter, F.-R., Müller, M., & Edler, B. (2017). trackswitch.js: A versatile web-based audio player for presenting scientific results. In Proceedings of the Web Audio Conference (WAC), London, UK.
- Zalkow, F., Rosenzweig, S., Graulich, J., Dietz, L., Lemnaouar, E. M., & Müller, M. (2018). A web-based interface for score following and track switching in choral music. In Demos and Late Breaking News of the International Society for Music Information Retrieval Conference (ISMIR), Paris, France.
- Zapata, J. R., Davies, M. E. P., & Gómez, E. (2014). Multi-feature beat tracking. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(4), 816–825. DOI: 10.1109/TASLP.2014.2305252
