References
- Arzt, A., Widmer, G., and Dixon, S. (2008). Automatic page turning for musicians via real-time machine listening. In Proceedings of the European Conference on Artificial Intelligence (ECAI), pages 241–245, Amsterdam, The Netherlands. IOS Press.
- Bay, M., Ehmann, A. F., and Downie, J. S. (2009). Evaluation of multiple-f0 estimation and tracking systems. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 315–320, Kobe, Japan.
- Bittner, R. M., Fuentes, M., Rubinstein, D., Jansson, A., Choi, K., and Kell, T. (2019). mirdata: Software for reproducible usage of datasets. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 99–106, Delft, The Netherlands.
- Bittner, R. M., Salamon, J., Tierney, M., Mauch, M., Cannam, C., and Bello, J. P. (2014). MedleyDB: A multitrack dataset for annotation-intensive MIR research. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 155–160, Taipei, Taiwan.
- Bugler, A., Pardo, B., and Seetharaman, P. (2020). A study of transfer learning in music source separation. arXiv:2010.12650.
- Cano, E., FitzGerald, D., Liutkus, A., Plumbley, M. D., and Stöter, F.-R. (2019). Musical source separation: An introduction. IEEE Signal Processing Magazine, 36(1):31–40. DOI: 10.1109/MSP.2018.2874719
- Caplin, W. E. (1998). Classical Form: A Theory of Formal Functions for the Instrumental Music of Haydn, Mozart, and Beethoven. Oxford University Press, USA.
- Chen, K., Dong, H.-W., Luo, Y., McAuley, J., Berg-Kirkpatrick, T., Puckette, M., and Dubnov, S. (2022). Improving choral music separation through expressive synthesized data from sampled instruments. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Bengaluru, India.
- Chiu, C.-Y., Hsiao, W.-Y., Yeh, Y.-C., Yang, Y.-H., and Su, A. W.-Y. (2020). Mixing-specific data augmentation techniques for improved blind violin/piano source separation. In Proceedings of the Workshop on Multimedia Signal Processing (MMSP), pages 1–6. DOI: 10.1109/MMSP48831.2020.9287146
- Choi, W., Kim, M., Chung, J., Lee, D., and Jung, S. (2020). Investigating U-Nets with various intermediate blocks for spectrogram-based singing voice separation. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), online.
- Cole, W. (1997). The Form of Music. The Associated Board of the Royal Schools of Music (ABRSM), London, UK.
- Cuesta, H., Gómez, E., Martorell, A., and Loáiciga, F. (2018). Analysis of intonation in unison choir singing. In Proceedings of the International Conference on Music Perception and Cognition (ICMPC), pages 125–130, Graz, Austria.
- Dannenberg, R. B. and Raphael, C. (2006). Music score alignment and computer accompaniment. Communications of the ACM, Special Issue: Music Information Retrieval, 49(8):38–43. DOI: 10.1145/1145287.1145311
- Défossez, A. (2021). Hybrid spectrogram and waveform source separation. In Proceedings of the ISMIR 2021 Workshop on Music Source Separation, Online.
- Ewert, S., Müller, M., and Grosche, P. (2009). High resolution audio synchronization using chroma onset features. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 1869–1872, Taipei, Taiwan. DOI: 10.1109/ICASSP.2009.4959972
- Ewert, S., Pardo, B., Müller, M., and Plumbley, M. (2014). Score-informed source separation for musical audio recordings: An overview. IEEE Signal Processing Magazine, 31(3):116–124. DOI: 10.1109/MSP.2013.2296076
- Gasser, M., Arzt, A., Gadermaier, T., Grachten, M., and Widmer, G. (2015). Classical music on the web – user interfaces and data representations. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 571–577, Málaga, Spain.
- Girdlestone, C. M. (1948). Mozart & His Piano Concertos. Cassell & Company Ltd., London, UK.
- Goto, M., Hashiguchi, H., Nishimura, T., and Oka, R. (2002). RWC music database: Popular, classical and jazz music databases. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 287–288, Paris, France.
- Gover, M. and Depalle, P. (2020). Score-informed source separation of choral music. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR).
- Hennequin, R., Khlif, A., Voituret, F., and Moussallam, M. (2020). Spleeter: A fast and efficient music source separation tool with pre-trained models. Journal of Open Source Software, 5(50):2154. DOI: 10.21105/joss.02154
- Jansson, A., Humphrey, E. J., Montecchio, N., Bittner, R. M., Kumar, A., and Weyde, T. (2017). Singing voice separation with deep U-Net convolutional networks. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 745–751, Suzhou, China.
- Jeong, D., Kwon, T., Park, C., and Nam, J. (2017). PerformScore: Toward performance visualization with the score on the web browser. In Demos and Late Breaking News of the International Society for Music Information Retrieval Conference (ISMIR), Suzhou, China.
- Kastner, T. and Herre, J. (2019). An efficient model for estimating subjective quality of separated audio source signals. In Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pages 95–99, New Paltz, New York, USA. DOI: 10.1109/WASPAA.2019.8937179
- Kim, M., Choi, W., Chung, J., Lee, D., and Jung, S. (2021). KUIELab-MDX-Net: A two-stream neural network for music demixing. In Proceedings of the ISMIR 2021 Workshop on Music Source Separation, Online.
- Li, B., Liu, X., Dinesh, K., Duan, Z., and Sharma, G. (2019). Creating a multitrack classical music performance dataset for multimodal music analysis: Challenges, insights, and applications. IEEE Transactions on Multimedia, 21(2):522–535. DOI: 10.1109/TMM.2018.2856090
- Manilow, E., Seetharaman, P., and Pardo, B. (2020). Simultaneous separation and transcription of mixtures with multiple polyphonic and percussive instruments. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 771–775. DOI: 10.1109/ICASSP40776.2020.9054340
- Martínez-Ramírez, M. A., Liao, W.-H., Fabbro, G., Uhlich, S., Nagashima, C., and Mitsufuji, Y. (2022). Automatic music mixing with deep learning and out-of-domain data. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Bengaluru, India.
- McLeod, A., Schramm, R., Steedman, M., and Benetos, E. (2017). Automatic transcription of polyphonic vocal music. Applied Sciences, 7(12). DOI: 10.3390/app7121285
- Miron, M., Janer, J., and Gómez, E. (2017). Monaural score-informed source separation for classical music using convolutional neural networks. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 55–62, Suzhou, China.
- Miron, M. and Martorell, A. (2017). Bach10 Sibelius Dataset. DOI: 10.5281/zenodo.321361
- Özer, Y. and Müller, M. (2022). Source separation of piano concertos with test-time adaptation. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 493–500, Bengaluru, India.
- Petermann, D., Chandna, P., Cuesta, H., Bonada, J., and Gómez, E. (2020). Deep learning based source separation applied to choir ensembles. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 733–739, Montreal, Canada.
- Porter, A. (2012). Evaluating musical fingerprinting systems. Master’s thesis, McGill University, Montreal, Canada.
- Prätzlich, T., Driedger, J., and Müller, M. (2016). Memory-restricted multiscale dynamic time warping. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 569–573, Shanghai, China. DOI: 10.1109/ICASSP.2016.7471739
- Rafii, Z., Liutkus, A., Stöter, F.-R., Mimilakis, S. I., and Bittner, R. (2017). The MUSDB18 corpus for music separation. DOI: 10.5281/zenodo.1117372
- Rafii, Z., Liutkus, A., Stöter, F.-R., Mimilakis, S. I., FitzGerald, D., and Pardo, B. (2018). An overview of lead and accompaniment separation in music. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(8):1307–1335. DOI: 10.1109/TASLP.2018.2825440
- Rosenzweig, S., Cuesta, H., Weiß, C., Scherbaum, F., Gómez, E., and Müller, M. (2020). Dagstuhl ChoirSet: A multitrack dataset for MIR research on choral singing. Transactions of the International Society for Music Information Retrieval (TISMIR), 3(1):98–110. DOI: 10.5334/tismir.48
- Röwenstrunk, D., Prätzlich, T., Betzwieser, T., Müller, M., Szwillus, G., and Veit, J. (2015). Das Gesamtkunstwerk Oper aus Datensicht – Aspekte des Umgangs mit einer heterogenen Datenlage im BMBF-Projekt “Freischütz Digital”. Datenbank-Spektrum, 15(1):65–72. DOI: 10.1007/s13222-015-0179-0
- Sarkar, S., Benetos, E., and Sandler, M. (2022). EnsembleSet: A new high quality dataset for chamber ensemble separation. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Bengaluru, India.
- Schedl, M., Hauger, D., Tkalčič, M., Melenhorst, M., and Liem, C. C. S. (2016). A dataset of multimedia material about classical music: PHENICX-SMM. In Proceedings of the International Workshop on Content-Based Multimedia Indexing (CBMI), pages 1–4. DOI: 10.1109/CBMI.2016.7500240
- Schering, A. (1905). Geschichte des Instrumentalkonzerts. Breitkopf & Härtel, Leipzig, Germany.
- Schramm, R. and Benetos, E. (2017). Automatic transcription of a cappella recordings from multiple singers. In Proceedings of the AES International Conference on Semantic Audio, pages 108–115, Erlangen, Germany.
- Series, B. (2014). Method for the subjective assessment of intermediate quality level of audio systems. International Telecommunication Union Radiocommunication Assembly.
- Stoller, D., Ewert, S., and Dixon, S. (2018). Wave-U-Net: A multi-scale neural network for end-to-end audio source separation. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 334–340, Paris, France.
- Stöter, F.-R., Uhlich, S., Liutkus, A., and Mitsufuji, Y. (2019). Open-Unmix – A reference implementation for music source separation. Journal of Open Source Software, 4(41). DOI: 10.21105/joss.01667
- Takahashi, N. and Mitsufuji, Y. (2017). Multi-scale multi-band DenseNets for audio source separation. In Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pages 21–25. DOI: 10.1109/WASPAA.2017.8169987
- Vincent, E., Gribonval, R., and Févotte, C. (2006). Performance measurement in blind audio source separation. IEEE Transactions on Audio, Speech, and Language Processing, 14(4):1462–1469. DOI: 10.1109/TSA.2005.858005
- Werner, N., Balke, S., Stöter, F.-R., Müller, M., and Edler, B. (2017). trackswitch.js: A versatile web-based audio player for presenting scientific results. In Proceedings of the Web Audio Conference (WAC), London, UK.
