
CCOM-HuQin: An Annotated Multimodal Chinese Fiddle Performance Dataset

Open Access | Jul 2023

References

  1. Benetos, E., Dixon, S., Duan, Z., and Ewert, S. (2018). Automatic music transcription: An overview. IEEE Signal Processing Magazine, 36(1):20–30. DOI: 10.1109/MSP.2018.2869928
  2. Chen, Y. (1101, Song dynasty (宋)). Yue Shu (乐书). Yuan dynasty (元) edition of 1347.
  3. Choi, K., Fazekas, G., Sandler, M., and Cho, K. (2017). Convolutional recurrent neural networks for music classification. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2392–2396. IEEE. DOI: 10.1109/ICASSP.2017.7952585
  4. Dalmazzo, D. and Ramirez, R. (2019). Bowing gestures classification in violin performance: A machine learning approach. Frontiers in Psychology, 10:344. DOI: 10.3389/fpsyg.2019.00344
  5. Drugman, T., Huybrechts, G., Klimkov, V., and Moinet, A. (2018). Traditional machine learning for pitch detection. IEEE Signal Processing Letters, 25(11):1745–1749. DOI: 10.1109/LSP.2018.2874155
  6. Ducher, J.-F. and Esling, P. (2019). Folded CQT RCNN for real-time recognition of instrument playing techniques. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR).
  7. D’Amato, V., Volta, E., Oneto, L., Volpe, G., Camurri, A., and Anguita, D. (2020). Understanding violin players’ skill level based on motion capture: A data-driven perspective. Cognitive Computation, 12:1356–1369. DOI: 10.1007/s12559-020-09768-8
  8. Elowsson, A. and Lartillot, O. (2021). A Hardanger fiddle dataset with performances spanning emotional expressions and annotations aligned using image registration.
  9. Fu, H. (2007). A Study on Erhu Performance (论二胡演奏). International Publishing House for China’s Culture.
  10. Goto, M., Hashiguchi, H., Nishimura, T., and Oka, R. (2002). RWC Music Database: Popular, classical and jazz music databases. In ISMIR, volume 2, pages 287–288.
  11. Goto, M., Hashiguchi, H., Nishimura, T., and Oka, R. (2003). RWC Music Database: Music genre database and musical instrument sound database.
  12. Hao, H. and Ma, Z. (2004). Performance Methods on Yu-Ju Banhu (豫剧板胡演奏法). Henan Literary and Art Press.
  13. Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).
  14. Konkol, M. and Konopik, M. (2015). Segment representations in named entity recognition. In International Conference on Text, Speech, and Dialogue, pages 61–70. Springer. DOI: 10.1007/978-3-319-24033-6_7
  15. Kruger, A. and Jacobs, J. (2020). Playing technique classification for bowed string instruments from raw audio. Journal of New Music Research, 49(4):320–333. DOI: 10.1080/09298215.2020.1784957
  16. Li, B., Dinesh, K., Sharma, G., and Duan, Z. (2017). Video-based vibrato detection and analysis for polyphonic string music. In ISMIR, pages 123–130.
  17. Li, B., Liu, X., Dinesh, K., Duan, Z., and Sharma, G. (2018). Creating a multitrack classical music performance dataset for multimodal music analysis: Challenges, insights, and applications. IEEE Transactions on Multimedia, 21(2):522–535. DOI: 10.1109/TMM.2018.2856090
  18. Li, N. (2007). Left-hand playing techniques of erhu and their applications. The New Voice of Yue-Fu (The Academic Periodical of Shenyang Conservatory of Music), pages 180–183.
  19. Liang, X., Li, Z., Liu, J., Li, W., Zhu, J., and Han, B. (2019). Constructing a multimedia Chinese musical instrument database. In Proceedings of the 6th Conference on Sound and Music Technology (CSMT), pages 53–60. Springer. DOI: 10.1007/978-981-13-8707-4_5
  20. Liu, C. (1986). Stylistic skills in erhu performance. Journal of the Central Conservatory of Music, pages 54–58.
  21. Liu, C., Li, H., Tian, Z., Xue, K., Yan, J., Yu, H., Zhao, H., and Zhu, J. (2012). Exhibition of Chinese Traditional Instrumental Music (中国民族器乐曲博览), volume Solo. People’s Music Publishing House.
  22. Liu, D. (1992). Illustrated Catalogue of Chinese Musical Instruments (中国乐器图鉴). Shandong Education Press.
  23. Lostanlen, V., Andén, J., and Lagrange, M. (2018). Extended playing techniques: The next milestone in musical instrument recognition. In Proceedings of the 5th International Conference on Digital Libraries for Musicology, DLfM ’18, pages 1–10, New York, NY, USA. Association for Computing Machinery. DOI: 10.1145/3273024.3273036
  24. Mauch, M., Cannam, C., Bittner, R., Fazekas, G., Salamon, J., Dai, J., Bello, J., and Dixon, S. (2015). Computer-aided melody note transcription using the Tony software: Accuracy and efficiency.
  25. Mauch, M. and Dixon, S. (2014). pYIN: A fundamental frequency estimator using probabilistic threshold distributions. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 659–663. IEEE. DOI: 10.1109/ICASSP.2014.6853678
  26. McFee, B., Raffel, C., Liang, D., Ellis, D. P., McVicar, M., Battenberg, E., and Nieto, O. (2015). librosa: Audio and music signal analysis in Python. In Proceedings of the 14th Python in Science Conference, volume 8, pages 18–25. DOI: 10.25080/Majora-7b98e3ed-003
  27. Montesinos, J. F., Slizovskaia, O., and Haro, G. (2020). Solos: A dataset for audio-visual music analysis. In 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), pages 1–6. DOI: 10.1109/MMSP48831.2020.9287124
  28. Qiao, J., Yang, G., Yu, Q., and Zhao, H. (2010). China Music (华乐大典), volume Erhu. Shanghai Music Press.
  29. Shen, C. (1997). Local style and skills of banhu (板胡的地方风格与技巧). Chinese Music, pages 31–33.
  30. Simonetta, F., Ntalampiras, S., and Avanzini, F. (2019). Multimodal music information processing and retrieval: Survey and future challenges. In 2019 International Workshop on Multilayer Music Representation and Processing (MMRP), pages 10–18. IEEE. DOI: 10.1109/MMRP.2019.00012
  31. Su, L., Lin, H.-M., and Yang, Y.-H. (2014). Sparse modeling of magnitude and phase-derived spectra for playing technique classification. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(12):2122–2132. DOI: 10.1109/TASLP.2014.2362006
  32. Subramani, K. and Rao, P. (2020). HpRNet: Incorporating residual noise modeling for violin in a variational parametric synthesizer.
  33. Thickstun, J., Harchaoui, Z., and Kakade, S. (2016). Learning features of music from scratch. arXiv preprint arXiv:1611.09827.
  34. Thomas, V., Fremerey, C., Müller, M., and Clausen, M. (2012). Linking sheet music and audio: Challenges and new approaches.
  35. Tsou, J. (2001). The New Grove Dictionary of Music and Musicians.
  36. Volpe, G., Kolykhalova, K., Volta, E., Ghisio, S., Waddell, G., Alborno, P., Piana, S., Canepa, C., and Ramirez-Melendez, R. (2017). A multimodal corpus for technology-enhanced learning of violin playing. Volume Part F131371. Association for Computing Machinery. DOI: 10.1145/3125571.3125588
  37. von Coler, H. (2018). TU-Note Violin Sample Library: A database of violin sounds with segmentation ground truth. In Proceedings of the 21st International Conference on Digital Audio Effects (DAFx-18), Aveiro, Portugal, pages 4–8.
  38. von Coler, H. and Lerch, A. (2014). CMMSD: A data set for note-level segmentation of monophonic music. In Audio Engineering Society Conference: 53rd International Conference: Semantic Audio. Audio Engineering Society.
  39. Wang, C., Lostanlen, V., Benetos, E., and Chew, E. (2020). Playing technique recognition by joint time–frequency scattering. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 881–885. IEEE. DOI: 10.1109/ICASSP40776.2020.9053474
  40. Wang, Z., Li, J., Chen, X., Li, Z., Zhang, S., Han, B., and Yang, D. (2019). Musical instrument playing technique detection based on FCN: Using Chinese bowed-stringed instrument as an example. arXiv preprint arXiv:1910.09021.
  41. Yang, L. (2016). Computational modelling and analysis of vibrato and portamento in expressive music performance. PhD thesis, Queen Mary University of London.
  42. Zeng, M. (2006). Bowing and vibrato on the erhu. Master’s thesis, Shanghai Conservatory of Music.
  43. Zhang, F., Bazarevsky, V., Vakunov, A., Tkachenka, A., Sung, G., Chang, C., and Grundmann, M. (2020). MediaPipe Hands: On-device real-time hand tracking. CoRR, abs/2006.10214.
  44. Zhang, W., Lei, W., Xu, X., and Xing, X. (2016). Improved music genre classification with convolutional neural networks. In Interspeech, pages 3304–3308. DOI: 10.21437/Interspeech.2016-1236
  45. Zhao, H. (1999). The usage of portamento techniques in erhu performance (二胡演奏中滑音技法的运用). Journal of the Central Conservatory of Music, pages 53–57.
  46. Zhu, H., Li, Y., Zhu, F., Zheng, A., and He, R. (2021). Let’s play music: Audio-driven performance video generation. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 3574–3581. IEEE. DOI: 10.1109/ICPR48806.2021.9412698
DOI: https://doi.org/10.5334/tismir.146 | Journal eISSN: 2514-3298
Language: English
Submitted on: Aug 10, 2022
Accepted on: Mar 11, 2023
Published on: Jul 12, 2023
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2023 Yu Zhang, Ziya Zhou, Xiaobing Li, Feng Yu, Maosong Sun, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.