References
- 1Abeßer, J., Cano, E., Frieler, K., & Pfleiderer, M. (2014a). Dynamics in jazz improvisation: Score-informed estimation and contextual analysis of tone intensities in trumpet and saxophone solos. In 9th Conference on Interdisciplinary Musicology (CIM14), Berlin, Germany.
- 2Abeßer, J., Cano, E., Frieler, K., & Zaddach, W.-G. (2015). Score-informed analysis of intonation and pitch modulation in jazz solos. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Malaga, Spain.
- 3Abeßer, J., Frieler, K., Cano, E., Pfleiderer, M., & Zaddach, W.-G. (2017). Score-informed analysis of tuning, intonation, pitch modulation, and dynamics in jazz solos. IEEE Transactions on Audio, Speech and Language Processing, 25(1), 168–177. DOI: 10.1109/TASLP.2016.2627186
- 4Abeßer, J., Hasselhorn, J., Grollmisch, S., Dittmar, C., & Lehmann, A. (2014b). Automatic competency assessment of rhythm performances of ninth-grade and tenth-grade pupils. In Proceedings of the International Computer Music Conference (ICMC), Athens, Greece.
- 5Abeßer, J., Lukashevich, H., & Schuller, G. (2010). Feature-based extraction of plucking and expression styles of the electric bass guitar. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2290–2293, Dallas. DOI: 10.1109/ICASSP.2010.5495945
- 6Abeßer, J., Pfleiderer, M., Frieler, K., & Zaddach, W.-G. (2014c). Score-informed tracking and contextual analysis of fundamental frequency contours in trumpet and saxophone jazz solos. In Proceedings of the International Conference on Digital Audio Effects (DAFx), Erlangen, Germany.
- 7Arzt, A., & Widmer, G. (2015). Real-time music tracking using multiple performances as a reference. In Müller, M., & Wiering, F., editors, Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 357–363, Malaga, Spain.
- 8Ashley, R. (2002). Do[n’t] change a hair for me: The art of jazz rubato. Music Perception: An Interdisciplinary Journal, 19(3), 311–332. DOI: 10.1525/mp.2002.19.3.311
- 9Atli, H. S., Bozkurt, B., & Sentürk, S. (2015). A method for tonic frequency identification of Turkish makam music recordings. In Proceedings of the 5th International Workshop on Folk Music Analysis (FMA), Paris, France. Association Dirac. DOI: 10.1109/SIU.2015.7130148
- 10Aucouturier, J.-J., & Bigand, E. (2012). Mel Cepstrum & Ann Ova: The difficult dialog between MIR and music cognition. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 397–402, Porto, Portugal.
- 11Bantula, H., Giraldo, S. I., & Ramirez, R. (2016). Jazz ensemble expressive performance modeling. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), New York.
- 12Battcock, A., & Schutz, M. (2019). Acoustically expressing affect. Music Perception, 37(1), 66–91. DOI: 10.1525/mp.2019.37.1.66
- 13Bayle, Y., Maršík, L., Rusek, M., Robine, M., Hanna, P., Slaninová, K., Martinovic, J., & Pokorný, J. (2017). Kara1k: A karaoke dataset for cover song identification and singing voice analysis. In International Symposium on Multimedia (ISM), pages 177–184, Taichung, Taiwan. IEEE. DOI: 10.1109/ISM.2017.32
- 14Behne, K.-E., & Wetekam, B. (1993). Musikpsychologische Interpretationsforschung: Individualität und Intention. Musikpsychologie. Jahrbuch der Deutschen Gesellschaft für Musikpsychologie, 10, 24–37.
- 15Bektaş, T. (2005). Relationships between prosodic and musical meters in the beste form of classical Turkish music. Asian Music, 36(1), 1–26. DOI: 10.1353/amu.2005.0003
- 16Benetos, E., Dixon, S., Giannoulis, D., Kirchhoff, H., & Klapuri, A. (2013). Automatic music transcription: Challenges and future directions. Journal of Intelligent Information Systems, 41(3), 407–434. DOI: 10.1007/s10844-013-0258-3
- 17Bergeron, V., & Lopes, D. M. (2009). Hearing and seeing musical expression. Philosophy and Phenomenological Research, 78(1), 1–16. DOI: 10.1111/j.1933-1592.2008.00230.x
- 18Bishop, L., & Goebl, W. (2018). Performers and an active audience: Movement in music production and perception. Jahrbuch Musikpsychologie, 28, 1–17. DOI: 10.5964/jbdgm.2018v28.19
- 19Bowman Macleod, R. (2006). Influences of Dynamic Level and Pitch Height on the Vibrato Rates and Widths of Violin and Viola Players. Dissertation, Florida State University, College of Music, Tallahassee, FL.
- 20Bozkurt, B., Ayangil, R., & Holzapfel, A. (2014). Computational analysis of Turkish makam music: Review of state-of-the-art and challenges. Journal of New Music Research, 43(1), 3–23. DOI: 10.1080/09298215.2013.865760
- 21Bozkurt, B., Baysal, O., & Yüret, D. (2017). A dataset and baseline system for singing voice assessment. In Proceedings of the International Symposium on Computer Music Modeling and Retrieval (CMMR), Matosinhos.
- 22Bresin, R., & Friberg, A. (2013). Evaluation of computer systems for expressive music performance. In Kirke, A., & Miranda, E. R., editors, Guide to Computing for Expressive Music Performance, pages 181–203. Springer, London. DOI: 10.1007/978-1-4471-4123-5_7
- 23Broze, G. J., III (2013). Animacy, Anthropomimesis, and Musical Line. PhD Thesis, The Ohio State University.
- 24Busse, W. G. (2002). Toward objective measurement and evaluation of jazz piano performance via MIDI-based groove quantize templates. Music Perception: An Interdisciplinary Journal, 19(3), 443–461. DOI: 10.1525/mp.2002.19.3.443
- 25Cancino-Chacón, C. E., Gadermaier, T., Widmer, G., & Grachten, M. (2017). An evaluation of linear and non-linear models of expressive dynamics in classical piano and symphonic music. Machine Learning, 106(6), 887–909. DOI: 10.1007/s10994-017-5631-y
- 26Cancino-Chacón, C. E., & Grachten, M. (2016). The Basis Mixer: A computational Romantic pianist. In Late Breaking Demo (Extended Abstract), International Society for Music Information Retrieval Conference (ISMIR), New York.
- 27Cancino-Chacón, C. E., Grachten, M., Goebl, W., & Widmer, G. (2018). Computational models of expressive music performance: A comprehensive and critical review. Frontiers in Digital Humanities, 5. DOI: 10.3389/fdigh.2018.00025
- 28Caro Repetto, R., Gong, R., Kroher, N., & Serra, X. (2015). Comparison of the singing style of two Jingju schools. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Malaga, Spain.
- 29Chen, K. (2013). Characterization of pitch intonation of Beijing opera. Master’s thesis, Universitat Pompeu Fabra, Barcelona.
- 30Cheng, E., & Chew, E. (2008). Quantitative analysis of phrasing strategies in expressive performance: Computational methods and analysis of performances of unaccompanied Bach for solo violin. Journal of New Music Research, 37(4), 325–338. DOI: 10.1080/09298210802711660
- 31Chew, E. (2016). Playing with the edge: Tipping points and the role of tonality. Music Perception, 33(3), 344–366. DOI: 10.1525/mp.2016.33.3.344
- 32Choi, K., Fazekas, G., Sandler, M. B., & Cho, K. (2017). Transfer learning for music classification and regression tasks. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 141–149, Suzhou, China.
- 33Chuan, C.-H., & Chew, E. (2007). A dynamic programming approach to the extraction of phrase boundaries from tempo variations in expressive performances. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), Vienna, Austria.
- 34Clarke, E. F. (1993). Imitating and evaluating real and transformed musical performances. Music Perception: An Interdisciplinary Journal, 10(3), 317–341. DOI: 10.2307/40285573
- 35Clarke, E. F. (1998). Rhythm and timing in music. In The Psychology of Music, pages 473–500. Academic Press, San Diego, 2nd edition. DOI: 10.1016/B978-012213564-4/50014-7
- 36Clarke, E. F. (2002a). Listening to performance. In Rink, J., editor, Musical Performance – A Guide to Understanding. Cambridge University Press, Cambridge. DOI: 10.1017/CBO9780511811739.014
- 37Clarke, E. F. (2002b). Understanding the psychology of performance. In Rink, J., editor, Musical Performance – A Guide to Understanding. Cambridge University Press, Cambridge. DOI: 10.1017/CBO9780511811739.005
- 38Clayton, M. (2008). Time in Indian Music: Rhythm, Metre, and Form in North Indian Rag Performance. Oxford University Press, New York.
- 39Collier, G. L., & Collier, J. L. (1994). An exploration of the use of tempo in jazz. Music Perception: An Interdisciplinary Journal, 11(3), 219–242. DOI: 10.2307/40285621
- 40Collier, G. L., & Collier, J. L. (2002). A study of timing in two Louis Armstrong solos. Music Perception: An Interdisciplinary Journal, 19(3), 463–483. DOI: 10.1525/mp.2002.19.3.463
- 41Cuesta, H., Gómez Gutiérrez, E., Martorell Domínguez, A., & Loáiciga, F. (2018). Analysis of intonation in unison choir singing. In Proceedings of the International Conference on Music Perception and Cognition (ICMPC), Graz, Austria.
- 42Dalla Bella, S., & Palmer, C. (2004). Tempo and dynamics in piano performance: The role of movement amplitude. In Proceedings of the 8th International Conference on Music Perception & Cognition (ICMPC), Evanston.
- 43Davies, M., Madison, G., Silva, P., & Gouyon, F. (2013). The effect of microtiming deviations on the perception of groove in short rhythms. Music Perception: An Interdisciplinary Journal, 30(5), 497–510. DOI: 10.1525/mp.2013.30.5.497
- 44De Poli, G., Canazza, S., Rodà, A., & Schubert, E. (2014). The role of individual difference in judging expressiveness of computer-assisted music performances by experts. ACM Transactions on Applied Perception, 11(4), 22:1–22:20. DOI: 10.1145/2668124
- 45Dean, R. T., Bailes, F., & Drummond, J. (2014). Generative structures in improvisation: Computational segmentation of keyboard performances. Journal of New Music Research, 43(2), 224–236. DOI: 10.1080/09298215.2013.859710
- 46Devaney, J. (2016). Inter- versus intra-singer similarity and variation in vocal performances. Journal of New Music Research, 45(3), 252–264. DOI: 10.1080/09298215.2016.1205631
- 47Devaney, J., Mandel, M. I., Ellis, D. P. W., & Fujinaga, I. (2011). Automatically extracting performance data from recordings of trained singers. Psychomusicology: Music, Mind and Brain, 21(1–2), 108–136. DOI: 10.1037/h0094008
- 48Devaney, J., Mandel, M. I., & Fujinaga, I. (2012). A study of intonation in three-part singing using the Automatic Music Performance Analysis and Comparison Toolkit (AMPACT). In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Porto, Portugal.
- 49Dibben, N. (2014). Understanding performance expression in popular music recordings. In Fabian, D., Timmers, R., & Schubert, E., editors, Expressiveness in Music Performance: Empirical Approaches Across Styles and Cultures. Oxford University Press. DOI: 10.1093/acprof:oso/9780199659647.003.0007
- 50Dillon, R. (2001). Extracting audio cues in real time to understand musical expressiveness. In Proceedings of the MOSART workshop, Barcelona.
- 51Dillon, R. (2003). A statistical approach to expressive intention recognition in violin performances. In Proceedings of the Stockholm Music Acoustics Conference (SMAC), Stockholm.
- 52Dillon, R. (2004). On the Recognition of Expressive Intentions in Music Playing: A Computational Approach with Experiments and Applications. Dissertation, University of Genoa, Faculty of Engineering, Genoa.
- 53Dimov, T. (2010). Short Historical Overview and Comparison of the Pitch Width and Speed Rates of the Vibrato Used in Sonatas and Partitas for Solo Violin by Johann Sebastian Bach as Found in Recordings of Famous Violinists of the Twentieth and the Twenty-First Centuries. D.M.A., West Virginia University, United States.
- 54Dittmar, C., Pfleiderer, M., Balke, S., & Müller, M. (2018). A swingogram representation for tracking micro-rhythmic variation in jazz performances. Journal of New Music Research, 47(2), 97–113. DOI: 10.1080/09298215.2017.1367405
- 55Dixon, S., Goebl, W., & Cambouropoulos, E. (2006). Perceptual smoothness of tempo in expressively performed music. Music Perception, 23(3), 195–214. DOI: 10.1525/mp.2006.23.3.195
- 56Dixon, S., Goebl, W., & Widmer, G. (2002). The Performance Worm: Real time visualisation of expression based on Langner’s tempo loudness animation. In Proceedings of the International Computer Music Conference (ICMC), Göteborg.
- 57Ellis, M. C. (1991). An analysis of “swing” subdivision and asynchronization in three jazz saxophonists. Perceptual and Motor Skills, 73(3), 707–713. DOI: 10.2466/pms.1991.73.3.707
- 58Eremenko, V., Morsi, A., Narang, J., & Serra, X. (2020). Performance assessment technologies for the support of musical instrument learning. In Proceedings of the International Conference on Computer Supported Education (CSEDU), Prague. DOI: 10.5220/0009817006290640
- 59Fabian, D., & Schubert, E. (2008). Musical character and the performance and perception of dotting, articulation and tempo in 34 recordings of Variation 7 from J.S. Bach’s Goldberg Variations (BWV 988). Musicae Scientiae, 12(2), 177–206. DOI: 10.1177/102986490801200201
- 60Fabian, D., & Schubert, E. (2009). Baroque expressiveness and stylishness in three recordings of the D minor Sarabanda for solo violin (BWV 1004) by J. S. Bach. Music Performance Research, 3, 36–55.
- 61Falcao, F., Bozkurt, B., Serra, X., Andrade, N., & Baysal, O. (2019). A dataset of rhythmic pattern reproductions and baseline automatic assessment system. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Delft, The Netherlands.
- 62Farbood, M. M., & Upham, F. (2013). Interpreting expressive performance through listener judgments of musical tension. Frontiers in Psychology, 4. DOI: 10.3389/fpsyg.2013.00998
- 63Finney, S. A., & Palmer, C. (2003). Auditory feedback and memory for music performance: Some evidence for an encoding effect. Memory & Cognition, 31(1), 51–64. DOI: 10.3758/BF03196082
- 64Franz, D. M. (1998). Markov chains as tools for jazz improvisation analysis. Master’s thesis, Virginia Tech.
- 65Friberg, A., & Sundberg, J. (1999). Does music performance allude to locomotion? A model of final ritardandi derived from measurements of stopping runners. The Journal of the Acoustical Society of America, 105(3), 1469–1484. DOI: 10.1121/1.426687
- 66Friberg, A., & Sundström, A. (2002). Swing ratios and ensemble timing in jazz performance: Evidence for a common rhythmic pattern. Music Perception: An Interdisciplinary Journal, 19(3), 333–349. DOI: 10.1525/mp.2002.19.3.333
- 67Frieler, K., Pfleiderer, M., Zaddach, W.-G., & Abeßer, J. (2016). Midlevel analysis of monophonic jazz solos: A new approach to the study of improvisation. Musicae Scientiae, 20(2), 143–162. DOI: 10.1177/1029864916636440
- 68Frühauf, J., Kopiez, R., & Platz, F. (2013). Music on the timing grid: The influence of microtiming on the perceived groove quality of a simple drum pattern performance. Musicae Scientiae, 17(2), 246–260. DOI: 10.1177/1029864913486793
- 69Fu, Z., Lu, G., Ting, K. M., & Zhang, D. (2011). A survey of audio-based music classification and annotation. IEEE Transactions on Multimedia, 13(2), 303–319. DOI: 10.1109/TMM.2010.2098858
- 70Gabrielsson, A. (1987). Once again: The theme from Mozart’s Piano Sonata in A Major (K. 331): A comparison of five performances. In Gabrielsson, A., editor, Action and Perception in Rhythm and Music, pages 81–103. Royal Swedish Academy of Music, No. 55, Stockholm.
- 71Gabrielsson, A. (1999). The performance of music. In Deutsch, D., editor, The Psychology of Music. Academic Press, San Diego, 2nd edition. DOI: 10.1016/B978-012213564-4/50015-9
- 72Gabrielsson, A. (2003). Music performance research at the millennium. Psychology of Music, 31(3), 221–272. DOI: 10.1177/03057356030313002
- 73Gadermaier, T., & Widmer, G. (2019). A study of annotation and alignment accuracy for performance comparison in complex orchestral music. In Flexer, A., Peeters, G., Urbano, J., & Volk, A., editors, Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 769–775, Delft, Netherlands.
- 74Ganguli, K. K., & Rao, P. (2017). Towards computational modeling of the ungrammatical in a raga performance. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Suzhou, China.
- 75Gasser, M., Arzt, A., Gadermaier, T., Grachten, M., & Widmer, G. (2015). Classical music on the web: User interfaces and data representations. In Müller, M., & Wiering, F., editors, Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 571–577, Malaga, Spain.
- 76Geringer, J. M. (1995). Continuous loudness judgments of dynamics in recorded music excerpts. Journal of Research in Music Education, 43(1), 22–35. DOI: 10.2307/3345789
- 77Gillick, J., Roberts, A., Engel, J., Eck, D., & Bamman, D. (2019). Learning to groove with inverse sequence transformations. In Proceedings of the International Conference on Machine Learning (ICML), pages 2269–2279.
- 78Gingras, B., Pearce, M. T., Goodchild, M., Dean, R. T., Wiggins, G., & McAdams, S. (2016). Linking melodic expectation to expressive performance timing and perceived musical tension. Journal of Experimental Psychology: Human Perception and Performance, 42(4), 594. DOI: 10.1037/xhp0000141
- 79Gjerdingen, R. O. (1988). Shape and motion in the microstructure of song. Music Perception: An Interdisciplinary Journal, 6(1), 35–64. DOI: 10.2307/40285415
- 80Goebl, W. (1999). Numerisch-klassifikatorische Interpretationsanalyse mit dem ‘Bösendorfer Computerflügel’. Diploma Thesis, Universität Wien, Vienna, Austria.
- 81Goebl, W. (2001). Melody lead in piano performance: Expressive device or artifact? The Journal of the Acoustical Society of America, 110(1), 563–572. DOI: 10.1121/1.1376133
- 82Goebl, W., Dixon, S., De Poli, G., Friberg, A., Bresin, R., & Widmer, G. (2005). ‘Sense’ in expressive music performance: Data acquisition, computational studies, and models. In Leman, M., & Cirotteau, D., editors, Sound to Sense, Sense to Sound: A State-of-the-Art. Logos, Berlin.
- 83Goebl, W., & Palmer, C. (2008). Tactile feedback and timing accuracy in piano performance. Experimental Brain Research, 186(3), 471–479. DOI: 10.1007/s00221-007-1252-1
- 84Gong, R. (2018). Automatic Assessment of Singing Voice Pronunciation: A Case Study with Jingju Music. PhD Thesis, Universitat Pompeu Fabra, Barcelona, Spain.
- 85Gong, R., Yang, Y., & Serra, X. (2016). Pitch contour segmentation for computer-aided Jingju singing training. In Proceedings of the Sound and Music Computing Conference (SMC), Hamburg, Germany.
- 86Grachten, M., & Cancino-Chacón, C. E. (2017). Temporal dependencies in the expressive timing of classical piano performances. In The Routledge Companion to Embodied Music Interaction, pages 360–369. Routledge, New York. DOI: 10.4324/9781315621364-40
- 87Grachten, M., & Widmer, G. (2012). Linear basis models for prediction and analysis of musical expression. Journal of New Music Research, 41(4), 311–322. DOI: 10.1080/09298215.2012.731071
- 88Gupta, C., & Rao, P. (2012). Objective assessment of ornamentation in Indian classical singing. In Ystad, S., Aramaki, M., Kronland-Martinet, R., Jensen, K., & Mohanty, S., editors, Speech, Sound and Music Processing: Embracing Research in India, Lecture Notes in Computer Science, pages 1–25. Springer, Berlin, Heidelberg.
- 89Gururani, S., Pati, K. A., Wu, C.-W., & Lerch, A. (2018). Analysis of objective descriptors for music performance assessment. In Proceedings of the International Conference on Music Perception and Cognition (ICMPC), Toronto, Ontario, Canada.
- 90Hakan, T., Serra, X., & Arcos, J. L. (2012). Characterization of embellishments in ney performances of makam music in Turkey. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Porto, Portugal.
- 91Han, Y., & Lee, K. (2014). Hierarchical approach to detect common mistakes of beginner flute players. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 77–82, Taipei, Taiwan.
- 92Hartmann, A. (1932). Untersuchungen über das metrische Verhalten in musikalischen Interpretationsvarianten. Archiv für die gesamte Psychologie, 84, 103–192.
- 93Hashida, M., Matsui, T., & Katayose, H. (2008). A new music database describing deviation information of performance expressions. In Bello, J. P., Chew, E., & Turnbull, D., editors, Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 489–494, Philadelphia, PA.
- 94Hawthorne, C., Stasyuk, A., Roberts, A., Simon, I., Huang, C.-Z. A., Dieleman, S., Elsen, E., Engel, J., & Eck, D. (2019). Enabling factorized piano music modeling and generation with the MAESTRO Dataset. In Proceedings of the International Conference on Learning Representations (ICLR). arXiv: 1810.12247.
- 95Henkel, F., Balke, S., Dorfer, M., & Widmer, G. (2019). Score following as a multi-modal reinforcement learning problem. Transactions of the International Society for Music Information Retrieval, 2(1), 67–81. DOI: 10.5334/tismir.31
- 96Hill, P. (2002). From score to sound. In Rink, J., editor, Musical Performance – A Guide to Understanding. Cambridge University Press, Cambridge. DOI: 10.1017/CBO9780511811739.010
- 97Howes, P., Callaghan, J., Davis, P., Kenny, D., & Thorpe, W. (2004). The relationship between measured vibrato characteristics and perception in Western operatic singing. Journal of Voice, 18(2), 216–230. DOI: 10.1016/j.jvoice.2003.09.003
- 98Huang, J., & Krumhansl, C. L. (2011). What does seeing the performer add? It depends on musical style, amount of stage behavior, and audience expertise. Musicae Scientiae, 15(3), 343–364. DOI: 10.1177/1029864911414172
- 99Huang, J., & Lerch, A. (2019). Automatic assessment of sight-reading exercises. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Delft, Netherlands.
- 100Huron, D. (2001). Tone and voice: A derivation of the rules of voice-leading from perceptual principles. Music Perception, 19(1), 1–64. DOI: 10.1525/mp.2001.19.1.1
- 101Huron, D. (2015). Affect induction through musical sounds: An ethological perspective. Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1664), 20140098. DOI: 10.1098/rstb.2014.0098
- 102Iyer, V. (2002). Embodied mind, situated cognition, and expressive microtiming in African-American music. Music Perception: An Interdisciplinary Journal, 19(3), 387–414. DOI: 10.1525/mp.2002.19.3.387
- 103Järvinen, T. (1995). Tonal hierarchies in jazz improvisation. Music Perception: An Interdisciplinary Journal, 12(4), 415–437. DOI: 10.2307/40285675
- 104Järvinen, T., & Toiviainen, P. (2000). The effect of metre on the use of tones in jazz improvisation. Musicae Scientiae, 4(1), 55–74. DOI: 10.1177/102986490000400103
- 105Jeong, D., Kwon, T., Kim, Y., Lee, K., & Nam, J. (2019a). VirtuosoNet: A hierarchical RNN-based system for modeling expressive piano performance. In Flexer, A., Peeters, G., Urbano, J., & Volk, A., editors, Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 908–915, Delft, Netherlands.
- 106Jeong, D., Kwon, T., Kim, Y., & Nam, J. (2019b). Graph neural network for music score data and modeling expressive piano performance. In International Conference on Machine Learning, pages 3060–3070.
- 107Juchniewicz, J. (2008). The influence of physical movement on the perception of musical performance. Psychology of Music, 36(4), 417–427. DOI: 10.1177/0305735607086046
- 108Jure, L., Lopez, E., Rocamora, M., Cancela, P., Sponton, H., & Irigaray, I. (2012). Pitch content visualization tools for music performance analysis. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Porto.
- 109Juslin, P. N. (2000). Cue utilization in communication of emotion in music performance: Relating performance to perception. Journal of Experimental Psychology, 26(6), 1797–1813. DOI: 10.1037/0096-1523.26.6.1797
- 110Juslin, P. N. (2003). Five myths about expressivity in music performance and what to do about them. In Proceedings of the International Conference on Arts and Humanities, Honolulu, Hawaii.
- 111Juslin, P. N., & Laukka, P. (2003). Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin, 129(5), 770–814. DOI: 10.1037/0033-2909.129.5.770
- 112Katz, M. (2004). Capturing Sound: How Technology has Changed Music. University of California Press, Berkeley and Los Angeles.
- 113Kawase, S. (2014). Importance of communication cues in music performance according to performers and audience. International Journal of Psychological Studies, 6(2), 49. DOI: 10.5539/ijps.v6n2p49
- 114Kehling, C., Abeßer, J., Dittmar, C., & Schuller, G. (2014). Automatic tablature transcription of electric guitar recordings by estimation of score- and instrument-related parameters. In Proceedings of the International Conference on Digital Audio Effects (DAFx), Erlangen, Germany.
- 115Kendall, R. A., & Carterette, E. C. (1990). The communication of musical expression. Music Perception, 8(2), 129–164. DOI: 10.2307/40285493
- 116Kirke, A., & Miranda, E. R., editors (2013). Guide to Computing for Expressive Music Performance. Springer Science & Business Media, London. DOI: 10.1007/978-1-4471-4123-5
- 117Knight, T., Upham, F., & Fujinaga, I. (2011). The potential for automatic assessment of trumpet tone quality. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 573–578, Miami, FL.
- 118Kosta, K., Bandtlow, O. F., & Chew, E. (2015). A change-point approach towards representing musical dynamics. In Collins, T., Meredith, D., & Volk, A., editors, Mathematics and Computation in Music, Lecture Notes in Computer Science, pages 179–184, Cham. Springer International Publishing. DOI: 10.1007/978-3-319-20603-5_18
- 119Kosta, K., Bandtlow, O. F., & Chew, E. (2018). Dynamics and relativity: Practical implications of dynamic markings in the score. Journal of New Music Research, 47(5), 438–461. DOI: 10.1080/09298215.2018.1486430
- 120Kosta, K., Ramirez, R., Bandtlow, O., & Chew, E. (2016). Mapping between dynamic markings and performed loudness: A machine learning approach. Journal of Mathematics and Music, 10(2), 149–172. DOI: 10.1080/17459737.2016.1193237
- 121Krumhansl, C. L. (1996). A perceptual analysis of Mozart’s Piano Sonata K. 282: Segmentation, tension, and musical ideas. Music Perception: An Interdisciplinary Journal, 13(3), 401–432. DOI: 10.2307/40286177
- 122Langner, J., & Goebl, W. (2002). Representing expressive performance in tempo-loudness space. In Proceedings of the Conference European Society for the Cognitive Sciences of Music (ESCOM), Liege.
- 123Leman, M., & Maes, P.-J. (2014). The role of embodiment in the perception of music. Empirical Musicology Review, 9(3–4), 236–246. DOI: 10.18061/emr.v9i3-4.4498
- 124Lerch, A. (2009). Software-Based Extraction of Objective Parameters from Music Performances. GRIN Verlag, München.
- 125Lerch, A. (2012). An Introduction to Audio Content Analysis: Applications in Signal Processing and Music Informatics. Wiley-IEEE Press, Hoboken. DOI: 10.1002/9781118393550
- 126Lerch, A., Arthur, C., Pati, A., & Gururani, S. (2019). Music performance analysis: A survey. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Delft.
- 127Li, B., Liu, X., Dinesh, K., Duan, Z., & Sharma, G. (2019). Creating a multitrack classical music performance dataset for multimodal music analysis: Challenges, insights, and applications. IEEE Transactions on Multimedia, 21(2), 522–535. DOI: 10.1109/TMM.2018.2856090
- 128Li, P.-C., Su, L., Yang, Y.-H., & Su, A. W. Y. (2015). Analysis of expressive musical terms in violin using score-informed and expression-based audio features. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 809–815, Malaga, Spain.
- 129Li, S., Dixon, S., & Plumbley, M. D. (2017). Clustering expressive timing with regressed polynomial coefficients demonstrated by a model selection test. In Cunningham, S. J., Duan, Z., Hu, X., & Turnbull, D., editors, Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 457–463, Suzhou, China.
- 130Liem, C. C., & Hanjalic, A. (2011). Expressive timing from cross-performance and audio-based alignment patterns: An extended case study. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Miami, FL.
- 131Liem, C. C., & Hanjalic, A. (2015). Comparative analysis of orchestral performance recordings: An image-based approach. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Malaga, Spain.
- 132Livingstone, S. R., Thompson, W. F., & Russo, F. A. (2009). Facial expressions and emotional singing: A study of perception and production with motion capture and electromyography. Music Perception, 26(5), 475–488. DOI: 10.1525/mp.2009.26.5.475
- 133Luizard, P., Brauer, E., & Weinzierl, S. (2019). Singing in physical and virtual environments: How performers adapt to room acoustical conditions. In Proceedings of the AES Conference on Immersive and Interactive Audio, York. AES.
- 134Luizard, P., Steffens, J., & Weinzierl, S. (2020). Singing in different rooms: Common or individual adaptation patterns to the acoustic conditions? The Journal of the Acoustical Society of America, 147(2), EL132–EL137. DOI: 10.1121/10.0000715
- 135Luo, Y.-J. (2015). Detection of common mistakes in novice violin playing. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 316–322, Malaga, Spain.
- 136Maempel, H.-J. (2011). Musikaufnahmen als Datenquellen der Interpretationsanalyse. In von Lösch, H., & Weinzierl, S., editors, Gemessene Interpretation – Computergestützte Aufführungsanalyse im Kreuzverhör der Disziplinen, Klang und Begriff, pages 157–171. Schott, Mainz.
- 137Maezawa, A., Yamamoto, K., & Fujishima, T. (2019). Rendering music performance with interpretation variations using conditional variational RNN. In Flexer, A., Peeters, G., Urbano, J., & Volk, A., editors, Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 855–861, Delft, Netherlands.
- 138Malik, I., & Ek, C. H. (2017). Neural translation of musical style. arXiv:1708.03535 [cs].
- 139Marchini, M., Ramirez, R., Papiotis, P., & Maestre, E. (2014). The sense of ensemble: A machine learning approach to expressive performance modeling in string quartets. Journal of New Music Research, 43(3), 303–317. DOI: 10.1080/09298215.2014.922999
- 140Mayor, O., Bonada, J., & Loscos, A. (2009). Performance analysis and scoring of the singing voice. In Proceedings of the Audio Engineering Society Convention, pages 1–7.
- 141McFee, B., Humphrey, E. J., & Bello, J. P. (2015). A software framework for musical data augmentation. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Malaga, Spain.
- 142McNeil, A. (2017). Seed ideas and creativity in Hindustani Raga music: Beyond the composition-improvisation dialectic. Ethnomusicology Forum, 26(1), 116–132. DOI: 10.1080/17411912.2017.1304230
- 143McPherson, G. E., & Thompson, W. F. (1998). Assessing music performance: Issues and influences. Research Studies in Music Education, 10(1), 12–24. DOI: 10.1177/1321103X9801000102
- 144Molina, E., Barbancho, I., Gómez, E., Barbancho, A. M., & Tardón, L. J. (2013). Fundamental frequency alignment vs. note-based melodic similarity for singing voice assessment. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 744–748, Vancouver, Canada. DOI: 10.1109/ICASSP.2013.6637747
- 145Müller, M., Konz, V., Bogler, W., & Arifi-Müller, V. (2011). Saarland Music Data (SMD). In Late Breaking Demo (Extended Abstract), International Society for Music Information Retrieval Conference (ISMIR), Miami, FL.
- 146Nakamura, T. (1987). The communication of dynamics between musicians and listeners through musical performance. Perception & Psychophysics, 41(6), 525–533. DOI: 10.3758/BF03210487
- 147Nakano, T., Goto, M., & Hiraga, Y. (2006). An automatic singing skill evaluation method for unknown melodies using pitch interval accuracy and vibrato features. In Proceedings of the International Conference on Spoken Language Processing (INTERSPEECH), volume 12, Pittsburgh, PA.
- 148Narang, K., & Rao, P. (2017). Acoustic features for determining goodness of tabla strokes. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 257–263, Suzhou, China.
- 149Ohriner, M. S. (2012). Grouping hierarchy and trajectories of pacing in performances of Chopin’s Mazurkas. Music Theory Online, 18(1). DOI: 10.30535/mto.18.1.6
- 150Okumura, K., Sako, S., & Kitamura, T. (2011). Stochastic modeling of a musical performance with expressive representations from the musical score. In Klapuri, A., & Leider, C., editors, Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 531–536, Miami, FL. University of Miami.
- 151Oore, S., Simon, I., Dieleman, S., Eck, D., & Simonyan, K. (2020). This time with feeling: Learning expressive musical performance. Neural Computing and Applications, 32(4), 955–967. DOI: 10.1007/s00521-018-3758-9
- 152Ornoy, E., & Cohen, S. (2018). Analysis of contemporary violin recordings of 19th century repertoire: Identifying trends and impacts. Frontiers in Psychology, 9. DOI: 10.3389/fpsyg.2018.02233
- 153Page, K. R., Nurmikko-Fuller, T., Rindfleisch, C., Weigl, D. M., Lewis, R., Dreyfus, L., & De Roure, D. (2015). A toolkit for live annotation of opera performance: Experiences capturing Wagner’s Ring Cycle. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Malaga, Spain.
- 154Palmer, C. (1989). Mapping musical thought to musical performance. Journal of Experimental Psychology: Human Perception and Performance, 15(2), 331–346. DOI: 10.1037/0096-1523.15.2.331
- 155Palmer, C. (1996). On the assignment of structure in music performance. Music Perception: An Interdisciplinary Journal, 14(1), 23–56. DOI: 10.2307/40285708
- 156Palmer, C. (1997). Music performance. Annual Review of Psychology, 48, 115–138. DOI: 10.1146/annurev.psych.48.1.115
- 157Papiotis, P. (2016). A Computational Approach to Studying Interdependence in String Quartet Performance. PhD thesis, Universitat Pompeu Fabra, Barcelona, Spain.
- 158Pati, K. A., Gururani, S., & Lerch, A. (2018). Assessment of student music performances using deep neural networks. Applied Sciences, 8(4), 507. DOI: 10.3390/app8040507
- 159Peperkamp, J., Hildebrandt, K., & Liem, C. C. (2017). A formalization of relative local tempo variations in collections of performances. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Suzhou.
- 160Pfleiderer, M., Frieler, K., Abeßer, J., Zaddach, W.-G., & Burkhart, B., editors (2017). Inside the Jazzomat: New Perspectives for Jazz Research. Schott Campus.
- 161Pfordresher, P. Q. (2005). Auditory feedback in music performance: The role of melodic structure and musical skill. Journal of Experimental Psychology, 31(6), 1331–1345. DOI: 10.1037/0096-1523.31.6.1331
- 162Pfordresher, P. Q., & Palmer, C. (2002). Effects of delayed auditory feedback on timing of music performance. Psychological Research, 16, 71–79. DOI: 10.1007/s004260100075
- 163Platz, F., & Kopiez, R. (2012). When the eye listens: A meta-analysis of how audio-visual presentation enhances the appreciation of music performance. Music Perception, 30(1), 71–83. DOI: 10.1525/mp.2012.30.1.71
- 164Povel, D.-J. (1977). Temporal structure of performed music: Some preliminary observations. Acta Psychologica, 41, 309–320. DOI: 10.1016/0001-6918(77)90024-5
- 165Prögler, J. A. (1995). Searching for swing: Participatory discrepancies in the jazz rhythm section. Ethnomusicology, 39(1), 21–54. DOI: 10.2307/852199
- 166Ramirez, R., Hazan, A., Maestre, E., & Serra, X. (2008). A genetic rule-based model of expressive performance for jazz saxophone. Computer Music Journal, 32(1), 38–50. DOI: 10.1162/comj.2008.32.1.38
- 167Repp, B. H. (1989). Expressive microstructure in music: A preliminary perceptual assessment of four composers’ “pulses”. Music Perception: An Interdisciplinary Journal, 6(3), 243–273. DOI: 10.2307/40285589
- 168Repp, B. H. (1990). Patterns of expressive timing in performances of a Beethoven minuet by nineteen famous pianists. Journal of the Acoustical Society of America (JASA), 88(2), 622–641. DOI: 10.1121/1.399766
- 169Repp, B. H. (1992). A constraint on the expressive timing of a melodic gesture: Evidence from performance and aesthetic judgment. Music Perception: An Interdisciplinary Journal, 10(2), 221–241. DOI: 10.2307/40285608
- 170Repp, B. H. (1993). Music as motion: A synopsis of Alexander Truslit’s (1938) Gestaltung und Bewegung in der Musik. Psychology of Music, 21(1), 48–72. DOI: 10.1177/030573569302100104
- 171Repp, B. H. (1996a). The art of inaccuracy: Why pianists’ errors are difficult to hear. Music Perception, 14(2), 161–184. DOI: 10.2307/40285716
- 172Repp, B. H. (1996b). The dynamics of expressive piano performance: Schumann’s ‘Träumerei’ revisited. Journal of the Acoustical Society of America (JASA), 100(1), 641–650. DOI: 10.1121/1.415889
- 173Repp, B. H. (1996c). Pedal timing and tempo in expressive piano performance: A preliminary investigation. Psychology of Music, 24(2), 199–221. DOI: 10.1177/0305735696242011
- 174Repp, B. H. (1997a). The aesthetic quality of a quantitatively average music performance: Two preliminary experiments. Music Perception, 14(4), 419–444. DOI: 10.2307/40285732
- 175Repp, B. H. (1997b). The effect of tempo on pedal timing in piano performance. Psychological Research, 60(3), 164–172. DOI: 10.1007/BF00419764
- 176Repp, B. H. (1998a). A microcosm of musical expression. I. Quantitative analysis of pianists’ timing in the initial measures of Chopin’s Etude in E major. Journal of the Acoustical Society of America (JASA), 104(2), 1085–1100. DOI: 10.1121/1.423325
- 177Repp, B. H. (1998b). Obligatory “expectations” of expressive timing induced by perception of musical structure. Psychological Research, 61(1), 33–43. DOI: 10.1007/s004260050011
- 178Repp, B. H. (1999). Effects of auditory feedback deprivation on expressive piano performance. Music Perception, 16(4), 409–438. DOI: 10.2307/40285802
- 179Rink, J. (2003). In respect of performance: The view from musicology. Psychology of Music, 31(3), 303–323. DOI: 10.1177/03057356030313004
- 180Roholt, T. C. (2014). Groove: A Phenomenology of Rhythmic Nuance. Bloomsbury Publishing USA.
- 181Romani Picas, O., Rodriguez, H. P., Dabiri, D., Tokuda, H., Hariya, W., Oishi, K., & Serra, X. (2015). A real-time system for measuring sound goodness in instrumental sounds. In Proceedings of the Audio Engineering Society Convention, volume 138, Warsaw.
- 182Rosenzweig, S., Scherbaum, F., Shugliashvili, D., Arifi-Müller, V., & Müller, M. (2020). Erkomaishvili Dataset: A curated corpus of traditional Georgian vocal music for computational musicology. Transactions of the International Society for Music Information Retrieval, 3(1), 31–41. DOI: 10.5334/tismir.44
- 183Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178. DOI: 10.1037/h0077714
- 184Sapp, C. S. (2007). Comparative analysis of multiple musical performances. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), Vienna, Austria.
- 185Sapp, C. S. (2008). Hybrid numeric/rank similarity metrics for musical performance analysis. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), Philadelphia, PA.
- 186Sarasúa, A., Caramiaux, B., Tanaka, A., & Ortiz, M. (2017). Datasets for the analysis of expressive musical gestures. In Proceedings of the International Conference on Movement Computing (MOCO), pages 1–4, London, UK. Association for Computing Machinery. DOI: 10.1145/3077981.3078032
- 187Schärer Kalkandjiev, Z., & Weinzierl, S. (2013). The influence of room acoustics on solo music performance: An empirical case study. Acta Acustica united with Acustica, 99(3), 433–441. DOI: 10.3813/AAA.918624
- 188Schärer Kalkandjiev, Z., & Weinzierl, S. (2015). The influence of room acoustics on solo music performance: An experimental study. Psychomusicology: Music, Mind, and Brain, 25(3), 195–207. DOI: 10.1037/pmu0000065
- 189Schramm, R., de Souza Nunes, H., & Jung, C. R. (2015). Automatic Solfège assessment. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 183–189, Malaga, Spain.
- 190Schubert, E., Canazza, S., De Poli, G., & Rodà, A. (2017). Algorithms can mimic human piano performance: The deep blues of music. Journal of New Music Research, 46(2), 175–186. DOI: 10.1080/09298215.2016.1264976
- 191Schubert, E., & Fabian, D. (2006). The dimensions of Baroque music performance: A semantic differential study. Psychology of Music, 34(4), 573–587. DOI: 10.1177/0305735606068105
- 192Schubert, E., & Fabian, D. (2014). A taxonomy of listeners’ judgments of expressiveness in music performance. In Fabian, D., Timmers, R., & Schubert, E., editors, Expressiveness in Music Performance: Empirical Approaches Across Styles and Cultures. Oxford University Press. DOI: 10.1093/acprof:oso/9780199659647.003.0016
- 193Seashore, C. E. (1938). Psychology of Music. McGraw-Hill, New York. DOI: 10.2307/3385515
- 194Serra, X. (2014). Creating research corpora for the computational study of music: The case of the CompMusic Project. In Proceedings of the AES International Conference on Semantic Audio, pages 1–9, London, UK. AES.
- 195Shaffer, L. H. (1984). Timing in solo and duet piano performances. The Quarterly Journal of Experimental Psychology, 36A, 577–595. DOI: 10.1080/14640748408402180
- 196Shi, Z., Sapp, C. S., Arul, K., McBride, J., & Smith, J. O. (2019). SUPRA: Digitizing the Stanford University Piano Roll Archive. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 517–523, Delft, The Netherlands.
- 197Siegwart, H., & Scherer, K. R. (1995). Acoustic concomitants of emotional expression in operatic singing: The case of Lucia in Ardi gli incensi. Journal of Voice, 9(3), 249–260. DOI: 10.1016/S0892-1997(05)80232-2
- 198Silvey, B. A. (2012). The role of conductor facial expression in students’ evaluation of ensemble expressivity. Journal of Research in Music Education, 60(4), 419–429. DOI: 10.1177/0022429412462580
- 199Sloboda, J. A. (1982). Music performance. In Deutsch, D., editor, The Psychology of Music. Academic Press, New York. DOI: 10.1016/B978-0-12-213562-0.50020-6
- 200Sloboda, J. A. (1983). The communication of musical metre in piano performance. The Quarterly Journal of Experimental Psychology Section A, 35(2), 377–396. DOI: 10.1080/14640748308402140
- 201Srinivasamurthy, A., Holzapfel, A., Ganguli, K. K., & Serra, X. (2017). Aspects of tempo and rhythmic elaboration in Hindustani music: A corpus study. Frontiers in Digital Humanities, 4. DOI: 10.3389/fdigh.2017.00020
- 202Stowell, D., & Chew, E. (2013). Maximum a posteriori estimation of piecewise arcs in tempo time-series. In Aramaki, M., Barthet, M., Kronland-Martinet, R., & Ystad, S., editors, From Sounds to Music and Emotions, Lecture Notes in Computer Science, pages 387–399, Berlin, Heidelberg. Springer. DOI: 10.1007/978-3-642-41248-6_22
- 203Su, L., Yu, L.-F., & Yang, Y.-H. (2014). Sparse cepstral and phase codes for guitar playing technique classification. In Wang, H.-M., Yang, Y.-H., & Lee, J. H., editors, Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 9–14.
- 204Sulem, A., Bodner, E., & Amir, N. (2019). Perception-based classification of expressive musical terms: Toward a parameterization of musical expressiveness. Music Perception, 37(2), 147–164. DOI: 10.1525/mp.2019.37.2.147
- 205Sundberg, J. (1993). How can music be expressive? Speech Communication, 13(1), 239–253. DOI: 10.1016/0167-6393(93)90075-V
- 206Sundberg, J. (2018). The singing voice. In Frühholz, S., & Belin, P., editors, The Oxford Handbook of Voice Perception. Oxford University Press. DOI: 10.1093/oxfordhb/9780198743187.013.6
- 207Sundberg, J., Lã, F. M. B., & Himonides, E. (2013). Intonation and expressivity: A single case study of classical Western singing. Journal of Voice, 27(3), 391.e1–391.e8. DOI: 10.1016/j.jvoice.2012.11.009
- 208Takeda, H., Nishimoto, T., & Sagayama, S. (2004). Rhythm and tempo recognition of music performance from a probabilistic approach. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), Barcelona, Spain.
- 209Thompson, S., & Williamon, A. (2003). Evaluating evaluation: Musical performance assessment as a research tool. Music Perception: An Interdisciplinary Journal, 21(1), 21–41. DOI: 10.1525/mp.2003.21.1.21
- 210Timmers, R. (2005). Predicting the similarity between expressive performances of music from measurements of tempo and dynamics. The Journal of the Acoustical Society of America, 117(1), 391–399. DOI: 10.1121/1.1835504
- 211Todd, N. P. M. (1992). The dynamics of dynamics: A model of musical expression. Journal of the Acoustical Society of America, 91, 3540–3550. DOI: 10.1121/1.402843
- 212Todd, N. P. M. (1993). Vestibular feedback in musical performance. Music Perception, 10(3), 379–382. DOI: 10.2307/40285575
- 213Todd, N. P. M. (1995). The kinematics of musical expression. Journal of the Acoustical Society of America (JASA), 97(3), 1940–1949. DOI: 10.1121/1.412067
- 214Toyoda, K., Noike, K., & Katayose, H. (2004). Utility system for constructing database of performance deviations. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), Barcelona, Spain.
- 215Tsay, C.-J. (2013). Sight over sound in the judgment of music performance. Proceedings of the National Academy of Sciences, 110(36), 14580–14585. DOI: 10.1073/pnas.1221454110
- 216Tzanetakis, G. (2014). Computational ethnomusicology: A music information retrieval perspective. In Proceedings of the Joint International Computer Music/Sound and Music Computing Conference, Athens.
- 217Van Herwaarden, S., Grachten, M., & De Haas, W. B. (2014). Predicting expressive dynamics in piano performances using neural networks. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Taipei, Taiwan.
- 218van Noorden, L., & Moelants, D. (1999). Resonance in the perception of musical pulse. Journal of New Music Research, 28(1), 43–66. DOI: 10.1076/jnmr.28.1.43.3122
- 219Vidwans, A., Gururani, S., Wu, C.-W., Subramanian, V., Swaminathan, R. V., & Lerch, A. (2017). Objective descriptors for the assessment of student music performances. In Proceedings of the AES Conference on Semantic Audio, Erlangen. Audio Engineering Society (AES).
- 220Vieillard, S., Roy, M., & Peretz, I. (2012). Expressiveness in musical emotions. Psychological Research, 76(5), 641–653. DOI: 10.1007/s00426-011-0361-4
- 221Viraraghavan, V. S., Aravind, R., & Murthy, H. A. (2017). A statistical analysis of gamakas in Carnatic music. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Suzhou, China.
- 222Wager, S., Tzanetakis, G., Sullivan, S., Wang, C.-i., Shimmin, J., Kim, M., & Cook, P. (2019). Intonation: A dataset of quality vocal performances refined by spectral clustering on pitch congruence. In Proceedings of the International Conference on Acoustics Speech and Signal Processing (ICASSP), pages 476–480, Brighton, UK. IEEE. DOI: 10.1109/ICASSP.2019.8683554
- 223Wang, B., & Yang, Y.-H. (2019). PerformanceNet: Score-to-audio music generation with multi-band convolutional residual network. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 1174–1181. DOI: 10.1609/aaai.v33i01.33011174
- 224Wang, C., Benetos, E., Lostanlen, V., & Chew, E. (2019). Adaptive time-frequency scattering for periodic modulation recognition in music signals. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Delft, Netherlands.
- 225Wapnick, J., Campbell, L., Siddell-Strebel, J., & Darrow, A.-A. (2009). Effects of non-musical attributes and excerpt duration on ratings of high-level piano performances. Musicae Scientiae, 13(1), 35–54. DOI: 10.1177/1029864909013001002
- 226Wesolowski, B. C. (2016). Timing deviations in jazz performance: The relationships of selected musical variables on horizontal and vertical timing relations: A case study. Psychology of Music, 44(1), 75–94. DOI: 10.1177/0305735614555790
- 227Wesolowski, B. C., Wind, S. A., & Engelhard, G. (2016). Examining rater precision in music performance assessment: An analysis of rating scale structure using the multifaceted Rasch partial credit model. Music Perception: An Interdisciplinary Journal, 33(5), 662–678. DOI: 10.1525/mp.2016.33.5.662
- 228Widmer, G. (2003). Discovering simple rules in complex data: A meta-learning algorithm and some surprising musical discoveries. Artificial Intelligence, 146(2), 129–148. DOI: 10.1016/S0004-3702(03)00016-X
- 229Widmer, G., & Goebl, W. (2004). Computational models of expressive music performance: The state of the art. Journal of New Music Research, 33(3), 203–216. DOI: 10.1080/0929821042000317804
- 230Wilkins, J., Seetharaman, P., Wahl, A., & Pardo, B. (2018). VocalSet: A singing voice dataset. In Gómez, E., Hu, X., Humphrey, E., & Benetos, E., editors, Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 468–474, Paris, France.
- 231Winters, R. M., Gururani, S., & Lerch, A. (2016). Automatic practice logging: Introduction, dataset & preliminary study. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), New York.
- 232Wolf, A., Kopiez, R., Platz, F., Lin, H.-R., & Mütze, H. (2018). Tendency towards the average? The aesthetic evaluation of a quantitatively average music performance: A successful replication of Repp’s (1997) study. Music Perception, 36(1), 98–108. DOI: 10.1525/mp.2018.36.1.98
- 233Wu, C.-W., Gururani, S., Laguna, C., Pati, A., Vidwans, A., & Lerch, A. (2016). Towards the objective assessment of music performances. In Proceedings of the International Conference on Music Perception and Cognition (ICMPC), pages 99–103, San Francisco.
- 234Wu, C.-W., & Lerch, A. (2016). On drum playing technique detection in polyphonic mixtures. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), New York. ISMIR.
- 235Wu, C.-W., & Lerch, A. (2018a). Assessment of percussive music performances with feature learning. International Journal of Semantic Computing (IJSC), 12(3), 315–333. DOI: 10.1142/S1793351X18400147
- 236Wu, C.-W., & Lerch, A. (2018b). From labeled to unlabeled data – on the data challenge in automatic drum transcription. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Paris.
- 237Wu, C.-W., & Lerch, A. (2018c). Learned features for the assessment of percussive music performances. In Proceedings of the International Conference on Semantic Computing (ICSC), Laguna Hills. IEEE. DOI: 10.1109/ICSC.2018.00022
- 238Xia, G., & Dannenberg, R. (2015). Duet interaction: Learning musicianship for automatic accompaniment. In Proceedings of the International Conference on New Interfaces for Musical Expression, NIME 2015, pages 259–264, Baton Rouge, Louisiana, USA. The School of Music and the Center for Computation and Technology (CCT), Louisiana State University.
- 239Xia, G., Wang, Y., Dannenberg, R. B., & Gordon, G. (2015). Spectral learning for expressive interactive ensemble music performance. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 816–822, Malaga, Spain.
- 240Yang, L., Tian, M., & Chew, E. (2015). Vibrato characteristics and frequency histogram envelopes in Beijing opera singing. In Proceedings of the Fifth International Workshop on Folk Music Analysis (FMA), Paris, France.
- 241Zhang, S., Caro Repetto, R., & Serra, X. (2014). Study of the similarity between linguistic tones and melodic pitch contours in Beijing opera singing. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Taipei, Taiwan.
- 242Zhang, S., Caro Repetto, R., & Serra, X. (2015). Predicting pairwise pitch contour relations based on linguistic tone information in Beijing opera singing. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Malaga, Spain.
- 243Zhang, S., Caro Repetto, R., & Serra, X. (2017). Understanding the expressive functions of Jingju metrical patterns through lyrics text mining. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), Suzhou, China.
