References
- [1] Agres, K. R., Schaefer, R. S., Volk, A., van Hooren, S., Holzapfel, A., Bella, S. D., Müller, M., de Witte, M., Herremans, D., Melendez, R. R., Neerincx, M., Ruiz, S., Meredith, D., Dimitriadis, T., and Magee, W. L. (2021). Music, computing, and health: A roadmap for the current and future roles of music technology for health care and well-being. Music & Science, 4:1–32. DOI: 10.1177/2059204321997709
- [2] Arthur, C. (2021). Vicentino versus Palestrina: A computational investigation of voice leading across changing vocal densities. Journal of New Music Research, 50(1):74–101. DOI: 10.1080/09298215.2021.1877729
- [3] Benetos, E., Dixon, S., Duan, Z., and Ewert, S. (2019). Automatic music transcription: An overview. IEEE Signal Processing Magazine, 36(1):20–30. DOI: 10.1109/MSP.2018.2869928
- [4] Bittner, R. M., McFee, B., Salamon, J., Li, P., and Bello, J. P. (2017). Deep salience representations for F0 tracking in polyphonic music. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 63–70, Suzhou, China.
- [5] Bloom, B. S. and Engelhart, M. D. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. David McKay Company.
- [6] Böck, S., Korzeniowski, F., Schlüter, J., Krebs, F., and Widmer, G. (2016). madmom: A new Python audio and music signal processing library. In Proceedings of the ACM International Conference on Multimedia (ACM-MM), pages 1174–1178, Amsterdam, The Netherlands. DOI: 10.1145/2964284.2973795
- [7] Bogdanov, D., Wack, N., Gómez, E., Gulati, S., Herrera, P., Mayor, O., Roma, G., Salamon, J., Zapata, J. R., and Serra, X. (2013). Essentia: An audio analysis library for music information retrieval. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 493–498, Curitiba, Brazil. DOI: 10.1145/2502081.2502229
- [8] Brown, J. (1991). Calculation of a constant Q spectral transform. Journal of the Acoustical Society of America, 89(1):425–434. DOI: 10.1121/1.400476
- [9] Cannam, C., Landone, C., Sandler, M. B., and Bello, J. P. (2006). The Sonic Visualiser: A visualisation platform for semantic descriptors from musical signals. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), pages 324–327, Victoria, Canada.
- [10] Cañón, J. S. G., Cano, E., Eerola, T., Herrera, P., Hu, X., Yang, Y., and Gómez, E. (2021). Music emotion recognition: Toward new, robust standards in personalized and context-sensitive applications. IEEE Signal Processing Magazine, 38(6):106–114. DOI: 10.1109/MSP.2021.3106232
- [11] Clayton, M., Rao, P., and Rohit, M. A. (2023). Rhythm and structural segmentation in Dhrupad Bandish performance. In Indian Art Music: A Computational Perspective, pages 215–238. Sriranga Digital Software Technologies, India.
- [12] Cosme-Clifford, N., Symons, J., Kapoor, K., and White, C. W. (2023). Musicological interpretability in generative transformers. In Proceedings of the International Symposium on the Internet of Sounds, pages 1–9. DOI: 10.1109/IEEECONF59510.2023.10335202
- [13] Dannenberg, R. B. and Raphael, C. (2006). Music score alignment and computer accompaniment. Communications of the ACM, Special Issue: Music Information Retrieval, 49(8):38–43. DOI: 10.1145/1145287.1145311
- [14] de Valk, R., Volk, A., Holzapfel, A., Pikrakis, A., Kroher, N., and Six, J. (2017). MIRchiving: Challenges and opportunities of connecting MIR research and digital music archives. In Proceedings of the International Workshop on Digital Libraries for Musicology (DLfM), pages 25–28. DOI: 10.1145/3144749.3144755
- [15] Deutsch, D. (2013). The Psychology of Music. Academic Press, 3rd edition.
- [16] Dixon, S., Gómez, E., and Volk, A. (2018). Editorial: Introducing the Transactions of the International Society for Music Information Retrieval. Transactions of the International Society for Music Information Retrieval (TISMIR), 1(1):1–3. DOI: 10.5334/tismir.22
- [17] Duan, Z., van Kranenburg, P., Nam, J., and Rao, P. (2023). Editorial for TISMIR special collection: Cultural diversity in MIR research. Transactions of the International Society for Music Information Retrieval (TISMIR), 6(1):203–205. DOI: 10.5334/tismir.179
- [18] Eerola, T. (2024). Music and Science – Guide to Empirical Music Research. SEMPRE Studies in the Psychology of Music. Routledge, London, UK.
- [19] Essid, S. and Richard, G. (2012). Fusion of multimodal information in music content analysis. Multimodal Music Processing (Dagstuhl Seminar 11041), Dagstuhl Follow-Ups, 3:37–52.
- [20] Fastl, H. and Zwicker, E. (2007). Psychoacoustics: Facts and Models. Springer, 3rd edition. DOI: 10.1007/978-3-540-68888-4
- [21] Fletcher, N. H. and Rossing, T. D. (1998). The Physics of Musical Instruments. Springer, Berlin, Germany, 2nd edition. DOI: 10.1007/978-0-387-21603-4
- [22] Gotham, M. R. H. (2019). Moments Musicaux. In Proceedings of the International Workshop on Digital Libraries for Musicology (DLfM), pages 70–78, New York, NY, USA. DOI: 10.1145/3358664.3358676
- [23] Gotham, M. R. H. (2021). Connecting the dots: Recognizing and implementing more kinds of “Open Science” to connect musicians and musicologists. Empirical Musicology Review, 16:34–46. DOI: 10.18061/emr.v16i1.7644
- [24] Gotham, M. R. H. (2023). Chromatic chords in theory and practice. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 272–278.
- [25] Gotham, M. R. H. and Jonas, P. (2022). The OpenScore Lieder Corpus. In Münnich, S. and Rizo, D., editors, Proceedings of the Music Encoding Conference, pages 131–136. Humanities Commons.
- [26] Gotham, M. R. H., Micchi, G., Nápoles-López, N., and Sailor, M. (2023). When in Rome: A meta-corpus of functional harmony. Transactions of the International Society for Music Information Retrieval (TISMIR), 6(1):150–166. DOI: 10.5334/tismir.165
- [27] Goto, M. and Dannenberg, R. B. (2019). Music interfaces based on automatic music signal analysis: New ways to create and listen to music. IEEE Signal Processing Magazine, 36(1):74–81. DOI: 10.1109/MSP.2018.2874360
- [28] Guzdial, M. (2013). Exploring hypotheses about media computation. In Proceedings of the International Computing Education Research Conference (ICER), pages 19–26, La Jolla, CA, USA. DOI: 10.1145/2493394.2493397
- [29] Heard, S. B. (2022). The Scientist’s Guide to Writing: How to Write More Easily and Effectively Throughout Your Scientific Career. Princeton University Press, 2nd edition.
- [30] Henry, L., Frieler, K., Solis, G., Pfleiderer, M., Dixon, S., Höger, F., Weyde, T., and Crayencour, H.-C. (2024). Dig that lick: Exploring patterns in jazz with computational methods. Jazzforschung / Jazz Research, 50/51. In press.
- [31] Honing, H. (2021). Music Cognition: The Basics. Routledge. DOI: 10.4324/9781003158301
- [32] Humphrey, E., Bello, J. P., and LeCun, Y. (2013). Feature learning and deep architectures: New directions for music informatics. Journal of Intelligent Information Systems, 41(3):461–481. DOI: 10.1007/s10844-013-0248-5
- [33] Kim, J., Urbano, J., Liem, C. C. S., and Hanjalic, A. (2020). One deep music representation to rule them all? A comparative analysis of different representation learning strategies. Neural Computing and Applications, 32:1067–1093. DOI: 10.1007/s00521-019-04076-1
- [34] Knees, P. and Schedl, M. (2016). Music Similarity and Retrieval. Springer Verlag. DOI: 10.1007/978-3-662-49722-7
- [35] Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory into Practice, 41(4):212–218. DOI: 10.1207/s15430421tip4104_2
- [36] Lartillot, O. and Toiviainen, P. (2007). MIR in MATLAB (II): A toolbox for musical feature extraction from audio. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), pages 127–130, Vienna, Austria.
- [37] Lerch, A. (2012). An Introduction to Audio Content Analysis: Applications in Signal Processing and Music Informatics. John Wiley & Sons. DOI: 10.1002/9781118393550
- [38] Lerch, A. (2022). An Introduction to Audio Content Analysis. Wiley/IEEE Press, 2nd edition. DOI: 10.1002/9781119890980
- [39] Li, B., Liu, X., Dinesh, K., Duan, Z., and Sharma, G. (2019). Creating a multitrack classical music performance dataset for multimodal music analysis: Challenges, insights, and applications. IEEE Transactions on Multimedia, 21(2):522–535. DOI: 10.1109/TMM.2018.2856090
- [40] Margulis, E. H. (2014). On Repeat: How Music Plays the Mind. Oxford University Press, New York, NY. DOI: 10.1093/acprof:oso/9780199990825.001.0001
- [41] McFee, B. (2023). Digital Signals Theory. Chapman and Hall/CRC. DOI: 10.1201/9781003264859
- [42] McFee, B., Kim, J. W., Cartwright, M., Salamon, J., Bittner, R. M., and Bello, J. P. (2019). Open-source practices for music signal processing research: Recommendations for transparent, sustainable, and reproducible audio research. IEEE Signal Processing Magazine, 36(1):128–137. DOI: 10.1109/MSP.2018.2875349
- [43] McFee, B., Raffel, C., Liang, D., Ellis, D. P., McVicar, M., Battenberg, E., and Nieto, O. (2015). Librosa: Audio and music signal analysis in Python. In Proceedings of the Python Science Conference, pages 18–25, Austin, Texas, USA. DOI: 10.25080/Majora-7b98e3ed-003
- [44] Müller, M. (2007). Information Retrieval for Music and Motion. Springer Verlag. DOI: 10.1007/978-3-540-74048-3
- [45] Müller, M. (2015). Fundamentals of Music Processing – Audio, Analysis, Algorithms, Applications. Springer Verlag. DOI: 10.1007/978-3-319-21945-5
- [46] Müller, M. (2021). Fundamentals of Music Processing – Using Python and Jupyter Notebooks. Springer Verlag, 2nd edition. DOI: 10.1007/978-3-030-69808-9
- [47] Müller, M., Goto, M., and Schedl, M., editors (2012). Multimodal Music Processing, volume 3 of Dagstuhl Follow-Ups. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Germany.
- [48] Müller, M., McFee, B., and Kinnaird, K. (2021). Interactive learning of signal processing through music: Making Fourier analysis concrete for students. IEEE Signal Processing Magazine, 38(3):73–84. DOI: 10.1109/MSP.2021.3052181
- [49] Müller, M. and Zalkow, F. (2019). FMP Notebooks: Educational material for teaching and learning fundamentals of music processing. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 573–580, Delft, The Netherlands.
- [50] Müller, M. and Zalkow, F. (2021). libfmp: A Python package for fundamentals of music processing. Journal of Open Source Software (JOSS), 6(63):3326:1–5. DOI: 10.21105/joss.03326
- [51] Nam, J., Choi, K., Lee, J., Chou, S., and Yang, Y. (2019). Deep learning for audio-based music classification and tagging: Teaching computers to distinguish rock from Bach. IEEE Signal Processing Magazine, 36(1):41–51. DOI: 10.1109/MSP.2018.2874383
- [52] Orr, R. B., Csikari, M. M., Freeman, S., and Rodriguez, M. C. (2022). Writing and using learning objectives. CBE—Life Sciences Education, 21(3):fe3,1–6. DOI: 10.1187/cbe.22-04-0073
- [53] Pons, J., Nieto, O., Prockup, M., Schmidt, E. M., Ehmann, A. F., and Serra, X. (2018). End-to-end learning for music audio tagging at scale. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 637–644, Paris, France.
- [54] Rao, P., Murthy, H. A., and Prasanna, S. R. M., editors (2023). Indian Art Music: A Computational Perspective. Sriranga Digital Software Technologies, India.
- [55] Rosenshine, B. (2012). Principles of instruction: Research-based strategies that all teachers should know. American Educator, 36(1):12–39.
- [56] Rosenzweig, S., Scherbaum, F., and Müller, M. (2022). Computer-assisted analysis of field recordings: A case study of Georgian funeral songs. ACM Journal on Computing and Cultural Heritage (JOCCH). DOI: 10.1145/3551645
- [57] Starr, C., Manaris, B., and Stalvey, R. H. (2008). Bloom’s taxonomy revisited: Specifying assessable learning objectives in computer science. SIGCSE Bulletin, 40(1):261–265. DOI: 10.1145/1352322.1352227
- [58] Tzanetakis, G. (2009). Music analysis, retrieval and synthesis of audio signals MARSYAS. In Proceedings of the ACM International Conference on Multimedia (ACM-MM), pages 931–932, Vancouver, British Columbia, Canada. DOI: 10.1145/1631272.1631459
- [59] Tzanetakis, G. (2014). Computational ethnomusicology: A music information retrieval perspective. In Proceedings of the International Computer Music Conference (ICMC). Michigan Publishing.
- [60] van Kranenburg, P., de Bruin, M., and Volk, A. (2019). Documenting a song culture: The Dutch Song Database as a resource for musicological research. International Journal on Digital Libraries, 20(1):13–23. DOI: 10.1007/s00799-017-0228-4
- [61] van Merriënboer, J. J. (2019). The Four-Component Instructional Design Model. Maastricht University, The Netherlands.
- [62] Volk, A. and de Haas, W. (2013). A corpus-based study on Ragtime syncopation. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), pages 163–168, Curitiba, Brazil.
- [63] Wang, A. (2003). An industrial strength audio search algorithm. In Proceedings of the International Conference on Music Information Retrieval (ISMIR), pages 7–13, Baltimore, Maryland, USA.
- [64] Weiß, C., Mauch, M., Dixon, S., and Müller, M. (2019). Investigating style evolution of Western classical music: A computational approach. Musicae Scientiae, 23(4):486–507. DOI: 10.1177/1029864918757595
- [65] Weiß, C., Schreiber, H., and Müller, M. (2020). Local key estimation in music recordings: A case study across songs, versions, and annotators. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:2919–2932. DOI: 10.1109/TASLP.2020.3030485
- [66] Weihs, C., Jannach, D., Vatolkin, I., and Rudolph, G. (2016). Music Data Analysis: Foundations and Applications. CRC Press. DOI: 10.1201/9781315370996
- [67] Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., Gonzalez-Beltran, A., Gray, A. J. G., Groth, P., Goble, C., Grethe, J. S., Heringa, J., ’t Hoen, P. A. C., Hooft, R., Kuhn, T., Kok, R., Kok, J., Lusher, S. J., Martone, M. E., Mons, A., Packer, A. L., Persson, B., Rocca-Serra, P., Roos, M., van Schaik, R., Sansone, S.-A., Schultes, E., Sengstag, T., Slater, T., Strawn, G., Swertz, M. A., Thompson, M., van der Lei, J., van Mulligen, E., Velterop, J., Waagmeester, A., Wittenburg, P., Wolstencroft, K., Zhao, J., and Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1). Article number: 160018.
- [68] Yang, Y.-H. and Chen, H. H. (2011). Music Emotion Recognition. CRC Press. DOI: 10.1201/b10731
- [69] Yesiler, F., Doras, G., Bittner, R. M., Tralie, C. J., and Serrà, J. (2021). Audio-based musical version identification: Elements and challenges. IEEE Signal Processing Magazine, 38(6):115–136. DOI: 10.1109/MSP.2021.3105941
- [70] Yust, J. and Fiore, T. M. (2014). Introduction to the special issue on pedagogies of mathematical music theory. Journal of Mathematics and Music, 8(2):113–116. DOI: 10.1080/17459737.2014.951188
- [71] Zölzer, U. (2002). DAFX: Digital Audio Effects. John Wiley & Sons, New York, NY, USA. DOI: 10.1002/0470846046
