References
- Abreu, J., Caetano, M., and Penha, R. (2016). Computer-aided musical orchestration using an artificial immune system. In International Conference on Evolutionary and Biologically Inspired Music, Sound, Art and Design (EvoMUSART 2016), pages 1–16. DOI: 10.1007/978-3-319-31008-4_1
- Agres, K., Forth, J., and Wiggins, G. A. (2016). Evaluation of musical creativity and musical metacreation systems. Computers in Entertainment, 14(3): 1–33. DOI: 10.1145/2967506
- Allegraud, P., Bigo, L., Feisthauer, L., Giraud, M., Groult, R., Leguy, E., and Levé, F. (2019). Learning sonata form structure on Mozart’s string quartets. Transactions of the International Society for Music Information Retrieval, 2(1): 82–96. DOI: 10.5334/tismir.27
- Ariza, C. (2011). Two pioneering projects from the early history of computer-aided algorithmic composition. Computer Music Journal, 35(3): 40–56. DOI: 10.1162/COMJ_a_00068
- Assayag, G., Bloch, G., Cont, A., and Dubnov, S. (2010). Interaction with machine improvisation. In The Structure of Style, pages 219–245. Springer. DOI: 10.1007/978-3-642-12337-5_10
- Barbieri, G., Pachet, F., Roy, P., and Degli Esposti, M. (2012). Markov constraints for generating lyrics with style. In European Conference on Artificial Intelligence (ECAI 2012), volume 242, pages 115–120.
- Barlow, H., and Morgenstern, S. (1948). A Dictionary of Musical Themes. Crown Publishers.
- Ben-Tal, O., Harris, M. T., and Sturm, B. L. (2020). How music AI is useful: Engagements with composers, performers, and audiences. Leonardo, 54(5): 510–516. DOI: 10.1162/leon_a_01959
- Birtchnell, T. (2018). Listening without ears: Artificial intelligence in audio mastering. Big Data & Society, 5(2): 2053951718808553. DOI: 10.1177/2053951718808553
- Briot, J.-P., Hadjeres, G., and Pachet, F.-D. (2019). Deep Learning Techniques for Music Generation. Springer. arXiv:1709.01620. DOI: 10.1007/978-3-319-70163-9
- Conklin, D., Gasser, M., and Oertl, S. (2018). Creative chord sequence generation for electronic dance music. Applied Sciences, 8(9): 1704. DOI: 10.3390/app8091704
- Copeland, B. J., and Long, J. (2017). Turing and the history of computer music. In Floyd, J. and Bokulich, A., editors, Philosophical Explorations of the Legacy of Alan Turing: Turing 100, pages 189–218. Springer International Publishing. DOI: 10.1007/978-3-319-53280-6_8
- Crestel, L., and Esling, P. (2017). Live Orchestral Piano, a system for real-time orchestral music generation. In Sound and Music Computing Conference (SMC 2017), pages 434–442.
- Cuthbert, M. S., and Ariza, C. (2010). music21: A toolkit for computer-aided musicology and symbolic music data. In International Society for Music Information Retrieval Conference (ISMIR 2010), pages 637–642.
- De Man, B., Reiss, J., and Stables, R. (2017). Ten years of automatic mixing. In Workshop on Intelligent Music Production (WIMP 2017).
- Deruty, E. (2016). Goal-oriented mixing. In Workshop on Intelligent Music Production (WIMP 2016), volume 13.
- Dhariwal, P., Jun, H., Payne, C., Kim, J. W., Radford, A., and Sutskever, I. (2020). Jukebox: A generative model for music. arXiv:2005.00341.
- Dong, H.-W., Hsiao, W.-Y., Yang, L.-C., and Yang, Y.-H. (2018). MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment. In AAAI Conference on Artificial Intelligence (AAAI 2018), volume 32.
- Eno, B., and Schmidt, P. (1975). Oblique Strategies. Boxed set of cards (limited edition).
- Esling, P., and Devis, N. (2020). Creativity in the era of artificial intelligence. arXiv:2008.05959.
- Fernández, J. D., and Vico, F. (2013). AI methods in algorithmic composition: A comprehensive survey. Journal of Artificial Intelligence Research, 48(1): 513–582. DOI: 10.1613/jair.3908
- Ghisi, D. (2017). Music across music: Towards a corpus-based, interactive computer-aided composition. PhD thesis, Pierre and Marie Curie University (Paris 6).
- Gifford, T., Knotts, S., McCormack, J., Kalonaris, S., Yee-King, M., and d’Inverno, M. (2018). Computational systems for music improvisation. Digital Creativity, 29(1): 19–36. DOI: 10.1080/14626268.2018.1426613
- Giraud, M., Groult, R., Leguy, E., and Levé, F. (2015). Computational fugue analysis. Computer Music Journal, 39(2). DOI: 10.1162/COMJ_a_00300
- Herremans, D., and Chew, E. (2017). MorpheuS: Generating structured music with constrained patterns and tension. IEEE Transactions on Affective Computing, 10(4): 510–523. DOI: 10.1109/TAFFC.2017.2737984
- Herremans, D., Chuan, C.-H., and Chew, E. (2017). A functional taxonomy of music generation systems. ACM Computing Surveys, 50(5): 1–30. DOI: 10.1145/3108242
- Hiller Jr., L. A., and Isaacson, L. M. (1957). Musical composition with a high speed digital computer. In Audio Engineering Society Convention 9.
- Huang, C.-Z. A., Duvenaud, D., and Gajos, K. Z. (2016). ChordRipple: Recommending chords to help novice composers go beyond the ordinary. In International Conference on Intelligent User Interfaces (IUI 2016), pages 241–250. DOI: 10.1145/2856767.2856792
- Huang, C.-Z. A., Koops, H. V., Newton-Rex, E., Dinculescu, M., and Cai, C. J. (2020). AI Song Contest: Human-AI co-creation in songwriting. In International Society for Music Information Retrieval Conference (ISMIR 2020).
- Ji, S., Luo, J., and Yang, X. (2020). A comprehensive survey on deep music generation: Multi-level representations, algorithms, evaluations, and future directions. arXiv:2011.06801.
- Jordanous, A. (2012). A standardised procedure for evaluating creative systems: Computational creativity evaluation based on what it is to be creative. Cognitive Computation, 4(3): 246–279. DOI: 10.1007/s12559-012-9156-1
- Jordanous, A. (2017). Has computational creativity successfully made it “beyond the fence” in musical theatre? Connection Science, 29: 350–386. DOI: 10.1080/09540091.2017.1345857
- Kantosalo, A., and Jordanous, A. (2020). Role-based perceptions of computer participants in human-computer co-creativity. In AISB Symposium of Computational Creativity (CC@AISB 2020).
- Krumhansl, C. L. (1990). Cognitive Foundations of Musical Pitch. Oxford University Press.
- Krumhansl, C. L., and Kessler, E. J. (1982). Tracing the dynamic changes in perceived tonal organisation in a spatial representation of musical keys. Psychological Review, 89(4): 334–368. DOI: 10.1037/0033-295X.89.4.334
- Louie, R., Coenen, A., Huang, C. Z., Terry, M., and Cai, C. J. (2020). Novice-AI music co-creation via AI-steering tools for deep generative models. In Conference on Human Factors in Computing Systems (CHI 2020), pages 1–13. DOI: 10.1145/3313831.3376739
- Lovelace, A. (1843). A sketch of the analytical engine, with notes by the translator. Scientific Memoirs, 3: 666–731.
- Lubart, T. (2005). How can computers be partners in the creative process: Classification and commentary on the special issue. International Journal of Human-Computer Studies, 63(4–5): 365–369. DOI: 10.1016/j.ijhcs.2005.04.002
- McCormack, J., Gifford, T., and Hutchings, P. (2019). Autonomy, authenticity, authorship and intention in computer generated art. In International Conference on Computational Intelligence in Music, Sound, Art and Design (EvoMUSART 2019), pages 35–50. DOI: 10.1007/978-3-030-16667-0_3
- McKeown, L., and Jordanous, A. (2018). An evaluation of the impact of constraints on the perceived creativity of narrative generating software. In International Conference on Computational Creativity (ICCC 2018).
- Medeot, G., Cherla, S., Kosta, K., McVicar, M., Abdalla, S., Selvi, M., Rex, E., and Webster, K. (2018). StructureNet: Inducing structure in generated melodies. In International Society for Music Information Retrieval Conference (ISMIR 2018).
- Mehri, S., Kumar, K., Gulrajani, I., Kumar, R., Jain, S., Sotelo, J., Courville, A., and Bengio, Y. (2016). SampleRNN: An unconditional end-to-end neural audio generation model. arXiv:1612.07837.
- Miller, A. (2020). The Artist in the Machine: The World of AI-Powered Creativity. MIT Press. DOI: 10.7551/mitpress/11585.001.0001
- Nika, J., Chemillier, M., and Assayag, G. (2017). ImproteK: Introducing scenarios into human-computer music improvisation. Computers in Entertainment, 14(2): 1–27. DOI: 10.1145/3022635
- Pachet, F., and Roy, P. (2011). Markov constraints: Steerable generation of Markov sequences. Constraints, 16(2): 148–172. DOI: 10.1007/s10601-010-9101-4
- Paiement, J.-F., Eck, D., and Bengio, S. (2005). A probabilistic model for chord progressions. In International Conference on Music Information Retrieval (ISMIR 2005).
- Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8): 9.
- Reybrouck, M. M. (2006). Musical creativity between symbolic modelling and perceptual constraints: The role of adaptive behaviour and epistemic autonomy. In Musical Creativity, pages 58–76. Psychology Press. DOI: 10.4324/9780203088111-13
- Rohrmeier, M. (2011). Towards a generative syntax of tonal harmony. Journal of Mathematics and Music, 5(1): 35–53. DOI: 10.1080/17459737.2011.573676
- Shin, A., Crestel, L., Kato, H., Saito, K., Ohnishi, K., Yamaguchi, M., Nakawaki, M., Ushiku, Y., and Harada, T. (2017). Melody generation for pop music via word representation of musical properties. arXiv:1710.11549.
- Sloboda, J. A. (1984). Experimental studies of music reading: A review. Music Perception, 2(2): 222–236. DOI: 10.2307/40285292
- Smith, J. B. L., Burgoyne, J. A., Fujinaga, I., De Roure, D., and Downie, J. S. (2011). Design and creation of a large-scale database of structural annotations. In International Society for Music Information Retrieval Conference (ISMIR 2011).
- Sturm, B. L., and Ben-Tal, O. (2018). Let’s have another Gan Ainm: An experimental album of Irish traditional music and computer-generated tunes. Technical Report, KTH Royal Institute of Technology.
- Tardon-Garcia, L. J., Barbancho-Perez, I., Barbancho-Perez, A. M., Roig, C., and Tzanetakis, G. (2019). Automatic melody composition inspired by short melodies using a probabilistic model and harmonic rules. In International Society for Music Information Retrieval Conference (ISMIR 2019).
- Temperley, D. (1999). What’s key for key? The Krumhansl-Schmuckler key-finding algorithm reconsidered. Music Perception, 17(1): 65–100. DOI: 10.2307/40285812
- Tsushima, H., Nakamura, E., Itoyama, K., and Yoshii, K. (2018). Interactive arrangement of chords and melodies based on a tree-structured generative model. In International Society for Music Information Retrieval Conference (ISMIR 2018).
- Yang, L.-C., Chou, S.-Y., and Yang, Y.-H. (2017). MidiNet: A convolutional generative adversarial network for symbolic-domain music generation. In International Society for Music Information Retrieval Conference (ISMIR 2017), pages 324–331.
- Zhou, Y., Chu, W., Young, S., and Chen, X. (2019). BandNet: A neural network-based, multi-instrument Beatles-style MIDI music composition machine. In International Society for Music Information Retrieval Conference (ISMIR 2019).
- Zhu, H., Liu, Q., Yuan, N. J., Qin, C., Li, J., Zhang, K., Zhou, G., Wei, F., Xu, Y., and Chen, E. (2018). XiaoIce Band: A melody and arrangement generation framework for pop music. In International Conference on Knowledge Discovery and Data Mining (KDD 2018), pages 2837–2846. DOI: 10.1145/3219819.3220105
