References
- 1. Aouameur, C., Esling, P., and Hadjeres, G. (2019). Neural drum machine: An interactive system for real-time synthesis of drum sounds. In Proceedings of the Tenth International Conference on Computational Creativity (ICCC), Charlotte, North Carolina, USA.
- 2. Bazin, T., Hadjeres, G., Esling, P., and Malt, M. (2020). Spectrogram inpainting for interactive generation of instrument sounds. In Joint Conference on AI Music Creativity.
- 3. Bell, A. P. (2018). Dawn of the DAW: The Studio as Musical Instrument. Oxford University Press. DOI: 10.1093/oso/9780190296605.001.0001
- 4. Bennett, J. (2012). Constraint, collaboration and creativity in popular songwriting teams. In Collins, D., editor, The Act of Musical Composition: Studies in the Creative Process, pages 139–69. Ashgate, Farnham.
- 5. Braun, V., and Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2):77–101. DOI: 10.1191/1478088706qp063oa
- 6. Briot, J.-P., Hadjeres, G., and Pachet, F.-D. (2020). Deep Learning Techniques for Music Generation. Springer. DOI: 10.1007/978-3-319-70163-9
- 7. Burgess, R. J. (2013). The Art of Music Production: The Theory and Practice. Oxford University Press.
- 8. Cherry, E., and Latulipe, C. (2014). Quantifying the creativity support of digital tools through the creativity support index. ACM Transactions on Computer-Human Interaction (TOCHI), 21(4):1–25. DOI: 10.1145/2617588
- 9. Chu, H., Urtasun, R., and Fidler, S. (2016). Song from PI: A musically plausible network for pop music generation. arXiv preprint arXiv:1611.03477.
- 10. Clark, E., Ross, A. S., Tan, C., Ji, Y., and Smith, N. A. (2018). Creative writing with a machine in the loop: Case studies on slogans and stories. In 23rd International Conference on Intelligent User Interfaces (IUI), pages 329–340. ACM. DOI: 10.1145/3172944.3172983
- 11. Csikszentmihalyi, M. (1997). Creativity: Flow and the Psychology of Discovery and Invention. Harper Perennial, New York.
- 12. Csikszentmihalyi, M. (1999). Implications of a systems perspective for the study of creativity. In Handbook of Creativity. Cambridge University Press, Cambridge, UK. DOI: 10.1017/CBO9780511807916.018
- 13. Dhariwal, P., Jun, H., Payne, C., Kim, J. W., Radford, A., and Sutskever, I. (2020). Jukebox: A generative model for music. CoRR, abs/2005.00341.
- 14. Engel, J. H., Agrawal, K. K., Chen, S., Gulrajani, I., Donahue, C., and Roberts, A. (2019). GANSynth: Adversarial neural audio synthesis. In 7th International Conference on Learning Representations (ICLR), New Orleans, USA.
- 15. Engel, J. H., Resnick, C., Roberts, A., Dieleman, S., Norouzi, M., Eck, D., and Simonyan, K. (2017). Neural audio synthesis of musical notes with WaveNet autoencoders. In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia.
- 16. Frith, S. (2004). Popular Music: Critical Concepts in Media and Cultural Studies, volume 1. Psychology Press.
- 17. Gibson, J. J. (1977). The theory of affordances. In Shaw, R. and Bransford, J., editors, Perceiving, Acting and Knowing: Toward an Ecological Psychology, pages 67–82. Erlbaum, Hillsdale, New Jersey, USA.
- 18. Gioti, A.-M. (2021). Artificial intelligence for music composition. In Handbook of Artificial Intelligence for Music, pages 53–73. Springer. DOI: 10.1007/978-3-030-72116-9_3
- 19. Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS).
- 20. Grachten, M., Deruty, E., and Tanguy, A. (2019). Auto-adaptive resonance equalization using dilated residual networks. In Proceedings of the 20th International Society for Music Information Retrieval Conference (ISMIR), Delft, The Netherlands.
- 21. Grachten, M., Lattner, S., and Deruty, E. (2020). BassNet: A variational gated autoencoder for conditional generation of bass guitar tracks with learned interactive control. Applied Sciences, 10(18). DOI: 10.3390/app10186627
- 22. Hadjeres, G., and Crestel, L. (2021). The piano inpainting application. CoRR, abs/2107.05944.
- 23. Hadjeres, G., Pachet, F., and Nielsen, F. (2017). DeepBach: A steerable model for Bach chorales generation. In Proceedings of the 34th International Conference on Machine Learning (ICML), volume 70, pages 1362–1371, Sydney, Australia.
- 24. Hennessey, B. A. (2017). Taking a systems view of creativity: On the right path toward understanding. The Journal of Creative Behavior, 51(4):341–344. DOI: 10.1002/jocb.196
- 25. Hennion, A. (1983). The production of success: An anti-musicology of the pop song. Popular Music, 3:159–193. DOI: 10.1017/S0261143000001616
- 26. Huang, C.-Z. A., Koops, H. V., Newton-Rex, E., Dinculescu, M., and Cai, C. (2020). AI Song Contest: Human-AI co-creation in songwriting. In Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR), pages 708–716, Montreal, Canada.
- 27. Ji, S., Luo, J., and Yang, X. (2020). A comprehensive survey on deep music generation: Multi-level representations, algorithms, evaluations, and future directions. ArXiv, abs/2011.06801.
- 28. Jones, S. (1992). Rock Formation: Music, Technology, and Mass Communication, volume 3 of Foundations of Popular Culture. Sage Publications. DOI: 10.4135/9781483325491
- 29. Kingma, D. P., and Welling, M. (2014). Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations (ICLR), Banff, AB, Canada.
- 30. Knotts, S., and Collins, N. (2021). AI-Lectronica: Music AI in clubs and studio production. In Handbook of Artificial Intelligence for Music, pages 849–871. Springer. DOI: 10.1007/978-3-030-72116-9_30
- 31. Lashua, B., and Thompson, P. (2016). Producing music, producing myth? Creativity in recording studios. International Association for the Study of Popular Music Journal, 6(2):70–90. DOI: 10.5429/2079-3871(2016)v6i2.5en
- 32. Lattner, S., and Grachten, M. (2019). High-level control of drum track generation using learned patterns of rhythmic interaction. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, USA. DOI: 10.1109/WASPAA.2019.8937261
- 33. Mazzanti, S. (2019). Defining popular music: Towards a “historical melodics”. In Vilotijević, M. D. and Medić, I., editors, Contemporary Popular Music Studies, pages 17–26. Springer VS, Wiesbaden. DOI: 10.1007/978-3-658-25253-3_2
- 34. McIntyre, P. (2008). Creativity and cultural production: A study of contemporary Western popular music songwriting. Creativity Research Journal, 20(1):40–52. DOI: 10.1080/10400410701841898
- 35. Middleton, R. (1990). Studying Popular Music. McGraw-Hill Education (UK).
- 36. Moore, A. F., and Martin, R. (2018). Rock: The Primary Text: Developing a Musicology of Rock. Routledge. DOI: 10.4324/9780429490170
- 37. Moylan, W. (2020). Recording Analysis: How the Record Shapes the Song. CRC Press. DOI: 10.4324/9781315617176
- 38. Muller, M. J., and Kuhn, S. (1993). Participatory design. Communications of the ACM, 36(6):24–28. DOI: 10.1145/153571.255960
- 39. Nistal, J., Lattner, S., and Richard, G. (2020). DrumGAN: Synthesis of drum sounds with timbral feature conditioning using generative adversarial networks. In Proceedings of the 21st International Society for Music Information Retrieval Conference (ISMIR), Montreal, Canada.
- 40. Payne, C. (2019). MuseNet. https://openai.com/blog/musenet/. Retrieved Feb. 2021.
- 41. Piantanida, P., and Vega, L. R. (2021). Information bottleneck and representation learning. In Rodrigues, M. R. D. and Eldar, Y. C., editors, Information-Theoretic Methods in Data Science, chapter 11, pages 330–358. Cambridge University Press. DOI: 10.1017/9781108616799.012
- 42. Roberts, A., Engel, J., Mann, Y., Gillick, J., Kayacik, C., Nørly, S., Dinculescu, M., Radebaugh, C., Hawthorne, C., and Eck, D. (2019). Magenta Studio: Augmenting creativity with deep learning in Ableton Live. In Proceedings of the International Workshop on Musical Metacreation (MUME).
- 43. Roberts, A., Engel, J., Raffel, C., Hawthorne, C., and Eck, D. (2018). A hierarchical latent vector model for learning long-term structure in music. In Dy, J. and Krause, A., editors, Proceedings of the 35th International Conference on Machine Learning (ICML), volume 80, pages 4364–4373, Stockholmsmässan, Stockholm, Sweden. PMLR.
- 44. Scurto, H., and Bevilacqua, F. (2018). Appropriating music computing practices through human-AI collaboration. In Journées d’Informatique Musicale (JIM 2018), Amiens, France.
- 45. Serra, X. (2012). Opportunities for a cultural specific approach in the computational description of music. In Serra, X., Rao, P., Murthy, H., and Bozkurt, B., editors, Proceedings of the 2nd CompMusic Workshop. Universitat Pompeu Fabra.
- 46. Shneiderman, B. (2007). Creativity support tools: Accelerating discovery and innovation. Communications of the ACM, 50:20–32. DOI: 10.1145/1323688.1323689
- 47. Steinmetz, C. J., and Reiss, J. D. (2020). Randomized overdrive neural networks. CoRR, abs/2010.04237.
- 48. Tagg, P. (1982). Analysing popular music: Theory, method and practice. Popular Music, 2:37–67. DOI: 10.1017/S0261143000001227
- 49. Thompson, P. (2019). Creativity in the Recording Studio: Alternative Takes. Springer. DOI: 10.1007/978-3-030-01650-0
- 50. van den Oord, A., Li, Y., and Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
- 51. Yang, L.-C., Chou, S.-Y., and Yang, Y.-H. (2017). MidiNet: A convolutional generative adversarial network for symbolic-domain music generation. arXiv preprint arXiv:1703.10847.
