References
- 1. Arzt, A. (2016). Flexible and Robust Music Tracking. PhD thesis, Johannes Kepler University Linz.
- 2. Arzt, A., Frostel, H., Gadermaier, T., Gasser, M., Grachten, M., & Widmer, G. (2015). Artificial Intelligence in the Concertgebouw. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 2424–2430). Buenos Aires, Argentina.
- 3. Arzt, A., Widmer, G., & Dixon, S. (2008). Automatic Page Turning for Musicians via Real-Time Machine Listening. In Proceedings of the European Conference on Artificial Intelligence (ECAI) (pp. 241–245). Patras, Greece.
- 4. Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., & Müller, K.-R. (2010). How to Explain Individual Classification Decisions. Journal of Machine Learning Research, 11, 1803–1831.
- 5. Balke, S., Achankunju, S. P., & Müller, M. (2015). Matching Musical Themes Based on Noisy OCR and OMR Input. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (pp. 703–707). Brisbane, Australia. DOI: 10.1109/ICASSP.2015.7178060
- 6. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
- 7. Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2012). Modeling Temporal Dependencies in High-dimensional Sequences: Application to Polyphonic Music Generation and Transcription. In Proceedings of the 29th International Conference on Machine Learning (ICML). Edinburgh, UK.
- 8. Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., & Zaremba, W. (2016). OpenAI Gym. arXiv preprint arXiv:1606.01540.
- 9. Byrd, D., & Simonsen, J. G. (2015). Towards a Standard Testbed for Optical Music Recognition: Definitions, Metrics, and Page Images. Journal of New Music Research, 44(3), 169–195. DOI: 10.1080/09298215.2015.1045424
- 10. Clevert, D., Unterthiner, T., & Hochreiter, S. (2016). Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). In Proceedings of the International Conference on Learning Representations (ICLR) (arXiv:1511.07289).
- 11. Cobbe, K., Klimov, O., Hesse, C., Kim, T., & Schulman, J. (2018). Quantifying Generalization in Reinforcement Learning. arXiv preprint arXiv:1812.02341.
- 12. Cont, A. (2006). Realtime Audio to Score Alignment for Polyphonic Music Instruments using Sparse Non-Negative Constraints and Hierarchical HMMs. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) (vol. 5, pp. 245–248). Toulouse, France. DOI: 10.1109/ICASSP.2006.1661258
- 13. Cont, A. (2010). A Coupled Duration-Focused Architecture for Real-Time Music-to-Score Alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(6), 974–987. DOI: 10.1109/TPAMI.2009.106
- 14. Dixon, S. (2005). An On-Line Time Warping Algorithm for Tracking Musical Performances. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 1727–1728). Edinburgh, UK.
- 15. Dixon, S., & Widmer, G. (2005). MATCH: A Music Alignment Tool Chest. In Proceedings of the International Conference on Music Information Retrieval (ISMIR) (pp. 492–497). London, UK.
- 16. Dorfer, M., Arzt, A., & Widmer, G. (2016). Towards Score Following in Sheet Music Images. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 789–795). New York, USA.
- 17. Dorfer, M., Hajič, J., Jr., Arzt, A., Frostel, H., & Widmer, G. (2018a). Learning Audio–Sheet Music Correspondences for Cross-Modal Retrieval and Piece Identification. Transactions of the International Society for Music Information Retrieval, 1(1), 22–33. DOI: 10.5334/tismir.12
- 18. Dorfer, M., Henkel, F., & Widmer, G. (2018b). Learning to Listen, Read, and Follow: Score Following as a Reinforcement Learning Game. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 784–791). Paris, France.
- 19. Duan, Y., Chen, X., Houthooft, R., Schulman, J., & Abbeel, P. (2016). Benchmarking Deep Reinforcement Learning for Continuous Control. In Proceedings of the 33rd International Conference on Machine Learning (ICML) (pp. 1329–1338). New York City, United States.
- 20. Greensmith, E., Bartlett, P. L., & Baxter, J. (2004). Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning. Journal of Machine Learning Research, 5, 1471–1530.
- 21. Hajič, J., Jr., & Pecina, P. (2017). The MUSCIMA++ Dataset for Handwritten Optical Music Recognition. In Proceedings of the 14th International Conference on Document Analysis and Recognition (ICDAR) (pp. 39–46). New York, United States. DOI: 10.1109/ICDAR.2017.16
- 22. Kingma, D., & Ba, J. (2015). Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations (ICLR) (arXiv:1412.6980).
- 23. Krause, J., Perer, A., & Ng, K. (2016). Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 5686–5697). ACM. DOI: 10.1145/2858036.2858529
- 24. Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., & Wierstra, D. (2015). Continuous Control with Deep Reinforcement Learning. arXiv preprint arXiv:1509.02971.
- 25. Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., Silver, D., & Kavukcuoglu, K. (2016). Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML) (pp. 1928–1937). New York City, United States.
- 26. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., & Hassabis, D. (2015). Human-level Control Through Deep Reinforcement Learning. Nature, 518, 529–533. DOI: 10.1038/nature14236
- 27. Müller, M. (2015). Fundamentals of Music Processing. Springer Verlag. DOI: 10.1007/978-3-319-21945-5
- 28. Nakamura, E., Cuvillier, P., Cont, A., Ono, N., & Sagayama, S. (2015). Autoregressive Hidden Semi-Markov Model of Symbolic Music for Score Following. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (pp. 392–398). Málaga, Spain.
- 29. Orio, N., Lemouton, S., & Schwarz, D. (2003). Score Following: State of the Art and New Developments. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME) (pp. 36–41). Montreal, Canada.
- 30. Prockup, M., Grunberg, D., Hrybyk, A., & Kim, Y. E. (2013). Orchestral Performance Companion: Using Real-Time Audio to Score Alignment. IEEE Multimedia, 20(2), 52–60. DOI: 10.1109/MMUL.2013.26
- 31. Raphael, C. (2010). Music Plus One and Machine Learning. In Proceedings of the International Conference on Machine Learning (ICML) (pp. 21–28).
- 32. Schulman, J., Moritz, P., Levine, S., Jordan, M., & Abbeel, P. (2015). High-dimensional Continuous Control Using Generalized Advantage Estimation. arXiv preprint arXiv:1506.02438.
- 33. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal Policy Optimization Algorithms. arXiv preprint arXiv:1707.06347.
- 34. Schwarz, D., Orio, N., & Schnell, N. (2004). Robust Polyphonic MIDI Score Following with Hidden Markov Models. In Proceedings of the International Computer Music Conference (ICMC). Miami, Florida, USA.
- 35. Shrikumar, A., Greenside, P., & Kundaje, A. (2017). Learning Important Features through Propagating Activation Differences. arXiv preprint arXiv:1704.02685.
- 36. Sundararajan, M., Taly, A., & Yan, Q. (2017). Axiomatic Attribution for Deep Networks. arXiv preprint arXiv:1703.01365.
- 37. Sutton, R. S., & Barto, A. G. (2018). Reinforcement Learning: An Introduction. MIT Press, 2nd edition.
- 38. Thomas, V., Fremerey, C., Müller, M., & Clausen, M. (2012). Linking Sheet Music and Audio – Challenges and New Approaches. In M. Müller, M. Goto, & M. Schedl (Eds.), Multimodal Music Processing, volume 3 of Dagstuhl Follow-Ups (pp. 1–22). Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
- 39. van der Maaten, L., & Hinton, G. (2008). Visualizing Data Using t-SNE. Journal of Machine Learning Research, 9, 2579–2605.
- 40. Wang, J. X., Kurth-Nelson, Z., Tirumala, D., Soyer, H., Leibo, J. Z., Munos, R., Blundell, C., Kumaran, D., & Botvinick, M. (2016). Learning to Reinforcement Learn. arXiv preprint arXiv:1611.05763.
- 41. Widmer, G. (2017). Getting Closer to the Essence of Music: The Con Espressione Manifesto. ACM Transactions on Intelligent Systems and Technology (TIST), 8(2), 19. DOI: 10.1145/2899004
- 42. Williams, R. J. (1992). Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8, 229–256. DOI: 10.1007/BF00992696
- 43. Williams, R. J., & Peng, J. (1991). Function Optimization Using Connectionist Reinforcement Learning Algorithms. Connection Science, 3(3), 241–268. DOI: 10.1080/09540099108946587
- 44. Wu, Y., Mansimov, E., Liao, S., Grosse, R. B., & Ba, J. (2017). Scalable Trust-region Method for Deep Reinforcement Learning Using Kronecker-factored Approximation. arXiv preprint arXiv:1708.05144.
