
Raveform: A Dataset of Metrical and Functional Structure Annotations for EDM Tracks in DJ Mixes

Open Access | Apr 2026

References

  1. Argüello, G., Lanzendörfer, L. A., and Wattenhofer, R. (2024). Cue point estimation using object detection. In International Society for Music Information Retrieval Conference (ISMIR).
  2. Bittner, R. M., Gu, M., Hernandez, G., Humphrey, E. J., Jehan, T., McCurry, H., and Montecchio, N. (2017). Automatic playlist sequencing and transitions. In International Society for Music Information Retrieval Conference (ISMIR) (pp. 442–448).
  3. Böck, S., Korzeniowski, F., Schlüter, J., Krebs, F., and Widmer, G. (2016a). madmom: A new Python audio and music signal processing library. In International Conference on Multimedia (pp. 1174–1178).
  4. Böck, S., Krebs, F., and Widmer, G. (2016b). Joint beat and downbeat tracking with recurrent neural networks. In International Society for Music Information Retrieval Conference (ISMIR) (pp. 255–261).
  5. Broughton, F., and Brewster, B. (2003). How to DJ right: The art and science of playing records. Grove Press.
  6. Cano, P., Batlle, E., Kalker, T., and Haitsma, J. (2005). A review of audio fingerprinting. Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, 41, 271–284.
  7. Chen, B.‑Y., Hsu, W.‑H., Liao, W.‑H., Ramírez, M. A. M., Mitsufuji, Y., and Yang, Y.‑H. (2022). Automatic DJ transitions with differentiable audio effects and generative adversarial networks. In International Conference on Acoustics, Speech and Signal Processing (ICASSP).
  8. Davies, M. E., Degara, N., and Plumbley, M. D. (2009). Evaluation methods for musical audio beat tracking algorithms. Queen Mary University of London, Centre for Digital Music, Tech. Rep. C4DM‑TR‑09‑06.
  9. Fisher, M. (2009). Something in the air: Radio, rock, and the revolution that shaped a generation. Random House.
  10. Flexer, A., and Grill, T. (2016). The problem of limited inter‑rater agreement in modelling music similarity. Journal of New Music Research, 45(3), 239–251.
  11. Flexer, A., Schnitzer, D., Gasser, M., and Widmer, G. (2008). Playlist generation using start and end songs. In International Society for Music Information Retrieval Conference (ISMIR) (Vol. 8, pp. 173–178).
  12. Glazyrin, N. (2014). Towards automatic content‑based separation of DJ mixes into single tracks. In International Society for Music Information Retrieval Conference (ISMIR) (pp. 149–154). https://github.com/nglazyrin/MixSplitter
  13. Hassani, A., and Shi, H. (2022). Dilated neighborhood attention transformer. arXiv:2209.15001.
  14. Kim, T., and Nam, J. (2023). All‑in‑one metrical and functional structure analysis with neighborhood attentions on demixed audio. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA).
  15. Kim, T., Choi, M., Sacks, E., Yang, Y.‑H., and Nam, J. (2020). A computational analysis of real‑world DJ mixes using mix‑to‑track subsequence alignment. In International Society for Music Information Retrieval Conference (ISMIR) (pp. 764–770).
  16. Kim, T., Lee, J., and Nam, J. (2018). Sample‑level CNN architectures for music auto‑tagging using raw waveforms. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 366–370).
  17. Kim, T., Yang, Y.‑H., and Nam, J. (2021). Reverse engineering the transition regions of real‑world DJ mixes using sub‑band analysis with convex optimization. In New Interfaces for Musical Expression (NIME).
  18. Kim, T., Yang, Y.‑H., and Nam, J. (2022). Joint estimation of fader and equalizer gains of DJ mixers using convex optimization. In International Conference on Digital Audio Effects (DAFx) (pp. 312–319).
  19. Lee, J., Park, J., Kim, K. L., and Nam, J. (2017). Sample‑level deep convolutional neural networks for music auto‑tagging using raw waveforms. In Sound and Music Computing Conference (SMC).
  20. Nieto, O., McCallum, M., Davies, M. E., Robertson, A., Stark, A. M., and Egozy, E. (2019). The Harmonix set: Beats, downbeats, and functional segment annotations of western popular music. In International Society for Music Information Retrieval Conference (ISMIR) (pp. 565–572).
  21. Nieto, O., Mysore, G. J., Wang, C.‑I., Smith, J. B., Schlüter, J., Grill, T., and McFee, B. (2020). Audio‑based music structure analysis: Current trends, open challenges, and applications. Transactions of the International Society for Music Information Retrieval, 3(1).
  22. Scarfe, T., Koolen, W., and Kalnishkan, Y. (2014). Segmentation of electronic dance music. International Journal of Engineering Intelligent Systems for Electrical Engineering and Communications, 22(3), 4. https://github.com/ecsplendid/DanceMusicSegmentation
  23. Schwarz, D., and Fourer, D. (2019). Methods and datasets for DJ‑mix reverse engineering. In International Symposium on Computer Music Multidisciplinary Research (CMMR) (pp. 426–437).
  24. Schwarz, D., Schindler, D., and Spadavecchia, S. (2018). A heuristic algorithm for DJ cue point estimation. In Sound and Music Computing Conference (SMC).
  25. Smith, J. B. L., Burgoyne, J. A., Fujinaga, I., De Roure, D., and Downie, J. S. (2011). Design and creation of a large‑scale database of structural annotations. In International Society for Music Information Retrieval Conference (ISMIR) (Vol. 11, pp. 555–560).
  26. Sonnleitner, R., Arzt, A., and Widmer, G. (2016). Landmark‑based audio fingerprinting for DJ mix monitoring. In International Society for Music Information Retrieval Conference (ISMIR) (pp. 185–191).
  27. Van den Oord, A., Dieleman, S., and Schrauwen, B. (2013). Deep content‑based music recommendation. In Advances in Neural Information Processing Systems (NIPS) (Vol. 26).
  28. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (NIPS) (Vol. 30).
  29. Vande Veire, L., and De Bie, T. (2018). From raw audio to a seamless mix: Creating an automated DJ system for drum and bass. EURASIP Journal on Audio, Speech, and Music Processing, 2018(1), 13.
  30. Wang, A. (2006). The Shazam music recognition service. Communications of the ACM, 49(8), 44–48.
DOI: https://doi.org/10.5334/tismir.288 | Journal eISSN: 2514-3298
Language: English
Submitted on: Jun 10, 2025
Accepted on: Jan 9, 2026
Published on: Apr 20, 2026
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Taejun Kim, Jongsoo Kim, Hyungyu Kim, Juhan Nam, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.