E. Strubell, A. Ganesh, and A. McCallum. “Energy and Policy Considerations for Deep Learning in NLP,” Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019. doi: 10.48550/arXiv.1906.02243.
Y. Cheng, et al. “Model compression and acceleration for deep neural networks: The principles, progress, and challenges,” IEEE Signal Processing Magazine, vol. 35, no. 1, pp. 126–136, 2018. doi: 10.48550/arXiv.1710.09282.
M. Hessel, et al. “Rainbow: Combining improvements in deep reinforcement learning,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
W. Dabney, et al. “Distributional reinforcement learning with quantile regression,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
M. Ahmed, C. P. Lim, and S. Nahavandi. “A Deep Q-Network Reinforcement Learning-Based Model for Autonomous Driving,” 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), IEEE, 2021.
J. Carreira and A. Zisserman. “Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. doi: 10.48550/arXiv.1705.07750.
“How do self-driving cars know their way around without a map?”, https://bigthink.com/technology-innovation/how-do-self-driving-cars-know-their-way-around-without-a-map/ (accessed 2023.03.31).
M. Sewak. “Deep Q Network (DQN), Double DQN, and Dueling DQN: A Step Towards General Artificial Intelligence,” Deep Reinforcement Learning: Frontiers of Artificial Intelligence, 2019, pp. 95–108. doi: 10.1007/978-981-13-8285-7_8.
W. Dudek, N. Miguel, and T. Winiarski. “SPSysML: A meta-model for quantitative evaluation of Simulation-Physical Systems,” arXiv preprint arXiv:2303.09565, 2023. doi: 10.48550/arXiv.2303.09565.
J. Lin, C. Gan, and S. Han. “TSM: Temporal Shift Module for Efficient Video Understanding,” Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.