References
- Dalal N, Triggs B. Histograms of Oriented Gradients for Human Detection[C]//IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2005. DOI: 10.1109/CVPR.2005.177.
- Girshick R, Donahue J, Darrell T, et al. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2014. DOI: 10.1109/CVPR.2014.81.
- Girshick R. Fast R-CNN[C]//IEEE International Conference on Computer Vision (ICCV). IEEE, 2015. DOI: 10.1109/ICCV.2015.169.
- Redmon J, Divvala S, Girshick R, et al. You Only Look Once: Unified, Real-Time Object Detection[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. DOI: 10.1109/CVPR.2016.91.
- Vaswani A, Shazeer N, Parmar N, et al. Attention Is All You Need[C]//Advances in Neural Information Processing Systems (NeurIPS). 2017. DOI: 10.48550/arXiv.1706.03762.
- Carion N, Massa F, Synnaeve G, et al. End-to-End Object Detection with Transformers[C]//European Conference on Computer Vision (ECCV). Springer, 2020. DOI: 10.1007/978-3-030-58452-8_13.
- Zhu X, Su W, Lu L, et al. Deformable DETR: Deformable Transformers for End-to-End Object Detection[C]//International Conference on Learning Representations (ICLR). 2021. DOI: 10.48550/arXiv.2010.04159.
- Li F, Zhang H, Liu S, et al. DN-DETR: Accelerate DETR Training by Introducing Query DeNoising[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(4): 2239-2251.
- Liu S, Li F, Zhang H, et al. DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR[C]//International Conference on Learning Representations (ICLR). 2022. DOI: 10.48550/arXiv.2201.12329.
- Zhao Y, Lv W, Xu S, et al. DETRs Beat YOLOs on Real-Time Object Detection (RT-DETR)[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2024. DOI: 10.48550/arXiv.2304.08069.
- Si Y, Xu H, Zhu X, et al. SCSA: Exploring the Synergistic Effects Between Spatial and Channel Attention[J]. arXiv preprint, 2024. DOI: 10.48550/arXiv.2407.05128.
- Wang J, Chen K, Xu R, et al. CARAFE: Content-Aware ReAssembly of FEatures[C]//IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019.
- Ding X, Zhang X, Ma N, et al. RepVGG: Making VGG-style ConvNets Great Again[C]//IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. DOI: 10.1109/CVPR46437.2021.01352.
- Li X, et al. B2CNet: Boundary-to-Center Refinement[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023.
- Zhu P, Wen L, Bian X, et al. Vision Meets Drones: A Challenge[C]//ECCV 2018 Workshops. Springer, Cham, 2018. DOI: 10.1007/978-3-030-11021-5_27.
- Wang A, Chen H, Liu L, et al. YOLOv10: Real-Time End-to-End Object Detection[J]. arXiv preprint, 2024. DOI: 10.48550/arXiv.2405.14458.
- Zhang H, Li F, Liu S, et al. DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection[J]. arXiv e-prints, 2022. DOI: 10.48550/arXiv.2203.03605.