References
- M. XU, W. C. NG, W. Y. B. LIM, J. KANG, Z. XIONG, D. NIYATO, Q. YANG, X. SHEN, and C. MIAO, “A full dive into realizing the edge-enabled metaverse: Visions, enabling technologies, and challenges,” IEEE Communications Surveys and Tutorials, vol. 25, no. 1, pp. 656–700, 2023.
- Y. REN, R. XIE, F. R. YU, T. HUANG, and Y. LIU, “Quantum collective learning and many-to-many matching game in the metaverse for connected and autonomous vehicles,” IEEE Transactions on Vehicular Technology, vol. 71, no. 11, pp. 12 128–12 139, 2022.
- P. ZHOU, J. ZHU, Y. WANG, Y. LU, Z. WEI, H. SHI, Y. DING, Y. GAO, Q. HUANG, Y. SHI, A. ALHILAL, L.-H. LEE, T. BRAUD, P. HUI, and L. WANG, “Vetaverse: A survey on the intersection of metaverse, vehicles, and transportation systems,” 2023.
- T. LIU, H. ZHAO, Y. YU, G. ZHOU, and M. LIU, “Car-studio: Learning car radiance fields from single-view and endless in-the-wild images,” 2023.
- C. WU, J. SUN, Z. SHEN, and L. ZHANG, “Mapnerf: Incorporating map priors into neural radiance fields for driving view simulation,” in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 7082–7088.
- T. TAO, L. GAO, G. WANG, Y. LAO, P. CHEN, H. ZHAO, D. HAO, X. LIANG, M. SALZMANN, and K. YU, “Lidar-nerf: Novel lidar view synthesis via neural radiance fields,” 2023.
- O. RONNEBERGER, P. FISCHER, and T. BROX, “U-net: Convolutional networks for biomedical image segmentation,” 2015.
- Y. GAO, L. SU, H. LIANG, Y. YUE, Y. YANG, and M. FU, “Mc-nerf: Muti-camera neural radiance fields for muti-camera image acquisition systems,” 2023.
- C.-H. LIN, W.-C. MA, A. TORRALBA, and S. LUCEY, “Barf: Bundle-adjusting neural radiance fields,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 10 2021, pp. 5741–5751.
- Y. CHEN, X. CHEN, X. WANG, Q. ZHANG, Y. GUO, Y. SHAN, and F. WANG, “Local-to-global registration for bundle-adjusting neural radiance fields,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6 2023, pp. 8264–8273.
- H. KUANG, X. CHEN, T. GUADAGNINO, N. ZIMMERMAN, J. BEHLEY, and C. STACHNISS, “Ir-mcl: Implicit representation-based online global localization,” IEEE Robotics and Automation Letters, vol. 8, no. 3, pp. 1627–1634, 2023.
- A. KRISHNAN, A. RAJ, X. ZHANG, A. CARLSON, N. TSENG, S. SRIDHAR, N. JAIPURIA, and J. HAYS, “Lane: Lighting-aware neural fields for compositional scene synthesis,” 2023.
- Y. LIU, X. TU, D. CHEN, K. HAN, O. ALTINTAS, H. WANG, and J. XIE, “Visualization of Mobility Digital Twin: Framework Design, Case Study, and Future Challenges,” in 2023 IEEE 20th International Conference on Mobile Ad Hoc and Smart Systems (MASS). IEEE, 2023, pp. 170–177.
- Z. WU, T. LIU, L. LUO, Z. ZHONG, J. CHEN, H. XIAO, C. HOU, H. LOU, Y. CHEN, R. YANG et al., “Mars: An instance-aware, modular and realistic simulator for autonomous driving,” arXiv preprint arXiv:2307.15058, 2023.
- A. BYRAVAN, J. HUMPLIK, L. HASENCLEVER, A. BRUSSEE, F. NORI, T. HAARNOJA, B. MORAN, S. BOHEZ, F. SADEGHI, B. VUJATOVIC et al., “Nerf2real: Sim2real transfer of vision-guided bipedal motion skills using neural radiance fields,” in 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023, pp. 9362–9369.
- T. MÜLLER, A. EVANS, C. SCHIES, and A. KELLER, “Instant neural graphics primitives with a multiresolution hash encoding,” ACM Transactions on Graphics (ToG), vol. 41, no. 4, pp. 1–15, 2022.
- B. KERBL, G. KOPANAS, T. LEIMKÜHLER, and G. DRETTAKIS, “3D Gaussian Splatting for Real-Time Radiance Field Rendering,” ACM Transactions on Graphics, vol. 42, no. 4, pp. 1–14, Jul. 2023. [Online]. Available: https://inria.hal.science/hal-04088161
- B. MILDENHALL, P. P. SRINIVASAN, M. TANCIK, J. T. BARRON, R. RAMAMOORTHI, and R. NG, “Nerf: Representing scenes as neural radiance fields for view synthesis,” Communications of the ACM, vol. 65, no. 1, pp. 99–106, 2021.
- M. TANCIK, V. CASSER, X. YAN, S. PRADHAN, B. P. MILDENHALL, P. SRINIVASAN, J. T. BARRON, and H. KRETZSCHMAR, “Block-nerf: Scalable large scene neural view synthesis,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 8238–8248.
