T. J. McCabe, “A complexity measure,” IEEE Transactions on Software Engineering, vol. SE-2, no. 4, pp. 308–320, Dec. 1976. https://doi.org/10.1109/TSE.1976.233837
L. Coyle, M. Hinchey, B. Nuseibeh, and J. L. Fiadeiro, “Guest editors’ introduction: Evolving critical systems,” Computer, vol. 43, no. 5, pp. 28–33, May 2010. https://doi.org/10.1109/MC.2010.139
J. M. Boyle and M. N. Muralidharan, “Program reusability through program transformation,” IEEE Transactions on Software Engineering, vol. SE-10, no. 5, pp. 574–588, Sep. 1984. https://doi.org/10.1109/TSE.1984.5010281
R. C. Waters, “Program translation via abstraction and reimplementation,” IEEE Transactions on Software Engineering, vol. 14, no. 8, pp. 1207–1228, Aug. 1988. https://doi.org/10.1109/32.7629
M. Mirchev, A. Costea, A. K. Singh, and A. Roychoudhury, “Assured automatic programming via large language models,” arXiv preprint arXiv:2410.18494, Oct. 2024. https://doi.org/10.48550/arXiv.2410.18494
Q. Zhang, C. Fang, Y. Shang, T. Zhang, S. Yu, and Z. Chen, “No man is an island: Towards fully automatic programming by code search, code generation and program repair,” arXiv preprint arXiv:2409.03267, Sep. 2024. https://doi.org/10.48550/arXiv.2409.03267
M. Atemkeng, S. Hamlomo, B. Welman, N. Oyentunji, P. Ataei, and J. L. E. K. Fendji, “Ethics of software programming with generative AI: Is programming without generative AI always radical?” arXiv preprint arXiv:2408.10554, Aug. 2024. https://doi.org/10.48550/arXiv.2408.10554
Z. Bahroun, C. Anane, V. Ahmed, and A. Zacca, “Transforming education: A comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis,” Sustainability, vol. 15, no. 17, Aug. 2023, Art. no. 12983. https://doi.org/10.3390/su151712983
R. Zviel-Girshin, “The good and bad of AI tools in novice programming education,” Education Sciences, vol. 14, no. 10, Oct. 2024, Art. no. 1089. https://doi.org/10.3390/educsci14101089
B. A. Becker, P. Denny, J. Finnie-Ansley, A. Luxton-Reilly, J. Prather, and E. A. Santos, “Programming is hard – or at least it used to be: Educational opportunities and challenges of AI code generation,” arXiv preprint arXiv:2212.01020, Dec. 2022. https://doi.org/10.48550/arXiv.2212.01020
E. Ortega-Ochoa, J. Sabaté, M. Arguedas, J. Conesa, T. Daradoumis, and S. Caballé, “Exploring the utilization and deficiencies of generative artificial intelligence in students’ cognitive and emotional needs: A systematic mini-review,” Frontiers in Artificial Intelligence, vol. 7, Nov. 2024, Art. no. 1493566. https://doi.org/10.3389/frai.2024.1493566
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” Advances in Neural Information Processing Systems, vol. 30, 2017.
A. Ni, P. Yin, Y. Zhao, M. Riddell, T. Feng, R. Shen, S. Yin, L. Ye, S. Yavuz, C. Xiong, S. Joty, Y. Zhou, D. Radev, and A. Cohan, “L2ceval: Evaluating language-to-code generation capabilities of large language models,” arXiv preprint arXiv:2309.17446, Sep. 2023. https://doi.org/10.48550/arXiv.2309.17446
R. Takaichi, Y. Higo, S. Matsumoto, S. Kusumoto, T. Kurabayashi, H. Kirinuki, and H. Tanno, “Are NLP metrics suitable for evaluating generated code?” in Product-Focused Software Process Improvement: 23rd International Conference, PROFES 2022, Jyväskylä, Finland, Nov. 2022, pp. 531–537. https://doi.org/10.1007/978-3-031-21388-5_38
Y. Shao, “Human-computer interaction environment monitoring and collaborative translation mode exploration using artificial intelligence technology,” Journal of Environmental and Public Health, vol. 2022, Jan. 2022. https://doi.org/10.1155/2022/4702003
R. Zou, W. Chang, S. Gao, and S. Qin, “Strategies for improving the quality of computer-assisted translation based on internet,” International Journal of Advanced Academic Studies, vol. 4, no. 4, pp. 156–158, Oct. 2022. https://doi.org/10.33545/27068919.2022.v4.i4c.890
N. Radziwill and M. C. Benton, “Evaluating quality of chatbots and intelligent conversational agents,” arXiv preprint arXiv:1704.04579, Apr. 2017. https://doi.org/10.48550/arXiv.1704.04579
C. Merow, J. M. Serra-Diaz, B. J. Enquist, and A. M. Wilson, “AI chatbots can boost scientific coding,” Nature Ecology & Evolution, vol. 7, pp. 960–962, Apr. 2023. https://doi.org/10.1038/s41559-023-02063-3
R. Michel-Villarreal, E. L. Vilalta-Perdomo, D. E. Salinas-Navarro, R. Thierry-Aguilera, and F. S. Gerardou, “Challenges and opportunities of generative AI for higher education as explained by ChatGPT,” Education Sciences, vol. 13, no. 9, Aug. 2023, Art. no. 856. https://doi.org/10.3390/educsci13090856
J. Zhou, H. Müller, A. Holzinger, and F. Chen, “Ethical ChatGPT: Concerns, challenges, and commandments,” Electronics, vol. 13, no. 17, Aug. 2024, Art. no. 3417. https://doi.org/10.3390/electronics13173417
T. Wu, S. He, J. Liu, S. Sun, K. Liu, Q.-L. Han, and Y. Tang, “A brief overview of ChatGPT: The history, status quo and potential future development,” IEEE/CAA Journal of Automatica Sinica, vol. 10, no. 5, pp. 1122–1136, May 2023. https://doi.org/10.1109/JAS.2023.123618
R. Karanjai, L. Xu, and W. Shi, “Teaching machines to code: Smart contract translation with LLMs,” arXiv preprint arXiv:2403.09740, Mar. 2024. https://doi.org/10.48550/arXiv.2403.09740
M. A. K. Raiaan, M. S. H. Mukta, K. Fatema, N. M. Fahad, S. Sakib, M. M. J. Mim, J. Ahmad, M. E. Ali, and S. Azam, “A review on large language models: Architectures, applications, taxonomies, open issues and challenges,” TechRxiv, Sep. 2023. https://doi.org/10.36227/techrxiv.24171183
H. Naveed, A. U. Khan, S. Qiu, M. Saqib, S. Anwar, M. Usman, N. Barnes, and A. Mian, “A comprehensive overview of large language models,” arXiv preprint arXiv:2307.06435, Jul. 2023. https://doi.org/10.48550/arXiv.2307.06435
U. Alon, S. Brody, O. Levy, and E. Yahav, “code2seq: Generating sequences from structured representations of code,” arXiv preprint arXiv:1808.01400, Aug. 2018. https://doi.org/10.48550/arXiv.1808.01400
U. Alon, M. Zilberstein, O. Levy, and E. Yahav, “Code2vec: Learning distributed representations of code,” Proceedings of the ACM on Programming Languages, vol. 3, no. POPL, Jan. 2019, Art. no. 40. https://doi.org/10.1145/3290353
C. Xiao, “Comparison of differences between artificial intelligence translation and artificial translation,” Journal of Physics: Conference Series, vol. 1992, Aug. 2021, Art. no. 022079. https://doi.org/10.1088/1742-6596/1992/2/022079
T. Marjanov, I. Pashchenko, and F. Massacci, “Machine learning for source code vulnerability detection: What works and what isn’t there yet,” IEEE Security & Privacy, vol. 20, no. 5, pp. 60–76, Aug. 2022. https://doi.org/10.1109/MSEC.2022.3176058
X. Hu, G. Li, X. Xia, D. Lo, and Z. Jin, “Deep code comment generation,” in 2018 IEEE/ACM 26th International Conference on Program Comprehension (ICPC), 2018, pp. 200–210. https://doi.org/10.1145/3196321.3196334
R. Pan, A. R. Ibrahimzada, R. Krishna, D. Sankar, L. P. Wassi, M. Merler, B. Sobolev, R. Pavuluri, S. Sinha, and R. Jabbarvand, “Lost in translation: A study of bugs introduced by large language models while translating code,” in Proceedings of the IEEE/ACM 46th International Conference on Software Engineering (ICSE ’24), Apr. 2024, Art. no. 82. https://doi.org/10.1145/3597503.3639226
Z. Yang, F. Liu, Z. Yu, J. Keung, J. Li, S. Liu, Y. Hong, X. Ma, Z. Jin, and G. Li, “Exploring and unleashing the power of large language models in automated code translation,” Proceedings of the ACM on Software Engineering, vol. 1, no. FSE, pp. 1585–1608, Jul. 2024. https://doi.org/10.1145/3660778
U. Alon, M. Zilberstein, and O. Levy, “Code2vec: Learning distributed representations of code,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019, pp. 2733–2739. https://doi.org/10.1145/3290353
M. Allamanis, M. Brockschmidt, M. Khademi, and C. Sutton, “Convolutional, recurrent, and hybrid models for paraphrase detection,” arXiv preprint arXiv:1601.06744, 2016.
V. Raychev, M. Vechev, and E. Yahav, “Code completion with statistical language models,” in Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, 2014, pp. 419–428. https://doi.org/10.1145/2594291.2594321
R. Robbes, C. Petitpierre, and M. Monperrus, “User evaluations of automatic software repair,” ACM SIGSOFT Software Engineering Notes, vol. 37, no. 3, pp. 1–5, 2012.
S. Sachdeva, O. Polozov, and S. Gulwani, “Effective program synthesis through efficient code transplantation,” in Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation, 2017, pp. 722–736.
Y. Qi, D. Sachan, M. Felix, S. Padmanabhan, and G. Neubig, “Rosetta: Large scale cross-lingual resource for natural language processing,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018, pp. 306–316.
Z. Feng et al., “CodeBERT: A pre-trained model for programming and natural languages,” arXiv preprint arXiv:2002.08155, Feb. 2020. https://doi.org/10.48550/arXiv.2002.08155
M. Allamanis, M. Brockschmidt, and M. Khademi, “Learning to represent programs with graphs,” arXiv preprint arXiv:1711.00740, Nov. 2017. https://doi.org/10.48550/arXiv.1711.00740
M. Allamanis and C. Sutton, “Transcoder: Analyzing code similarities for porting legacy code,” IEEE Transactions on Software Engineering, vol. 43, no. 6, pp. 527–546, 2017.
P. Tembhekar, M. Devan, and J. Jeyaraman, “Role of GenAI in automated code generation within DevOps practices: Explore how generative AI,” Journal of Knowledge Learning and Science Technology, vol. 2, no. 2, pp. 500–512, Oct. 2023. https://doi.org/10.60087/jklst.vol2.n2.p512
J. Sun, Q. V. Liao, M. Muller, M. Agarwal, S. Houde, K. Talamadupula, and J. D. Weisz, “Investigating explainability of generative AI for code through scenario-based design,” in Proceedings of the 27th International Conference on Intelligent User Interfaces, 2022, pp. 212–228. https://doi.org/10.1145/3490099.3511119
D. Guo et al., “DeepSeek-Coder: When the large language model meets programming – the rise of code intelligence,” arXiv preprint arXiv:2401.14196, Jan. 2024. https://doi.org/10.48550/arXiv.2401.14196
T. Zheng, G. Zhang, T. Shen, X. Liu, B. Y. Lin, J. Fu, W. Chen, and X. Yue, “OpenCodeInterpreter: Integrating code generation with execution and refinement,” arXiv preprint arXiv:2402.14658, Feb. 2024. https://doi.org/10.48550/arXiv.2402.14658
A. Deroy and S. Maity, “Code generation and algorithmic problem solving using Llama 3.1 405B,” arXiv preprint arXiv:2409.19027, Sep. 2024. https://doi.org/10.48550/arXiv.2409.19027
A. Sadik, A. Ceravola, F. Joublin, and J. Patra, “Analysis of ChatGPT on source code,” arXiv preprint arXiv:2306.00597, Jun. 2023. https://doi.org/10.48550/arXiv.2306.00597
Z. Zheng, K. Ning, Y. Wang, J. Zhang, D. Zheng, M. Ye, and J. Chen, “A survey of large language models for code: Evolution, benchmarking, and future trends,” arXiv preprint arXiv:2311.10372, Nov. 2023. https://doi.org/10.48550/arXiv.2311.10372
H. Pearce, B. Tan, B. Ahmad, R. Karri, and B. Dolan-Gavitt, “Examining zero-shot vulnerability repair with large language models,” in 2023 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, May 2023, pp. 2339–2356. https://doi.org/10.1109/SP46215.2023.10179324
K. Huang, X. Wang, J. Fu, B. Xie, Z. Wang, Y. Yao, H. Guo, M. Tu, X. Sun, and H. Wang, “CodeXGLUE: A machine learning benchmark dataset for code understanding and generation,” arXiv preprint arXiv:2102.04664, Feb. 2021. https://doi.org/10.48550/arXiv.2102.04664
M. Zhu, A. Jain, K. Suresh, R. Ravindran, S. Tipirneni, and C. K. Reddy, “XLCoST: A benchmark dataset for cross-lingual code intelligence,” arXiv preprint arXiv:2206.08474, Jun. 2022. https://doi.org/10.48550/arXiv.2206.08474
Z. Zhang, C. Chen, B. Liu, C. Liao, Z. Gong, H. Yu, J. Li, and R. Wang, “Unifying the perspectives of NLP and software engineering: A survey on language models for code,” arXiv preprint arXiv:2311.07989v3, Dec. 2023. https://arxiv.org/html/2311.07989v3
H. Lu et al., “DeepSeek-VL: Towards real-world vision-language understanding,” arXiv preprint arXiv:2403.05525, Mar. 2024. https://doi.org/10.48550/arXiv.2403.05525
W. Kim, “Multimodal foundation models: A taxonomy and reflections,” International Journal of Web and Grid Services, vol. 20, no. 4, pp. 505–531, Dec. 2024. https://doi.org/10.1504/IJWGS.2024.143177
A. J. Adetayo, M. O. Aborisade, and B. A. Sanni, “Microsoft Copilot and Anthropic Claude AI in education and library service,” Library Hi Tech News, Jan. 2024. https://doi.org/10.1108/LHTN-01-2024-0002
M. Imran and N. Almusharraf, “Google Gemini as a next generation AI educational tool: A review of emerging educational technology,” Smart Learning Environments, vol. 11, no. 1, May 2024, Art. no. 22. https://doi.org/10.1186/s40561-024-00310-z
M. A. Akib, M. M. Mazumder, and S. Ahsan, “Analysis on LLMs performance for code summarization,” arXiv preprint arXiv:2412.17094, Dec. 2024. https://doi.org/10.48550/arXiv.2412.17094
D. Noever and F. McKee, “Numeracy from literacy: Data science as an emergent skill from large language models,” arXiv preprint arXiv:2301.13382, Jan. 2023. https://doi.org/10.48550/arXiv.2301.13382
F. F. Xu, U. Alon, G. Neubig, and V. J. Hellendoorn, “A systematic evaluation of large language models of code,” in Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming (MAPS 2022), Jun. 2022, pp. 1–10. https://doi.org/10.1145/3520312.3534862
V. Corso, L. Mariani, D. Micucci, and O. Riganelli, “Assessing AI-based code assistants in method generation tasks,” in Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings (ICSE-Companion’24), May 2024, pp. 380–381. https://doi.org/10.1145/3639478.3643122
A. Cohan, S. Feldman, I. Beltagy, D. Downey, and D. S. Weld, “SPECTER: Document-level representation learning using citation-informed transformers,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Jul. 2020, pp. 2270–2282. https://doi.org/10.18653/v1/2020.acl-main.207
D. N. Palacio, A. Velasco, D. Rodriguez-Cardenas, K. Moran, and D. Poshyvanyk, “Evaluating and explaining large language models for code using syntactic structures,” arXiv preprint arXiv:2308.03873, Aug. 2023. https://doi.org/10.48550/arXiv.2308.03873
P. P. Ray, “ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope,” Internet of Things and Cyber-Physical Systems, vol. 3, pp. 121–154, Jan. 2023. https://doi.org/10.1016/j.iotcps.2023.04.003
L. Murr, M. Grainger, and D. Y. Gao, “Testing LLMs on code generation with varying levels of prompt specificity,” arXiv preprint arXiv:2311.07599, Nov. 2023. https://doi.org/10.48550/arXiv.2311.07599
S. Bhatia, S. Kohli, S. A. Seshia, and A. Cheung, “Building code transpilers for domain-specific languages using program synthesis (experience paper),” in 37th European Conference on Object-Oriented Programming (ECOOP 2023), 2023, pp. 1–30. https://dnb.info/1367153115/34
H. Lunnikivi, K. Jylkkä, and T. Hämäläinen, “Transpiling Python to Rust for optimized performance,” in International Conference on Embedded Computer Systems, Oct. 2020, pp. 127–138. https://doi.org/10.1007/978-3-030-60939-9_9
F. A. Bastidas and M. Pérez, “A systematic review on transpiler usage for transaction-oriented applications,” in 2018 IEEE Third Ecuador Technical Chapters Meeting (ETCM), Cuenca, Ecuador, Oct. 2018, pp. 1–6. https://doi.org/10.1109/ETCM.2018.8580312
J. White, S. Hays, Q. Fu, J. Spencer-Smith, and D. C. Schmidt, “ChatGPT prompt patterns for improving code quality, refactoring, requirements elicitation, and software design,” in Generative AI for Effective Software Development, A. Nguyen-Duc, P. Abrahamsson, and F. Khomh, Eds. Springer, Cham, Jan. 2024, pp. 71–108. https://doi.org/10.1007/978-3-031-55642-5_4
Y. Chen, C. Wong, H. Yang, J. Aguenza, S. Bhujangari, B. C. Vu, X. Lei, A. Prasad, M. Fluss, E. Phuong, M. Liu, and J. Davis, “Assessing the impact of prompting, persona, and chain of thought methods on ChatGPT’s arithmetic capabilities,” arXiv preprint arXiv:2312.15006, Dec. 2023. https://doi.org/10.48550/arXiv.2312.15006
B. Chen, Z. Zhang, N. Langrené, and S. Zhu, “Unleashing the potential of prompt engineering in large language models,” Patterns, vol. 6, no. 6, Jun. 2025, Art. no. 101260. https://doi.org/10.1016/j.patter.2025.101260
B. Dornauer, M. Felderer, J. Weinzerl, M.-C. Racasan, and M. Heß, “SoHist: A tool for managing technical debt through retro perspective code analysis,” in Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering (EASE ’23), Jun. 2023, pp. 184–187. https://doi.org/10.1145/3593434.3593460
A. Puspaningrum, M. A. A. Hilmi, Darsih, M. Z. Mustamiin, and M. I. Ginanjar, “Vulnerable source code detection using SonarCloud code analysis,” in Proceedings of the 5th International Conference on Applied Science and Technology on Engineering Science (iCAST-ES 2022), vol. 1, Bandung, Indonesia, Jan. 2022, pp. 683–687. https://doi.org/10.5220/0011862600003575
V. Bhutani, F. G. Toosi, and J. Buckley, “Analysing the analysers: An investigation of source code analysis tools,” Applied Computer Systems, vol. 29, no. 1, pp. 98–111, Jun. 2024. https://doi.org/10.2478/acss-2024-0013
M. Riaz, E. Mendes, and E. Tempero, “A systematic review of software maintainability prediction and metrics,” in 2009 3rd International Symposium on Empirical Software Engineering and Measurement, Lake Buena Vista, FL, USA, Oct. 2009, pp. 367–377. https://doi.org/10.1109/ESEM.2009.5314233
M. Weyssow, X. Zhou, K. Kim, D. Lo, and H. Sahraoui, “Exploring parameter-efficient fine-tuning techniques for code generation with large language models,” arXiv preprint arXiv:2308.10462, Aug. 2023. https://doi.org/10.48550/arXiv.2308.10462
A. Malyala, K. Zhou, B. Ray, and S. Chakraborty, “On ML-based program translation: Perils and promises,” arXiv preprint arXiv:2302.10812, Feb. 2023. https://doi.org/10.48550/arXiv.2302.10812
W. Yan, Y. Tian, Y. Li, Q. Chen, and W. Wang, “CodeTransOcean: A comprehensive multilingual benchmark for code translation,” arXiv preprint arXiv:2310.04951, Oct. 2023. https://doi.org/10.48550/arXiv.2310.04951
L. Chen, “Research on code generation technology based on LLM pre-training,” Frontiers in Computing and Intelligent Systems, vol. 10, no. 1, pp. 69–75, Oct. 2024. https://doi.org/10.54097/scrwpt34
C. Tony, N. E. D. Ferreyra, M. Mutas, S. Dhiff, and R. Scandariato, “Prompting techniques for secure code generation: A systematic investigation,” arXiv preprint arXiv:2407.07064, Jul. 2024. https://doi.org/10.48550/arXiv.2407.07064
I. Paul, J. Luo, G. Glavaš, and I. Gurevych, “IRCoder: Intermediate representations make language models robust multilingual code generators,” arXiv preprint arXiv:2403.03894v1, Mar. 2024. https://arxiv.org/html/2403.03894v1
N. Raihan, C. D. Newman, and M. Zampieri, “Code LLMs: A taxonomy-based survey,” arXiv preprint arXiv:2412.08291, Dec. 2024. https://doi.org/10.48550/arXiv.2412.08291
P. Devanbu, “New initiative: The naturalness of software,” in 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, vol. 2, Florence, Italy, May 2015, pp. 543–546. https://doi.org/10.1109/ICSE.2015.190
N. Jain, T. Zhang, W.-L. Chiang, J. E. Gonzalez, K. Sen, and I. Stoica, “LLM-assisted code cleaning for training accurate code generators,” arXiv preprint arXiv:2311.14904, Nov. 2023. https://doi.org/10.48550/arXiv.2311.14904
H. Husain, H.-H. Wu, T. Gazit, M. Allamanis, and M. Brockschmidt, “CodeSearchNet challenge: Evaluating the state of semantic code search,” arXiv preprint arXiv:1909.09436, 2019. https://arxiv.org/pdf/1909.09436
M. Zhu, K. Suresh, and C. K. Reddy, “Multilingual code snippets training for program translation,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 10, pp. 11783–11790, Jun. 2022. https://doi.org/10.1609/aaai.v36i10.21434
X. Chen, C. Liu, and D. Song, “Tree-to-tree neural networks for program translation,” Advances in Neural Information Processing Systems, vol. 31, 2018.
D. Guo et al., “GraphCodeBERT: Pre-training code representations with data flow,” arXiv preprint arXiv:2009.08366, Sep. 2020. https://doi.org/10.48550/arXiv.2009.08366
W. U. Ahmad, G. R. Tushar, S. Chakraborty, and K.-W. Chang, “AVATAR: A parallel corpus for Java-Python program translation,” arXiv preprint arXiv:2108.11590, Aug. 2021. https://doi.org/10.48550/arXiv.2108.11590
J. D. Weisz, M. Muller, S. I. Ross, F. Martinez, S. Houde, M. Agarwal, K. Talamadupula, and J. T. Richards, “Better together? An evaluation of AI-supported code translation,” in 27th International Conference on Intelligent User Interfaces, Mar. 2022, pp. 369–391. https://doi.org/10.1145/3490099.3511157