
Unleashing the potential: harnessing generative artificial intelligence for empowering model training

Open Access | Jul 2024

References

  1. Anagnoste, S. (2018). Robotic Automation Process - The operating system for the digital enterprise. Proceedings of the International Conference on Business Excellence, 12(1), 54-69. doi:https://doi.org/10.2478/picbe-2018-0007
  2. Anagnoste, S. (2024, March). Today’s GenAI capabilities have been developing since 2009. Bucharest: OnStrategy.
  3. Arcila, B. B. (2023). Is it a Platform? Is it a Search Engine? It’s Chat GPT! The European Liability Regime for Large Language Models. J. Free Speech L., 3, 455.
  4. Bezko, G. (2023, December 7). Understanding AI, ML & Co. in Contact Centers: Definitions and Explanations. Retrieved from Miarec: https://blog.miarec.com/contact-centers-ai-definition
  5. Boom, C. D., Canneyt, S. V., Demeester, T., & Dhoedt, B. (2016). Representation learning for very short texts using weighted word embedding aggregation. Pattern Recognition Letters, 150–156.
  6. Cai, Y., Mao, S., Wu, W., Wang, Z., Liang, Y., Ge, T., . . . Duan, N. (2023). Low-code LLM: Visual Programming over LLMs. arXiv preprint arXiv:2304.08103. doi:https://doi.org/10.48550/arXiv.2304.08103
  7. Calzone, O. (2022, February 21). An Intuitive Explanation of LSTM. Retrieved from Medium: https://medium.com/@ottaviocalzone/an-intuitive-explanation-of-lstm-a035eb6ab42c
  8. Caprasi, C. (2023, July 21). Artificial Intelligence, Machine Learning, Deep Learning, GenAI and more. Retrieved from Medium - Women in Technology: https://medium.com/womenintechnology/ai-c3412c5aa0ac
  9. Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., . . . Zemmel, R. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey & Company, McKinsey Digital. Retrieved from https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction
  10. Data Base Camp. (2022, June 4). Long Short-Term Memory Networks (LSTM)- simply explained! Retrieved from Data Base Camp: https://databasecamp.de/en/ml/lstms
  11. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805v2. doi:https://doi.org/10.48550/arXiv.1810.04805
  12. Fatima, N., Imran, A. S., Zenun Kastrati, S. M., & Soomro, A. (2022). A Systematic Literature Review on Text Generation Using Deep Neural Network Models. IEEE Access, 10.
  13. Feng, S. Y., Gangal, V., Wei, J., Chandar, S., Vosoughi, S., Mitamura, T., & Hovy, E. (2021). A Survey of Data Augmentation Approaches for NLP. Findings of the Association for Computational Linguistics: ACL-IJCNLP, 968–988.
  14. Gartner. (2023, July 31). Gartner Says Conversational AI Capabilities Will Help Drive Worldwide Contact Center Market to 16% Growth in 2023. Retrieved from Gartner: https://www.gartner.com/en/newsroom/press-releases/2023-07-31-gartner-says-conversational-ai-capabilities-will-help-drive-worldwide-contact-center-market-to-16-percent-growth-in-2023
  15. Gartner. (2023). Generative AI. Retrieved from Gartner: https://www.gartner.com/en/information-technology/glossary/generative-ai
  16. Gruetzemacher, R. (2022). The Power of Natural Language Processing. AI And Machine Learning. Retrieved from https://hbr.org/2022/04/the-power-of-natural-language-processing
  17. Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8), 1735-1780. doi:https://doi.org/10.1162/neco.1997.9.8.1735
  18. IBM. (2023, July 6). AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the difference? Retrieved from IBM: https://www.ibm.com/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks/
  19. Jain, R., Gervasoni, N., Ndhlovu, M., & Rawat, S. (2023). A Code Centric Evaluation of C/C++ Vulnerability Datasets for Deep Learning Based Vulnerability Detection Techniques. Proceedings of the16th Innovations in Software Engineering Conference, 1–10.
  20. Kang, H., Wu, H., & Zhang, X. (2020). Generative Text Steganography Based on LSTM Network and Attention Mechanism with Keywords. Electronic Imaging.
  21. Kumar, T. S. (2022, August 26). Natural Language Processing – Sentiment Analysis using LSTM. Retrieved from Analytics Vidhya: https://www.analyticsvidhya.com/blog/2021/06/natural-language-processing-sentiment-analysis-using-lstm/
  22. Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., . . . Zettlemoyer, L. (2019). BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. arXiv preprint arXiv:1910.13461.
  23. Li, X., Zhu, X., Ma, Z., Liu, X., & Shah, S. (2023). Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? An Examination on Several Typical Tasks. arXiv preprint arXiv:2305.05862. doi:https://doi.org/10.48550/arXiv.2305.05862
  24. Lindemann, B., Müller, T., Vietz, H., Jazdi, N., & Weyrich, M. (2021). A survey on long short-term memory networks for time series prediction. Procedia CIRP, 99, 650-655.
  25. Liu, Y., & Lapata, M. (2018). Learning Structured Text Representations. Trans. Assoc. Comput. Linguist., 6, 63-75. doi:https://doi.org/10.48550/arXiv.1705.09207
  26. Luaran, N., & Alfred, R. (2022). Assessment of the Optimization of Hyperparameters in Deep LSTM for Time Series Sea Water Tidal Shift. Research Square.
  27. Medium. (2019). Recurrent Neural Network and Long Term Dependencies. Retrieved from Medium: https://infolksgroup.medium.com/recurrent-neural-network-and-long-term-dependencies-e21773defd92
  28. Mungalpara, J. (2022, July 26). Stemming Lemmatization Stopwords and N-Grams in NLP. Retrieved from Medium: https://jaimin-ml2001.medium.com/stemming-lemmatizationstopwords-and-n-grams-in-nlp-96f8e8b6aa6f
  29. Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S., Usman, M., . . . Mian, A. (2024). A Comprehensive Overview of Large Language Models. Preprint. Retrieved from https://doi.org/10.48550/arXiv.2307.06435
  30. Noaman, H. M., Sarhan, S. S., & Rashwan, M. A. (2018). Enhancing recurrent neural network-based language models by word tokenization. Human-centric Computing and Information Sciences, 8(12). doi:https://doi.org/10.1186/s13673-018-0133-x
  31. Novelli, C., Casolari, F., Rotolo, A., Taddeo, M., & Floridi, L. (2023). Taking AI Risks Seriously: a Proposal for the AI Act. AI & SOCIETY, 1-5.
  32. Olah, C. (2015). Understanding LSTM Networks. Retrieved from colah’s blog: https://colah.github.io/posts/2015-08-Understanding-LSTMs/
  33. Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018, June). Deep Contextualized Word Representations. (M. Walker, H. Ji, & A. Stent, Eds.) Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2227–2237. doi:10.18653/v1/N18-1202
  34. Razin, M. J., Karim, M. A., Mridha, M. F., Rifat, S. M., & Alam, T. (2021). A Long Short-Term Memory (LSTM) Model for Business Sentiment Analysis Based on Recurrent Neural Network. Sustainable Communication Networks and Application. doi:https://doi.org/10.1007/978-981-15-8677-4_1
  35. Sak, H., Senior, A., & Beaufays, F. (2014). Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling. INTERSPEECH, 338-342.
  36. Săniuță, A., & Filip, S.-O. (2021). Artificial Intelligence: An Overview of European and Romanian Startups Landscape and the Factors that Determine their Success. Strategica. Shaping the Future of Business and Economy, 872-884.
  37. Thirunavukarasu, A. J., Ting, D. S., Elangovan, K., Gutierrez, L., Tan, T. F., & Ting, D. S. (2023). Large language models in medicine. Nature medicine, 29(8), 1930–1940.
  38. Wang, C.-F. (2019, January 8). The Vanishing Gradient Problem. (T. D. Science, Editor) Retrieved March 1, 2024, from Medium: https://towardsdatascience.com/the-vanishing-gradient-problem-69bf08b15484
  39. Wolfram, S. (2023, February 14). What Is ChatGPT Doing … and Why Does It Work? Retrieved from https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
  40. Yao, Y., Duan, J., Xu, K., Cai, Y., Sun, Z., & Zhang, Y. (2023). A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly. Preprint submitted to Elsevier.
  41. Zhang, W., Li, Y., & Wang, S. (2019). Learning document representation via topic-enhanced LSTM model. Knowledge-Based Systems, 174, 194–204. doi:https://doi.org/10.1016/j.knosys.2019.03.007
  42. Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., . . . Wen, J.-R. (2023). A Survey of Large Language Models. arXiv preprint arXiv:2303.18223. doi:https://doi.org/10.48550/arXiv.2303.18223
Language: English
Page range: 3618 - 3635
Published on: Jul 22, 2024
Published by: Bucharest University of Economic Studies
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2024 Alexandra-Mihaela Dumitru, Sorin Anagnoste, Marco Savastano, published by Bucharest University of Economic Studies
This work is licensed under the Creative Commons Attribution 4.0 License.