
Evolving AI models: adoption patterns of transformers and diffusers

By: Paweł Cabała  
Open Access | Apr 2026

References

  1. Ahirwar, K. (2023). A very short introduction to diffusion models. Retrieved from https://kailashahirwar.medium.com/a-very-short-introduction-to-diffusion-models-a84235e4e9ae
  2. Chen, M., Mei, S., Fan, J., & Wang, M. (2024). Opportunities and challenges of diffusion models for generative AI. National Science Review, 11(12). doi: 10.1093/nsr/nwae348
  3. Cong, S., Wang, H., Zhou, Y., Wang, Z., Yao, X., & Yang, C. (2023). Comprehensive review of Transformer-based models in neuroscience, neurology and psychiatry. Brain and Behavior, 13(2). doi: 10.1002/brx2.57
  4. Gallon, D., Jentzen, A., & von Wurstemberger, P. (2024). An overview of diffusion models for generative artificial intelligence. arXiv. doi: 10.48550/arXiv.2412.01371
  5. Gupta, P. (2023). Transformer models: A breakthrough in artificial intelligence. Retrieved from https://medium.com/%40prashantgupta17/transformer-models-a-breakthrough-in-artificial-intelligence-e3de92d37f8f
  6. Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33, 6840-6851.
  7. Hussain, Z., Mata, R., Binz, M., & Wulff, D. U. (2024). A tutorial on open-source large language models for behavioral science. Behavior Research Methods, 56, 8214-8237. doi: 10.3758/s13428-024-02455-8
  8. Islam, S., Elmekki, H., Elsebai, A., Bentahar, J., Drawel, N., Rjoub, G., & Pedrycz, W. (2024). A comprehensive survey on applications of transformers for deep learning tasks. Expert Systems with Applications, 213, 122666. doi: 10.1016/j.eswa.2023.122666
  9. Jones, J., Jiang, W., Synovic, N., Thiruvathukal, G. K., & Davis, J. C. (2024). What do we know about Hugging Face? A systematic literature review and quantitative validation of qualitative claims. In Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM '24) (pp. 18-14). doi: 10.1145/3674805.3686665
  10. Lauridsen, P. S. (2025). DeepSeek: Potential and challenges in education. Retrieved from https://viden.ai/en/deepseek-potential-and-challenges-in-teaching/
  11. Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., & Zettlemoyer, L. (2020). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 7871-7880). Association for Computational Linguistics.
  12. Osborne, C., Ben Allal, L., & Cihon, P. (2024). The AI community building the future? A quantitative analysis of development activity on Hugging Face Hub. Journal of Computational Social Science, 7, 2067-2105. doi: 10.1007/s42001-024-00300-8
  13. Pantano, E., Serravalle, F., & Priporas, C.-V. (2024). The form of AI-driven luxury: How generative AI (GAI) and Large Language Models (LLMs) are transforming the creative process. Journal of Marketing Management, 40(17-18), 1771-1790. doi: 10.1080/0267257X.2024.2436096
  14. Patwardhan, N., Marrone, S., & Sansone, C. (2023). Transformers in the real world: A survey on NLP applications. Information, 14(4), 248. doi: 10.3390/info14040248
  15. Po, R., Yifan, W., Golyanik, V., Aberman, K., Barron, J. T., Bermano, A. H., Chan, E. R., Dekel, T., Holynski, A., Kanazawa, A., Liu, C. K., Liu, L., Mildenhall, B., Nießner, M., Ommer, B., Theobalt, C., Wonka, P., & Wetzstein, G. (2023). State of the art on diffusion models for visual computing, 43(2). doi: 10.48550/arXiv.2310.07204
  16. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9.
  17. Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140), 1-67.
  18. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 10684-10695). IEEE. doi: 10.1109/CVPR52688.2022.01042
  19. Rothman, D. (2022). Transformers for Natural Language Processing (2nd ed.). Birmingham, UK: Packt Publishing.
  20. Sedkaoui, S., & Benaichouba, R. (2024). Generative AI as a transformative force for innovation: A review of opportunities, applications and challenges. European Journal of Innovation Management. doi: 10.1108/EJIM-02-2024-0129
  21. Song, Y., & Ermon, S. (2019). Generative modeling by estimating gradients of the data distribution. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in Neural Information Processing Systems 32 (pp. 11895-11907). New York, USA: Curran Associates Inc.
  22. Taraghi, M., Dorcelus, G., Foundjem, A., Tambon, F., & Khomh, F. (2024). Deep learning model reuse in the HuggingFace community: Challenges, benefit and trends. In 2024 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER) (pp. 512-523). IEEE.
  23. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998-6008. doi: 10.48550/arXiv.1706.03762
  24. Yang, L., Zhang, Z., Song, Y., Hong, S., Xu, R., Zhao, Y., Zhang, W., Cui, B., & Yang, M.-H. (2023). Diffusion models: A comprehensive survey of methods and applications. ACM Computing Surveys, 55(6), 1-35. doi: 10.48550/arXiv.2209.00796
DOI: https://doi.org/10.2478/emj-2026-0005 | Journal eISSN: 2543-912X | Journal ISSN: 2543-6597
Language: English
Page range: 60 - 72
Submitted on: Aug 10, 2025 | Accepted on: Jan 10, 2026 | Published on: Apr 2, 2026
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2026 Paweł Cabała, published by Bialystok University of Technology
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.