References
- Jin, H., Yang, Z., Meng, D., Wang, J., & Tan, J. (2024). A Comprehensive Survey on Process-Oriented Automatic Text Summarization with Exploration of LLM-Based Methods. arXiv. https://doi.org/10.48550/arxiv.2403.02901
- Brown, T. B., et al. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
- Chowdhery, A., et al. (2022). PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
- Widyassari, A. P., Rustad, S., Shidik, G. F., Noersasongko, E., Syukur, A., Affandy, A., & Setiadi, D. R. I. M. (2022). Review of automatic text summarization techniques & methods. Journal of King Saud University - Computer and Information Sciences, 34(4), 1029–1046. https://doi.org/10.1016/j.jksuci.2020.05.006
- Zhang, M., Zhou, G., Yu, W., Huang, N., & Liu, W. (2022). A comprehensive survey of abstractive text summarization based on deep learning. Computational Intelligence and Neuroscience, 2022, 1–21. https://doi.org/10.1155/2022/7132226
- A survey of automatic text summarization: Progress, process and challenges. (2021). IEEE Access. https://ieeexplore.ieee.org/abstract/document/9623462/
- Gatt, A., & Krahmer, E. (2018). Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61, 65–170. https://doi.org/10.1613/jair.5477
- Hadi, M. U., Tashi, Q. A., Qureshi, R., Shah, A., Muneer, A., Irfan, M., Zafar, A., Shaikh, M. B., Akhtar, N., Wu, J., & Mirjalili, S. (2023). A survey on large language models: Applications, challenges, limitations, and practical usage. TechRxiv. https://doi.org/10.36227/techrxiv.23589741.v1
- Pan, J. Z., Razniewski, S., Kalo, J., Singhania, S., Chen, J., Dietze, S., Jabeen, H., Omeliyanenko, J., Zhang, W., Lissandrini, M., Biswas, R., De Melo, G., Bonifati, A., Vakaj, E., Dragoni, M., & Graux, D. (2023). Large language models and knowledge graphs: Opportunities and challenges. arXiv. https://doi.org/10.48550/arxiv.2308.06374
- Shanker, S., & King, B. J. (2002). The emergence of a new paradigm in ape language research. Behavioral and Brain Sciences, 25(5), 605–620. https://doi.org/10.1017/s0140525x02000110
- Albahri, A. S., Duhaim, A. M., Fadhel, M. A., Alnoor, A., Baqer, N. S., Alzubaidi, L., Albahri, O. S., Alamoodi, A. H., Bai, J., Salhi, A., Santamaría, J., Ouyang, C., Gupta, A., Gu, Y., & Deveci, M. (2023). A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion. Information Fusion, 96, 156–191. https://doi.org/10.1016/j.inffus.2023.03.008
- Yliopisto, O., Juustila, A., Rajanen, D., & Rajanen, D. (2017, March 14). Cloud computing: Migrating to the cloud, Amazon Web Services and Google Cloud Platform. OuluREPO. https://urn.fi/URN:NBN:fi:oulu-201703151365
- Kianian, R., Sun, D., Crowell, E. L., & Tsui, E. (2024). The Use of Large Language Models to Generate Education Materials about Uveitis. Ophthalmology Retina, 8(2), 195–201. https://doi.org/10.1016/j.oret.2023.09.008
- Yadav, D., Desai, J., & Yadav, A. K. (2022). Automatic Text Summarization Methods: A Comprehensive Review. arXiv preprint arXiv:2204.01849.
- Grabeel, K. L., Russomanno, J., Oelschlegel, S., Tester, E., & Heidel, R. E. (2018). Computerized versus hand-scored health literacy tools: A comparison of Simple Measure of Gobbledygook (SMOG) and Flesch-Kincaid in printed patient education materials. Journal of the Medical Library Association, 106(1). https://doi.org/10.5195/jmla.2018.262
- Eid, K., Eid, A. A., Wang, D., Raiker, R. S., Chen, S., & Nguyen, J. (2023). Optimizing ophthalmology patient education via chatbot-generated materials: Readability analysis of AI-generated patient education materials and the American Society of Ophthalmic Plastic and Reconstructive Surgery patient brochures. Ophthalmic Plastic and Reconstructive Surgery. https://doi.org/10.1097/iop.0000000000002549
- Hwang, Y. H., Um, J., Pradhan, B., Choudhury, T., & Schlüter, S. (2023). How does ChatGPT evaluate the value of spatial information in the 4th industrial revolution? Spatial Information Research. https://doi.org/10.1007/s41324-023-00567-5
- Pal, A., & Sankarasubbu, M. (2024). Gemini goes to Med School: Exploring the capabilities of multimodal large language models on medical challenge problems & hallucinations. arXiv. https://doi.org/10.48550/arxiv.2402.07023
- Minaee, S., Mikolov, T., Nikzad, N., Chenaghlu, M., Socher, R., Amatriain, X., & Gao, J. (2024). Large Language Models: A survey. arXiv. https://doi.org/10.48550/arxiv.2402.06196
- Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). Attention is all you need. arXiv. https://doi.org/10.48550/arxiv.1706.03762
- Zhu, W., Liu, H., Dong, Q., et al. (2023). Multilingual machine translation with large language models: Empirical results and analysis. arXiv. https://doi.org/10.48550/arxiv.2304.04675
- Van Veen, D., Van Uden, C., Blankemeier, L., et al. (2023). Clinical text summarization: Adapting large language models can outperform human experts. arXiv. https://doi.org/10.48550/arxiv.2309.07430
- Xiao, L., Wang, L., He, H., & Jin, Y. (2020). Modeling content importance for summarization with pre-trained language models. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). https://doi.org/10.18653/v1/2020.emnlp-main.293
- Bajaj, A., Dangati, P., Krishna, K., et al. (2021). Long document summarization in a low resource setting using pretrained language models. arXiv. https://doi.org/10.48550/arxiv.2103.00751
- Wang, Q., Liu, D., Cao, Y., et al. (2023). Recursively summarizing enables long-term dialogue memory in large language models. arXiv. https://doi.org/10.48550/arxiv.2308.15022
- Zhang, T., Ladhak, F., Durmus, E., Liang, P., McKeown, K., & Hashimoto, T. (2023). Benchmarking large language models for news summarization. arXiv. https://doi.org/10.48550/arxiv.2301.13848
- Eleyan, D., Othman, A., & Eleyan, A. (2020). Enhancing software comments readability using Flesch Reading Ease score. Information, 11(9), 430. https://doi.org/10.3390/info11090430
- Moncada, F. M., & Pabico, J. P. (2015). On GobbleDyGook and mood of the Philippine Senate: An exploratory study on the readability and sentiment of selected Philippine senators’ microposts. arXiv. https://arxiv.org/pdf/1508.01321.pdf
- Alawad, D., Panta, M., Zibran, M. F., & Islam, R. (2019). An empirical study of the relationships between code readability and software complexity. arXiv. https://arxiv.org/pdf/1909.01760
- Sari, D. C. (2018). Measuring quality of reading materials in English textbook: The use of lexical density method in assessing complexity of reading materials of Indonesia’s Curriculum – 13 (K13) English textbook. Journal of Applied Linguistics and Literature, 1(2), 30–39. https://doi.org/10.33369/joall.v1i2.4177
- Mihalcea, R., Corley, C. D., & Strapparava, C. (2006). Corpus-based and knowledge-based measures of text semantic similarity. ResearchGate. https://www.researchgate.net/publication/221606405_Corpus-based_and_Knowledge-based_Measures_of_Text_Semantic_Similarity
- Shen, T., Jin, R., Huang, Y., Liu, C., Dong, W., Guo, Z. J., Wu, X., Liu, Y., & Xiong, D. (2023). Large Language Model alignment: A survey. arXiv. https://doi.org/10.48550/arxiv.2309.15025
- Jin, H., Yang, Z., Meng, D., Wang, J., & Tan, J. (2024). A Comprehensive Survey on Process-Oriented Automatic Text Summarization with Exploration of LLM-Based Methods. arXiv. https://doi.org/10.48550/arxiv.2403.02901
- Yadav, D., Desai, J., & Yadav, A. K. (2022). Automatic Text Summarization Methods: A Comprehensive Review. arXiv. https://doi.org/10.48550/arxiv.2204.01849
- Google Cloud. Vertex AI generative AI model documentation. https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models (accessed February 15, 2024).
- Marr, B. (2018, May 21). How much data do we create every day? The mind-blowing stats everyone should read. Forbes. https://www.forbes.com/sites/bernardmarr/2018/05/21/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read/?sh=392017b560ba (accessed January 11, 2024).
- Zaretsky, J., Kim, J. M., Baskharoun, S., et al. (2024). Generative Artificial Intelligence to Transform Inpatient Discharge Summaries to Patient-Friendly Language and Format. JAMA Network Open, 7(3), e240357. https://doi.org/10.1001/jamanetworkopen.2024.0357
- Raja, H., & Lodhi, S. (2024). Assessing the readability and quality of online information on anosmia. Annals of the Royal College of Surgeons of England, 106(2), 178–184. https://doi.org/10.1308/rcsann.2022.0147
- Shet, S. S., Murphy, B., Boran, S., & Taylor, C. (2024). Readability of online information for parents concerning Paediatric In-Toeing: An analysis of the most popular online public sources. Cureus. https://doi.org/10.7759/cureus.57268
- Clavié, B., Ciceu, A., Naylor, F., Soulié, G., & Brightwell, T. (2023). Large Language Models in the workplace: A case study on prompt engineering for job type classification. arXiv. https://doi.org/10.48550/arxiv.2303.07142