References
- J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” arXiv preprint arXiv:1810.04805, 2018.
- V. Sanh, L. Debut, J. Chaumond, and T. Wolf, “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter,” arXiv preprint arXiv:1910.01108, 2019.
- Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, et al., “RoBERTa: A Robustly Optimized BERT Pretraining Approach,” arXiv preprint arXiv:1907.11692, 2019.
- Z. Li, Z. An, W. Cheng, J. Zhou, F. Zheng, and B. Hu, “MHA: a multimodal hierarchical attention model for depression detection in social media,” Health Information Science and Systems, vol. 11, no. 1, pp. 1–13, 2023.
- B. Cui, J. Wang, H. Lin, Y. Zhang, L. Yang, and B. Xu, “Emotion-Based Reinforcement Attention Network for Depression Detection on Social Media: Algorithm Development and Validation,” JMIR Medical Informatics, vol. 10, no. 8, e37818, 2022.
- X. Jia, J. Chen, K. Liu, Q. Wang, and J. He, “Multimodal depression detection based on an attention graph convolution and transformer,” Mathematical Biosciences and Engineering, vol. 22, no. 3, pp. 652–676, 2025.
- K. Hasan and J. Saquer, “A comparative analysis of transformer and LSTM models for detecting suicidal ideation on Reddit,” arXiv preprint arXiv:2411.15404, 2024.
- D. Izmaylov, A. Segal, K. Gal, M. Grimland, and Y. Levi-Belz, “Combining psychological theory with language models for suicide risk detection,” in Findings of the Association for Computational Linguistics: EACL 2023, 2023, pp. 2405–2414.
- K. Nikhileswar, D. Vishal, L. Sphoorthi, and S. Fathimabi, “Suicide Ideation Detection in Social Media Forums,” in Proceedings of the 2nd International Conference on Smart Electronics and Communication (ICOSEC), 2021, doi: 10.1109/ICOSEC51865.2021.9591887.