References
1. Anger, Z. (2017). List of profanity in English. Retrieved from https://github.com/zacanger/profane-words
2. Baumgartner, J., Zannettou, S., Squire, M., & Blackburn, J. (2020). The Pushshift Telegram dataset. Proceedings of the International AAAI Conference on Web and Social Media, 14(1), 840–847. Retrieved from https://ojs.aaai.org/index.php/ICWSM/article/view/7348
3. Benesch, S., Buerger, C., & Glavinic, T. (2018). Dangerous speech: A practical guide. http://dangerousspeech.org. DOI: 10.15868/socialsector.34064
4. Bleich, E. (2011). The rise of hate speech and hate crime laws in liberal democracies. Journal of Ethnic and Migration Studies, 37(6), 917–934. DOI: 10.1080/1369183X.2011.576195
5. Bojanowski, P., Grave, E., Joulin, A., & Mikolov, T. (2016). Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. DOI: 10.1162/tacl_a_00051
6. Boot, A. B., Tjong Kim Sang, E., Dijkstra, K., et al. (2019). How character limit affects language usage in tweets. Palgrave Communications, 5(76). DOI: 10.1057/s41599-019-0280-3
7. Brison, S. J., & Gelber, K. (Eds.). (2019). Free Speech in the Digital Age. Oxford: Oxford University Press. DOI: 10.1093/oso/9780190883591.001.0001
8. Brown, A. (2017). What is hate speech? Part II: Family resemblances. Law and Philosophy, 36, 561–613. DOI: 10.1007/s10982-017-9300-x
9. Brown, A. (2019). The meaning of silence in cyberspace: The authority problem and online hate speech. In S. J. Brison & K. Gelber (Eds.), Free Speech in the Digital Age (pp. 207–223). Oxford: Oxford University Press. DOI: 10.1093/oso/9780190883591.003.0013
10. Brown, B. (2016). Official Trump Twitter Archive V2 source. Retrieved from https://www.thetrumparchive.com
11. Chandrasekharan, E., Pavalanathan, U., Srinivasan, A., Glynn, A., Eisenstein, J., & Gilbert, E. (2017). You can't stay here: The efficacy of Reddit's 2015 ban examined through hate speech. Proceedings of the ACM on Human-Computer Interaction, 1(2), 1–22. DOI: 10.1145/3134666
12. Culpeper, J. (1996). Towards an anatomy of impoliteness. Journal of Pragmatics, 25(3), 349–367. DOI: 10.1016/0378-2166(95)00014-3
13. Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated hate speech detection and the problem of offensive language. Proceedings of the International AAAI Conference on Web and Social Media, 11(1). Retrieved from https://ojs.aaai.org/index.php/ICWSM/article/view/14955
14. ECRI. (2016). General Policy Recommendation No. 15 on Combating Hate Speech, December 8, 2015, Strasbourg. Retrieved from https://rm.coe.int/ecri-general-policy-recommendation-no-15-on-combating-hate-speech/16808b5b01
15. Gelber, K. (2019). Differentiating hate speech: A systemic discrimination approach. Critical Review of International Social and Political Philosophy, 24(4), 394–414. DOI: 10.1080/13698230.2019.1576006
16. Heinze, E. (2016). Hate Speech and Democratic Citizenship. Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780198759027.001.0001
17. Howard, J. (2019). Dangerous speech. Philosophy & Public Affairs, 47, 208–254. DOI: 10.1111/papa.12145
18. Jeshion, R. (2021). Varieties of pejoratives. In J. Khoo & R. Sterken (Eds.), The Routledge Handbook of Social and Political Philosophy of Language (pp. 211–231). New York: Routledge. DOI: 10.4324/9781003164869-17
19. Langton, R. (2012). Beyond belief: Pragmatics in hate speech and pornography. In I. Maitra & M. K. McGowan (Eds.), Speech and Harm: Controversies over Free Speech (pp. 72–93). Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780199236282.001.0001
20. Maitra, I. (2012). Subordinating speech. In I. Maitra & M. K. McGowan (Eds.), Speech and Harm: Controversies over Free Speech. Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780199236282.003.0005
21. Matsuda, M., Lawrence, C., Delgado, R., & Crenshaw, K. (Eds.). (1993). Words That Wound: Critical Race Theory, Assaultive Speech, and the First Amendment. Colorado: Westview Press.
22. McGowan, M. K. (2019). Just Words: On Speech and Hidden Harm. Oxford: Oxford University Press. DOI: 10.1093/oso/9780198829706.001.0001
23. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
24. Nakayama, H. (2017). HateSonar: Hate speech detection library for Python. Retrieved from https://github.com/Hironsan/HateSonar
25. Oster, J. (2015). Incitement to hatred. In Media Freedom as a Fundamental Right (Cambridge Intellectual Property and Information Law, pp. 223–240). Cambridge: Cambridge University Press. DOI: 10.1017/CBO9781316162736.013
26. Palmer, A., Carr, C., Robinson, M., & Sanders, J. (2020). COLD: Annotation scheme and evaluation data set for complex offensive language in English. Journal for Language Technology and Computational Linguistics, 34(1), 1–28. Retrieved from https://jlcl.org/content/2-allissues/1-heft1-2020/jlcl_2020-1.pdf#page=11
27. Popa-Wyatt, M., & Wyatt, J. L. (2018). Slurs, roles and power. Philosophical Studies, 175(11), 2879–2906. DOI: 10.1007/s11098-017-0986-2
28. Prucha, N. (2016). IS and the jihadist information highway – Projecting influence and religious identity via Telegram. Perspectives on Terrorism, 10(6). Retrieved from http://www.terrorismanalysts.com/pt/index.php/pot/article/view/556/1102
29. Poletto, F., Basile, V., Sanguinetti, M., et al. (2020). Resources and benchmark corpora for hate speech detection: A systematic review. Language Resources and Evaluation, 55, 477–523. DOI: 10.1007/s10579-020-09502-8
30. Scheffler, T. (2014). A German Twitter snapshot. Proceedings of LREC 2014, Reykjavik, Iceland. Retrieved from http://www.lrec-conf.org/proceedings/lrec2014/pdf/1146_Paper.pdf
31. Shehabat, A., Mitew, T., & Alzoubi, Y. (2017). Encrypted jihad: Investigating the role of the Telegram app in lone wolf attacks in the West. Journal of Strategic Security, 10(3), 27–53. DOI: 10.5038/1944-0472.10.3.1604
32. Shutterstock. (2020). List of Dirty, Naughty, Obscene, and Otherwise Bad Words. Retrieved from https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words
33. Solopova, V., Scheffler, T., & Popa-Wyatt, M. (2021). A Telegram corpus for hate speech, offensive language, and online harm. Journal of Open Humanities Data, 7: X, 1–5. DOI: 10.5334/johd.32
34. Stenetorp, P., Pyysalo, S., Topić, G., Ohta, T., Ananiadou, S., & Tsujii, J. (2012). brat: A web-based tool for NLP-assisted text annotation. Proceedings of the Demonstrations Session at EACL 2012. Association for Computational Linguistics. Retrieved from https://www.aclweb.org/anthology/E12-2021
35. Tirrell, L. (2012). Genocidal language games. In I. Maitra & M. K. McGowan (Eds.), Speech and Harm: Controversies over Free Speech (pp. 174–221). Oxford: Oxford University Press. DOI: 10.1093/acprof:oso/9780199236282.003.0008
36. Tirrell, L. (2017). Toxic speech: Toward an epidemiology of discursive harm. Philosophical Topics, 45(2), 139–161. DOI: 10.5840/philtopics201745217
37. Vidgen, B., & Derczynski, L. (2020). Directions in abusive language training data, a systematic review: Garbage in, garbage out. PLoS ONE, 15(12), e0243300. DOI: 10.1371/journal.pone.0243300
38. Waldron, J. (2014). The Harm in Hate Speech. Cambridge, MA: Harvard University Press.
39. Yayla, A. S., & Speckhard, A. (2017). Telegram: The mighty application that ISIS loves. International Center for the Study of Violent Extremism, Technical Report. Retrieved from https://www.icsve.org/telegram-the-mighty-application-that-isis-loves/
40. Yin, W., & Zubiaga, A. (2021). Towards generalisable hate speech detection: A review on obstacles and solutions. arXiv preprint arXiv:2102.08886. DOI: 10.7717/peerj-cs.598
