
Mapping Multiclass-Targeted Hate Speech in Online Discourse: An Open Dataset

Open Access | Apr 2026


(1) Context and motivation

Hate speech is commonly defined as language that expresses hostility, contempt, or discrimination toward individuals or groups based on protected characteristics such as race, ethnicity, religion, gender, nationality, or sexual orientation (Bajt, 2025; Lee et al., 2022). In academic research, hate speech is typically distinguished from general offensive language by the presence of a targeted social identity and the intention or effect of demeaning, marginalizing, or threatening that group. Following this perspective, the present work conceptualizes hate speech as discourse that attacks or delegitimizes a social group or its members based on identity attributes, while recognizing that the boundaries between hate speech, abusive language, and offensive expression may be context-dependent. This conceptual distinction is important for annotation design because it allows the dataset to separate identity-targeted hostility from general insults and other forms of online aggression.

The widespread use of social media platforms has intensified the circulation of harmful and exclusionary language in public spaces (Kaddoura & Nassar, 2025). Harmful language posted online is frequently directed toward individuals and communities based on gender, ethnicity, religion, or other identity markers (Scheffler et al., 2021). Hate speech represents one form of this harmful language. It can cause psychological and social harm to targeted individuals and groups, including long-term effects on mental well-being (Madriaza et al., 2025). These consequences have motivated interest in understanding how hateful language is produced, circulated, and normalized in online environments, in order to prevent its spread and thereby protect targeted individuals.

Data annotation schemes play a critical role in shaping how hate speech is documented, interpreted, and studied. Through categorization practices, annotation frameworks determine which forms of hate speech become visible in research and which remain marginal or overlooked. Prior studies have primarily relied on binary or multiclass classification to categorize online hate speech (Mubeen et al., 2025). Although such approaches have enabled large-scale analysis, they often rely on broad categories such as gender, race, and religion. Consequently, subgroup-specific forms of hate speech, such as misandry, anti-Asian, or anti-Hispanic discourse, are frequently subsumed under broader and general categories. This practice can lead to situations where subtle but socially significant manifestations of discrimination may become underrepresented or entirely obscured within aggregated categories.

The categorization of different forms of hate speech into broad and general labels has significant implications for data quality, interpretation, and downstream tasks. Class imbalance in hate speech data leads to a systematic masking of minority subcategories, which reduces the representational richness of the dataset and limits its ability to capture the full spectrum of hate speech phenomena. From a computational perspective, such consolidation can bias machine learning models toward majority patterns, reducing their sensitivity to less common but equally harmful forms of hate speech. Consequently, predictive systems trained on overly generalized labels may exhibit lower recall for minority categories, reinforcing existing data imbalances and weakening model robustness.

The imbalance and underrepresentation of minority categories also constrain qualitative, sociolinguistic, and comparative research. Researchers seeking to examine how hate speech varies across social groups, cultural backgrounds, linguistic communities, or geopolitical contexts may find it difficult to isolate meaningful differences when diverse expressions are grouped under a single label. This limitation restricts the ability to analyze discourse strategies, narrative framing, and evolving forms of hate speech specific to particular communities (Bäumler et al., 2025). Therefore, broad categorization may hinder longitudinal and cross-platform studies aimed at tracking emerging trends in online hate speech.

Existing English hate speech datasets have largely focused on binary classification to differentiate between hate and non-hate, 3-class classification (such as hate, offensive, and normal), or multiclass classification to identify main hate speech categories. Table 1 compares representative datasets, their objectives, and their classification frameworks. While several datasets identify major target categories, none explicitly support systematic subcategory-level annotation. This limits the public datasets’ usefulness for research that seeks to examine intersectionality, minority representations, and social dynamics of online hate speech.

Table 1

Comparison with existing English-language hate speech datasets, their objectives, and classification frameworks.

| DATASET | OBJECTIVE | CLASSIFICATION SCHEMA | MAIN CATEGORY CLASSIFICATION | SUBCATEGORY CLASSIFICATION |
| --- | --- | --- | --- | --- |
| Mody et al. (2023) | Hate speech detection | 2-class (hate, non-hate) | No | No |
| Davidson et al. (2017) | Detect hate speech from offensive and normal language | 3-class (hate, offensive, normal) | No | No |
| Mollas et al. (2022) | Multi-label hate speech detection | 8-class (violence, directed vs. general, gender, race, national origin, disability, religion, sexual orientation) | Yes | No |
| Waseem and Hovy (2016) | Identify racism and sexism on Twitter | 3-class (racism, sexism, neither) | Yes | No |
| Mathew et al. (2021) | Explainable hate speech classification with target identification | 3-class (hate, offensive, normal) | No | No |
| Walsh and Greaney (2025) | Categorize hate speech across multiple groups | 5-class (ethnicity, gender, sexuality, religion, non-hate) | Yes | No |
| Proposed data | Hate speech categorization | 14-class | Yes | Yes |

In response to these limitations, this paper presents a re-annotation framework that focuses on subgroup-level categorization practices. The framework is applied to a publicly available hate speech sample sourced from the HatEval2019 dataset (Basile et al., 2019) and is designed to enhance transparency, analytical granularity, and reuse. The resulting dataset comprises 14 target-specific classes and enables differentiation between subgroup-targeted hate speech and direct insults. This paper documents the annotation guidelines, interpretive decisions, and reliability measures, and supports reuse in digital humanities, discourse studies, media analysis, and social justice research. It enables scholars to investigate how hate speech, identity, and power relations are constructed and negotiated in online communication.

(2) Dataset description

The dataset consists of a single Excel file containing 5,455 hate speech samples from the Twitter (now X) platform. Table 2 shows the predefined subcategories and their distribution in the dataset. The dataset is imbalanced: Anti-Asian, Anti-Christian, and Anti-Semitic Hate Speech rarely appear, while Misogyny and Anti-Immigrant Hate Speech are strongly represented, since the samples were collected using derogatory words and highly polarized hashtags. This class imbalance between subgroups mirrors the real-world visibility of hate speech, as some groups are subjected to hate speech more than others. To increase dataset diversity and quality, researchers and data scientists can augment the minority classes to improve automated detection performance on those classes.
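The augmentation suggestion above can be sketched as naive random oversampling of the minority classes. This is an illustrative, stdlib-only sketch under our own assumptions, not part of the released dataset or any prescribed pipeline: the `oversample_minorities` helper and the placeholder texts are ours, while the label strings follow Table 2.

```python
import random
from collections import Counter

def oversample_minorities(samples, seed=0):
    """samples: list of (text, label) pairs. Returns a new list in which
    every label appears as often as the most frequent one, obtained by
    duplicating minority-class items at random (with replacement)."""
    rng = random.Random(seed)
    counts = Counter(label for _, label in samples)
    target = max(counts.values())
    by_label = {}
    for item in samples:
        by_label.setdefault(item[1], []).append(item)
    out = []
    for items in by_label.values():
        out.extend(items)
        if len(items) < target:
            out.extend(rng.choices(items, k=target - len(items)))
    return out

# Toy demonstration: texts are placeholders, labels follow Table 2.
toy = [("t1", "Misogyny"), ("t2", "Misogyny"), ("t3", "Misogyny"),
       ("t4", "Anti-Asian Hate Speech")]
balanced = oversample_minorities(toy)
```

Note that naive duplication only rebalances label frequencies; it does not add linguistic diversity, so paraphrase-based or back-translation augmentation may be preferable in practice.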

Table 2

Label distribution in the dataset.

| MAIN CATEGORY | SUBCATEGORY | COUNT |
| --- | --- | --- |
| Gender-Based Hate Speech | Misogyny | 2,019 |
| Gender-Based Hate Speech | Misandry | 29 |
| Immigration and Xenophobic Hate Speech | Anti-Immigrant Hate Speech | 1,882 |
| Immigration and Xenophobic Hate Speech | Anti-Refugee Hate Speech | 379 |
| Immigration and Xenophobic Hate Speech | Xenophobia | 51 |
| Religious Hate Speech | Islamophobia | 167 |
| Religious Hate Speech | Anti-Christian Hate Speech | 3 |
| Racial and Ethnic Hate Speech | Anti-Black Hate Speech | 105 |
| Racial and Ethnic Hate Speech | Anti-Hispanic Hate Speech | 16 |
| Racial and Ethnic Hate Speech | Anti-Semitic Hate Speech | 4 |
| Racial and Ethnic Hate Speech | Anti-Asian Hate Speech | 2 |
| Profanity and General Abuse | — | 674 |
| Threats and Violence | — | 106 |
| Hate Speech toward Countries | — | 18 |

Repository location

https://doi.org/10.6084/m9.figshare.31292419

Repository name

Figshare

Object name

Multiclass Hate Speech

Format names and versions

Excel

Creation dates

2025-09-23–2025-11-27

Dataset creators

Sanaa Kaddoura and Sumaia Al-Kohlani

Language

English

License

CC0

Publication date

2026-02-09

(3) Method

This paper adopts a re-annotation methodology that builds on the publicly available HatEval2019 dataset, released under the CC-BY-NC-4.0 license (Basile et al., 2019), which serves as a benchmark in hate speech research. Using an existing dataset enables a focused discussion of annotation design, labels, and reliability. This section describes the methodological framework underlying the re-annotation process, including data acquisition and filtering criteria, the development of multiclass target-specific labels, annotation guidelines, the annotation process, and quality control. The hate speech samples were extracted from HatEval2019 and first checked for hate versus non-hate; the target domain of each hate sample was then identified, and finally the subgroup-specific target was assigned. Figure 1 provides an overview of the re-annotation workflow, illustrating how raw binary samples from HatEval2019 are transformed into a subgroup-level hate speech resource.

Figure 1

Data curation and re-annotation workflow.

(3.1) Data Acquisition and Filtering

The dataset was selected to ensure broad coverage of hate speech subcategories. Hate speech is language that attacks individuals or groups based on race, ethnicity, religion, gender, nationality, or migration status (Bajt, 2025; Lee et al., 2022). Therefore, the selection process focused on finding a dataset whose hate speech instances contain profanities, gender-based slurs, discriminatory language directed at racial or ethnic groups, religious terms, and references to immigrants, refugees, other out-groups, or countries. This approach ensures that the resulting dataset captures a diverse range of hate speech categories. Table 3 presents the lexical indicators used to verify that the dataset encompasses a variety of hate speech subcategories. These indicators were used only to check topical coverage during dataset selection; they were not provided in the annotation guidelines, to avoid biasing the annotation. The profanities and slurs are masked to prevent exposure to harmful and offensive language while maintaining the work’s reproducibility and adherence to ethical guidelines in natural language processing.

Table 3

Masked lexical indicators used for the dataset selection process.

| WORDS | HATE SPEECH CONTEXT TYPE |
| --- | --- |
| F*** (profanity), sh*** (profanity), d*** (insult), j*** (insult), b*** (insult), a*** (insult), idiot, moron, stupid, trash, garbage, scum, vermin, animal, savage | Profanities and general insults |
| B*** (gender slur), h*** (gender slur), s*** (gender slur), w*** (gender slur), p*** (gender slur), gold-digger, feminazi | Gender-based slurs and sexist language |
| N*** (racial slur), n*** (racial slur), k*** (racial slur), Pakistani, black, white, Arab, Mexican, Asian, African, monkey | Race-related or ethnicity-targeted words |
| Muslim, jew, Islamic, Christian, hindu, jihad, jihadist, infidel, terrorist | Religion-related expressions |
| Immigrant, refugee, migrant, alien, asylum seeker, #sendthemback, invader, invasion, illegals, #buildthatwall, foreigner | Anti-immigrant and anti-refugee language |
| Chinese, China, Russian, Russia, middle eastern, Australia, American | Country or nationality targeting terms |

The hate speech samples in the dataset were extracted from HatEval2019, a publicly available collection of tweets from the Twitter (now X) platform. HatEval2019 consists of six text files: three containing the tweet text for the training, validation, and test splits, and three containing the corresponding labels. Each tweet was annotated by an expert using a binary annotation guideline, where 0 denotes non-hate and 1 denotes hate. The public repository also includes a separate mapping file that defines the label interpretations. First, the data splits were combined into a single CSV file with two columns: the text and its binary label. After the combination, the hate speech samples were extracted and reviewed to remove incorrect and inconsistent labels.
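The merge step above can be sketched as follows, assuming each split is a pair of parallel files (one tweet per line, one 0/1 label per line) as described; the `merge_splits` helper and the toy data are illustrative, not code or file names from the HatEval2019 release.

```python
import csv, os, tempfile

def merge_splits(split_pairs, out_path):
    """split_pairs: iterable of (text_lines, label_lines), one pair per
    split. Writes a two-column CSV (text, label) and returns the number
    of data rows written."""
    rows = 0
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["text", "label"])
        for texts, labels in split_pairs:
            for text, label in zip(texts, labels):
                writer.writerow([text.strip(), int(label)])
                rows += 1
    return rows

# Toy demonstration with two tiny "splits" (placeholder tweets).
train = (["tweet one", "tweet two"], ["1", "0"])
test_split = (["tweet three"], ["1"])
out_file = os.path.join(tempfile.mkdtemp(), "combined.csv")
n_rows = merge_splits([train, test_split], out_file)
```

Filtering to hate-only samples then amounts to keeping rows whose label equals 1 before the manual review pass.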

During the filtering process, only samples labeled as hate speech were retained for the multiclass annotation process. Each extracted sample was manually reviewed by annotators to identify mislabeling, that is, the absence of hateful content. Any sample that did not include explicit hate speech, implicit hate speech, abusive content, or discriminatory content under standard hate speech definitions in prior literature was removed. This process removed seven samples, corresponding to 0.13% of the initially labeled hate speech samples. Table 4 presents examples of tweets initially labeled as hate speech that were removed during the cleaning and validation process.

Table 4

Misclassified hate speech instances in the dataset.

REMOVED HATE SPEECH SAMPLES
Well that’s just great! @user @user
@user Simply put
@user @user @user @user @user @user @user @user @user @user @user @user @user @user @user @user @user @user @user

(3.2) Annotation Labels

The annotation framework uses a target-specific multiclass label set to capture forms of hate speech that are frequently collapsed into broad categories in existing datasets. Although some tweets may reference multiple social groups, the annotation framework adopts a multi-class rather than multi-label structure. This design decision was made to maintain annotation consistency and reduce cognitive load for annotators. Multi-label annotation often introduces additional ambiguity, as annotators may disagree not only on the label itself but also on the number of applicable labels. To address cases where multiple groups are mentioned, annotators were instructed to assign the label corresponding to the group most directly targeted by the hostile expression. This dominant-target principle is commonly used in hate speech annotation studies and helps preserve clearer category boundaries while maintaining acceptable inter-annotator agreement.

The annotation scheme comprises seven primary categories extracted from definitions of hate speech (Yu et al., 2025; Alkomah & Ma, 2022; Warner & Hirschberg, 2012). Gender-based hate speech includes misogyny (Yu et al., 2025; Warner & Hirschberg, 2012) and misandry (Papcunová et al., 2023). Immigration and xenophobic hate speech includes anti-immigrant hate speech (Yu et al., 2025; Warner & Hirschberg, 2012), anti-refugee hate speech (Yu et al., 2025), and xenophobia (Papcunová et al., 2023; Waseem & Hovy, 2016). Religious hate speech includes Islamophobia (Yu et al., 2025; Warner & Hirschberg, 2012) and anti-Christian hate speech (Yu et al., 2025). Racial and ethnic hate speech includes anti-Black (Yu et al., 2025; Warner & Hirschberg, 2012), anti-Hispanic (Lantz & Faulkner, 2025), anti-Semitic (Yu et al., 2025; Warner & Hirschberg, 2012), and anti-Asian (Yu et al., 2025; Warner & Hirschberg, 2012) hate speech. Profanity and general abuse (Yu et al., 2025), threats and violence, and hate speech toward countries are treated as separate categories (Papcunová et al., 2023). The inclusion of “profanity and general abuse” and “threats and violence” aims to differentiate hate speech from other forms of harmful language that are conflated in prior research (Yu et al., 2025). This structure enables analysis of online hate discourse and supports the study of minority subgroups that are often underrepresented in generic classification schemes.
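For programmatic reuse, the two-level scheme described above can be written down as a plain mapping from main categories to subcategories. The dictionary itself is our illustration (its name and shape are not part of the released file); the label strings follow Table 2.

```python
# Two-level label scheme from Section 3.2: 7 main categories, 14 classes.
# The three standalone categories have no finer subcategory, so they map
# to themselves to keep the flat 14-class label set recoverable.
TAXONOMY = {
    "Gender-Based Hate Speech": ["Misogyny", "Misandry"],
    "Immigration and Xenophobic Hate Speech": [
        "Anti-Immigrant Hate Speech", "Anti-Refugee Hate Speech", "Xenophobia"],
    "Religious Hate Speech": ["Islamophobia", "Anti-Christian Hate Speech"],
    "Racial and Ethnic Hate Speech": [
        "Anti-Black Hate Speech", "Anti-Hispanic Hate Speech",
        "Anti-Semitic Hate Speech", "Anti-Asian Hate Speech"],
    "Profanity and General Abuse": ["Profanity and General Abuse"],
    "Threats and Violence": ["Threats and Violence"],
    "Hate Speech toward Countries": ["Hate Speech toward Countries"],
}

n_main = len(TAXONOMY)                          # 7 primary categories
n_classes = sum(len(v) for v in TAXONOMY.values())  # 14 flat classes
```

Such a mapping makes it easy to aggregate fine-grained predictions back to main categories when comparing against datasets that only use broad labels.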

(3.3) Annotation Guideline

To minimize subjectivity, bias, and inconsistencies during the annotation process, all data samples were provided to annotators independently, ensuring that individual judgments were not influenced by group discussions or peer opinions. Each annotator followed a unified and carefully designed set of annotation guidelines to promote consistency, reliability, and reproducibility across the labeling process. These guidelines were developed based on established hate speech taxonomies and were refined through multiple pilot annotation rounds and expert feedback.

The annotation manual included clear definitions, illustrative examples, and boundary cases for each category to facilitate accurate interpretation and reduce semantic ambiguity. Annotator training emphasized the distinction between hate speech, non-hate offensive language, and neutral content to ensure that general insults or profanity were not incorrectly labeled as identity-targeted hate speech. In addition, annotators were trained to differentiate between explicit and implicit expressions of hate speech. Explicit hate speech refers to direct hostile statements toward a protected group, often using slurs or clearly derogatory language, whereas implicit hate speech may appear through sarcasm, stereotypes, exclusionary narratives, or coded language. Recognizing these patterns helped annotators make consistent labeling decisions when encountering ambiguous tweets.

The final annotation scheme consisted of the following categories:

  • Gender-Based Hate Speech (Misogyny): Hate speech directed at women and girls, frequently involving sexual objectification, derogatory remarks about appearance, slut-shaming, victim-blaming, and stereotypes that reinforce gender inequality.

  • Gender-Based Hate Speech (Misandry): Hate speech directed at men and boys, commonly characterized by negative generalizations about masculinity, accusations of inherent aggression or incompetence, and derogatory references to traditional or non-traditional gender roles.

  • Racial and Ethnic Hate Speech (Anti-Black Hate Speech): Hate speech targeting Black individuals or communities, including racial slurs, demeaning stereotypes, historical dehumanization narratives, and discriminatory rhetoric.

  • Racial and Ethnic Hate Speech (Anti-Hispanic Hate Speech): Hate speech directed at Hispanic individuals or communities, often involving ethnic slurs, xenophobic narratives, and derogatory assumptions related to language, culture, or immigration status.

  • Racial and Ethnic Hate Speech (Anti-Asian Hate Speech): Hate speech targeting Asian individuals, including racialized insults, stereotyping related to physical features, cultural practices, or nationality, and narratives promoting exclusion or blame.

  • Racial and Ethnic Hate Speech (Anti-Semitic Hate Speech): Hate speech directed at Jewish individuals or communities, including discrimination against and dehumanization of these groups.

  • Immigration and Xenophobic Hate Speech (Anti-Immigrant Hate Speech): Hate speech aimed at immigrants, frequently portraying them as criminals, social burdens, or threats to national identity and public safety.

  • Immigration and Xenophobic Hate Speech (Anti-Refugee Hate Speech): Hate speech targeting refugees, often characterized by dehumanizing metaphors, fear-inducing narratives, and rhetoric that questions their legitimacy or right to protection.

  • Immigration and Xenophobic Hate Speech (Xenophobia): Hate speech directed at foreigners or individuals perceived as outsiders, including expressions of exclusion, cultural superiority, and hostility toward perceived non-native populations.

  • Religious Hate Speech (Islamophobia): Hate speech targeting Muslims, commonly involving associations with extremism, terrorism, or violence, as well as negative portrayals of religious practices and symbols.

  • Religious Hate Speech (Anti-Christian Hate Speech): Hate speech directed at Christians, often expressed through mockery.

  • Profanity and General Abuse: Abusive language and offensive expressions that include profanity, insults, or harassment not explicitly directed at a protected group but intended to demean, intimidate, or humiliate individuals.

  • Threats and Violence: Content involving explicit or implicit threats, incitement to violence, glorification of harm, or calls for physical aggression against individuals or groups.

  • Hate speech toward Countries: Hate speech that involves direct or indirect insults, derogatory characterizations, or hostile generalizations targeting specific countries or national identities.

When a data instance referenced multiple protected groups, annotators were instructed to identify the primary target of the hateful expression. The primary target was defined as the group most directly affected by hate speech, as indicated by the main semantic focus of the message. In ambiguous cases, annotators were advised to consult the contextual cues and prioritize the dominant discriminatory intent.

(3.4) Manual Annotation Process

Three independent annotators manually annotated the dataset, each labeling samples without access to the other annotators’ decisions. The final label of each sample was determined by majority vote. Because the annotation involves many possible labels, disagreements can arise; these were resolved by a fourth independent annotator who labeled only the disputed samples without knowing the labels assigned by the other annotators.
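The aggregation rule described above (majority vote over three annotators, with disputes deferred to a fourth annotator) can be sketched as follows; `final_label` is a hypothetical helper, not tooling used by the authors.

```python
from collections import Counter

def final_label(votes, tiebreak=None):
    """votes: the three independent annotators' labels for one sample.
    tiebreak: the fourth annotator's label, consulted only when no
    label reaches a strict majority of the votes."""
    label, top = Counter(votes).most_common(1)[0]
    return label if top > len(votes) // 2 else tiebreak

# Two agreeing annotators outvote the third.
agreed = final_label(["Misogyny", "Misogyny", "Xenophobia"])
# Three-way disagreement defers to the fourth annotator.
disputed = final_label(["Misogyny", "Xenophobia", "Islamophobia"],
                       tiebreak="Xenophobia")
```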

(3.5) Quality control and Annotation Reliability

Because the dataset was labeled by independent annotators, some instances showed disagreement. As a spot check, a data scientist randomly selected samples from the data and labeled them without knowing the final labels; these labels matched the dataset’s final labels, supporting the quality of the annotations. To further enhance annotation quality, periodic inter-annotator agreement analyses were conducted. The final annotation reliability over the whole dataset (5,455 records) was measured using Krippendorff’s alpha (Krippendorff, 2022), chosen because it handles agreement among multiple annotators. This process yielded an alpha of 0.6003, indicating moderate agreement (Landis & Koch, 1977), a reasonable level for multiclass annotation: the dataset comprises 14 classes, several of which are underrepresented, which increases annotation difficulty and the risk of disagreement.
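For readers who wish to reproduce the reliability check, a minimal stdlib-only implementation of Krippendorff's alpha for nominal data (the variant appropriate for categorical labels) might look like the sketch below. It follows the standard coincidence-matrix formulation; it is our sketch, not the authors' code, and the toy inputs are invented.

```python
from collections import Counter

def krippendorff_alpha_nominal(units):
    """units: one list of labels per item (one label per annotator);
    items with fewer than two labels are skipped. Returns alpha for
    the nominal (categorical) difference function."""
    units = [u for u in units if len(u) >= 2]
    o = Counter()  # coincidence counts per ordered label pair
    for u in units:
        m = len(u)
        cnt = Counter(u)
        for c in cnt:
            for k in cnt:
                pairs = cnt[c] * (cnt[k] - 1) if c == k else cnt[c] * cnt[k]
                o[(c, k)] += pairs / (m - 1)
    n_c = Counter()  # marginal coincidences per label
    for (c, _), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())
    d_o = sum(v for (c, k), v in o.items() if c != k)   # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    # alpha is undefined when only one label occurs; return 1.0 by convention.
    return 1.0 if d_e == 0 else 1 - d_o / d_e

perfect = krippendorff_alpha_nominal([["a", "a", "a"], ["b", "b", "b"]])
mixed = krippendorff_alpha_nominal([["a", "a", "b"], ["a", "b", "b"],
                                    ["a", "a", "a"]])
```

On real data one would typically use an established implementation and verify it against published worked examples before reporting results.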

(3.6) Ethical Considerations

Ethical concerns were taken into consideration throughout the annotation process. Annotators were informed about the nature of the data and trained to handle harmful or sensitive content safely. The dataset does not contain any identifiable information, as the @user token masks usernames. This anonymization hides the identities of targeted individuals and protects them from being identified. The dataset is intended for research purposes only, to improve hate speech analysis, and is shared publicly without identifiable information to support responsible reuse.

(4) Results and discussion

(4.1) Illustrative Examples from the Dataset

Table 5 presents examples that illustrate how the data appears. The table includes a sample from each class, with profanity and explicit slurs masked. The released dataset preserves the unaltered text to support accurate model training and analysis. The hate speech data includes multiple slurs and profanities. Some tweets include hashtags related to immigrants, but the hate speech itself is directed toward a more specific subcategory, such as Black people or Muslims. Similarly, some texts include derogatory terms for women even though the hate speech is directed toward a subcategory other than women, as in the example for “Religious Hate Speech (Anti-Christian Hate Speech)”.

Table 5

Examples from the dataset.

| TEXT | LABEL |
| --- | --- |
| No matter what color a girl is she still a [slur: derogatory term for women] | Gender-Based Hate Speech (Misogyny) |
| @user @user thiers none because not all men rape | Gender-Based Hate Speech (Misandry) |
| Illegal Alien Hit Man, Dragged Him For Blocks In Deadly DUI Crash #BuildTheWall #NoAmnesty #NoDACA #DeportThemAll #KeepAmericaSafe | Immigration and Xenophobic Hate Speech (Anti-Immigrant Hate Speech) |
| @user @user real talk do you have eyes or were they gouged out by a [hate slur: derogatory anti-refugee term]? | Immigration and Xenophobic Hate Speech (Anti-Refugee Hate Speech) |
| to hell with the #Tories. #SendThemHome | Immigration and Xenophobic Hate Speech (Xenophobia) |
| Victory for Trump ! But only 6 islamic countries now are stopped for immigration into USA ! There are 34 islamic countries more. To beware the USA from terrorists these also has to get banned. | Religious Hate Speech (Islamophobia) |
| God is dead, Jesus is a [profanity] and the Virgin Mary was a [slur: derogatory term for women]. | Religious Hate Speech (Anti-Christian Hate Speech) |
| #sendthemback if your children want a peacefull life, without [sexual violence reference] on every corner.#forza. #Italia ##NoMoreAfricans !! | Racial and Ethnic Hate Speech (Anti-Black Hate Speech) |
| Mexico’s not sending their best. They’re dumping their killers aka garbage on us. #StopTheInvasion #DeportThemAll #NoAmnesty #BuildTheWall | Racial and Ethnic Hate Speech (Anti-Hispanic Hate Speech) |
| Let’s me honest, Jews offer things and the white women are just [slur: sexualized derogatory term]. Muslims [sexual violence reference] rape and beat white women. | Racial and Ethnic Hate Speech (Anti-Semitic Hate Speech) |
| @user <– another son of a dirty [profanity] Korean [ethnic slur] [slur] | Racial and Ethnic Hate Speech (Anti-Asian Hate Speech) |
| Your [slur: derogatory insult] ass disgusts me so much | Profanity and General Abuse |
| @user @user lol. you selfish terrorist [profanity] should be boiled alive. Maybe lynched by neonazis. Would serve you right. | Threats and Violence |
| @user Bloody Germany who needs Germany we dont want their Visa plans were are fed up of being over ran by migrants no Uk Jobs threaten | Hate Speech toward Countries |

(4.2) Corpus Statistics Across Hate Speech Categories

Table 6 presents descriptive corpus statistics for each hate speech category, including total word count, maximum text length, and vocabulary size (unique words). These statistics provide insights into the distribution and linguistic diversity of the dataset across subcategories and reveal clear variations in word count across classes, which is relevant for classification model development. Anti-Immigrant Hate Speech reports the highest totals, with a word count of 57,357 (maximum text length 88) and 8,905 unique words; this class therefore exhibits greater linguistic richness, with longer, more detailed texts. In contrast, less frequent subcategories such as Anti-Asian and Anti-Christian Hate Speech contain fewer words and a smaller lexicon, reflecting their underrepresentation. Relying on broad categories may thus prevent models from learning the specific features of these subcategories, leaving hate speech against these subgroups online and their targets unprotected.
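Statistics of the kind reported in Table 6 can be recomputed from the released file along the following lines. The tokenizer here (lowercased runs of word characters) is an assumption, since the paper does not state its exact tokenization, so absolute numbers may differ slightly from the published ones; the toy inputs are placeholders.

```python
import re
from collections import defaultdict

def lexical_stats(samples):
    """samples: (text, label) pairs. Returns, per label, a tuple of
    (total words, maximum text length in words, unique words).
    Tokenization: lowercased runs of word characters (an assumption)."""
    acc = defaultdict(lambda: [0, 0, set()])
    for text, label in samples:
        tokens = re.findall(r"\w+", text.lower())
        entry = acc[label]
        entry[0] += len(tokens)                 # total word count
        entry[1] = max(entry[1], len(tokens))   # longest text so far
        entry[2].update(tokens)                 # vocabulary
    return {lab: (t, m, len(u)) for lab, (t, m, u) in acc.items()}

toy = [("bad bad tweet", "X"), ("another bad tweet here", "X")]
stats = lexical_stats(toy)
```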

Table 6

Lexical statistics per class.

| LABEL | TOTAL WORDS | MAXIMUM TEXT LENGTH | TOTAL UNIQUE WORDS |
| --- | --- | --- | --- |
| Gender-Based Hate Speech (Misogyny) | 37,184 | 57 | 5,964 |
| Gender-Based Hate Speech (Misandry) | 594 | 93 | 309 |
| Immigration and Xenophobic Hate Speech (Anti-Immigrant Hate Speech) | 57,357 | 88 | 8,905 |
| Immigration and Xenophobic Hate Speech (Anti-Refugee Hate Speech) | 10,362 | 59 | 2,906 |
| Immigration and Xenophobic Hate Speech (Xenophobia) | 1,350 | 54 | 634 |
| Religious Hate Speech (Islamophobia) | 4,898 | 69 | 1,678 |
| Religious Hate Speech (Anti-Christian Hate Speech) | 78 | 38 | 64 |
| Racial and Ethnic Hate Speech (Anti-Black Hate Speech) | 2,677 | 54 | 1,050 |
| Racial and Ethnic Hate Speech (Anti-Hispanic Hate Speech) | 441 | 50 | 260 |
| Racial and Ethnic Hate Speech (Anti-Semitic Hate Speech) | 88 | 34 | 73 |
| Racial and Ethnic Hate Speech (Anti-Asian Hate Speech) | 42 | 32 | 40 |
| Profanity and General Abuse | 12,767 | 70 | 3,181 |
| Threats and Violence | 2,419 | 64 | 995 |
| Hate Speech toward Countries | 664 | 62 | 396 |

(5) Implications and Applications

The availability of subgroup-level hate speech annotations enables more detailed documentation of hateful discourse in digital environments. The framework and annotated data distinguished between broad target categories and specific subgroups, enabling researchers to examine how identity, power, and social boundaries are constructed and contested through language. This level of granularity facilitates research on discourse patterns, processes of marginalization, and the differential visibility of minority groups in online communication.

For researchers in the digital humanities, sociolinguistics, media studies, and critical data studies, the resulting dataset provides a structured resource for examining how hostility varies across gender, race, religion, nationality, and migration status. It enables comparative and longitudinal analyses, cross-cultural investigations, and qualitative-quantitative mixed-methods research on the circulation and normalization of harmful speech. The explicit documentation of annotation guidelines and interpretive criteria further supports transparent reuse and methodological reflection.

The dataset may also serve as a reference resource for developing and evaluating computational tools for multiclass hate speech analysis. In this context, its fine-grained structure allows researchers to examine classification biases, category overlap, and sources of misinterpretation, contributing to more robust language understanding. Rather than conflating general insults with targeted hostility, the resulting dataset encourages more context-sensitive approaches to automated analysis.

Beyond academic research, the resulting dataset can inform policy-oriented studies, civil society initiatives, and regulatory discussions concerned with online safety and digital inclusion. The dataset makes patterns of subgroup-targeted hate speech more visible. This supports evidence-based dialogue on platform governance, moderation practices, and the protection of vulnerable communities. However, the study remains attentive to ethical risks arising from automated monitoring and data-driven decision-making.

Despite its contributions, the dataset has several limitations. First, certain subcategories remain underrepresented, reflecting broader structural imbalances in available data. Expanding the dataset with additional sources and platforms would improve its representativeness and analytical scope. Second, incorporating material from different time periods and cultural contexts would enable more robust historical and comparative analysis. Future work may therefore focus on collecting a more diverse dataset and applying the presented annotation framework to extend the dataset while preserving transparency, ethical oversight, and documentation standards.

Data Accessibility Statement

The dataset of this paper is openly available via the Figshare platform: https://doi.org/10.6084/m9.figshare.31292419.

Author Contributions

Sanaa Kaddoura: Conceptualization, Data curation, Methodology, Project administration, Software, Resources, Validation, Writing – original draft, Writing – review and editing.

Sumaia Al-Kohlani: Conceptualization, Data curation, Methodology, Project administration, Validation, Writing – original draft, Writing – review and editing.

DOI: https://doi.org/10.5334/johd.521 | Journal eISSN: 2059-481X
Language: English
Submitted on: Feb 11, 2026
Accepted on: Mar 30, 2026
Published on: Apr 27, 2026
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Sanaa Kaddoura, Sumaia Al-Kohlani, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.