pair. We tested different templates and chose the ones that resulted in the highest classification accuracy. Each prompt was designed such that the model would answer yes or no. The prompt templates are provided in our released code repository: https://anonymous.4open.science/r/explainable_transformer-D693

Table 4: Classification performance of the models across the datasets. Except for Llama-3.2 1B on the sentiment classification datasets, where it always predicts positive sentiment, the models perform well across all tasks.

Dataset     Model         Accuracy  Precision  Recall   F1     Specificity
HateXplain  Gemma-2 2B    57.49     50.39       98.82   66.74  26.09
            Gemma-2 9B    65.12     55.31      100.00   71.22  38.62
            Llama-3.2 1B  50.51     46.14       87.54   60.43  22.38
            Llama-3.2 3B  63.74     54.65       93.94   69.10  40.79
            Llama-3.1 8B  54.94     48.93      100.00   65.71  20.72
            Qwen2 1.5B    54.36     48.55       95.79   64.44  22.89
            Qwen2 3B      69.62     59.50       92.76   72.50  52.05
Movie       Gemma-2 2B    82.41    100.00       64.65   78.53  100.00
            Gemma-2 9B    95.48     98.91       91.92   95.29  99.00
            Llama-3.2 1B  49.75     49.75      100.00   66.44  0.00
            Llama-3.2 3B  92.46     87.50       98.99   92.89  86.00
            Llama-3.1 8B  88.44     81.67       98.99   89.50  78.00
            Qwen2 1.5B    85.93     80.34       94.95   87.04  77.00
            Qwen2 3B      85.93     97.33       73.74   83.91  98.00
Twitter     Gemma-2 2B    84.28     89.69       79.11   84.07  89.98
            Gemma-2 9B    88.11     89.37       87.77   88.56  88.48
            Llama-3.2 1B  52.69     52.58       99.83   68.88  0.71
            Llama-3.2 3B  84.06     91.34       76.91   83.50  91.95
            Llama-3.1 8B  87.06     85.43       90.81   88.04  82.92
            Qwen2 1.5B    77.79     73.92       89.06   80.79  65.35
            Qwen2 3B      81.82     93.15       70.53   80.28  94.28
BoolQ       Gemma-2 2B    78.39     81.82       85.04   83.40  66.67
            Gemma-2 9B    82.41     87.10       85.04   86.06  77.78
            Llama-3.2 1B  65.33     69.86       80.31   74.73  38.89
            Llama-3.2 3B  67.84     87.95       57.48   69.52  86.11
            Llama-3.1 8B  76.88     89.32       72.44   80.00  84.72
            Qwen2 1.5B    66.33     68.29       88.19   76.98  27.78
            Qwen2 3B      64.32     87.84       51.18   64.68  87.50
FEVER       Gemma-2 2B    81.41     85.19       75.70   80.17  87.04
            Gemma-2 9B    93.41     91.59       95.48   93.49  91.36
            Llama-3.2 1B  62.22     61.22       65.12   63.11  59.36
            Llama-3.2 3B  92.31     94.08       90.17   92.09  94.41
            Llama-3.1 8B  94.37     94.98       93.60   94.29  95.13
            Qwen2 1.5B    92.10     94.09       89.71   91.85  94.44
            Qwen2 3B      89.30     96.92       81.01   88.25  97.47
SciFact     Gemma-2 2B    78.95     80.13       90.58   85.03  56.34
            Gemma-2 9B    86.60     92.31       86.96   89.55  85.92
            Llama-3.2 1B  49.76     77.97       33.33   46.70  81.69
            Llama-3.2 3B  82.78     91.13       81.88   86.26  84.51
            Llama-3.1 8B  83.73     94.83       79.71   86.61  91.55
            Qwen2 1.5B    79.43     86.82       81.16   83.90  76.06
            Qwen2 3B      77.03     97.87       66.67   79.31  97.18

9.7 TSG at different temperatures

In Table 5, we present the GIM results for different temperatures used in TSG. Note that we ran this experiment after having selected the temperature of 2 in the main paper. Each model-dataset pair has a sweet spot; increasing to very high temperatures degrades performance.

9.8 Circuit identification method comparison

In Figure 5, Figure 6, and Figure 7, we present the circuit identification method comparison for LLAMA-3.2 1B, Gemma-2 2B, and LLAMA-3.2 3B across all datasets. We see similar results as in the main paper: GIM substantially improves the faithfulness in the earliest layers. Computing Integrated Gradients per layer takes several days per dataset for LLAMA-3.2 3B and Gemma-2 2B on an A100 GPU. We are still computing the Integrated Gradients results for some of the datasets and will add them to the figures in the camera-ready version.

https://arxiv.org/abs/2505.17630v1

Figure 5: Faithfulness per layer for LLAMA-3.2 1B on (a) Twitter, (b) Hatexplain, and (c) Scifact. Each panel compares Attribution Patching, Integrated Gradients, ATP*, and GIM. The top row depicts comprehensiveness per layer, where higher is better. The bottom row depicts sufficiency, where lower is better.

9.9 Self-repair results

In Figure 8, Figure 9, Figure 10, and Figure 11, we present additional empirical evidence for self-repair and for TSG increasing the attributions when it occurs. The figures show similar results as in the main paper. We plotted all attention scores with β = 1. For models such as Qwen, the gradients are generally very large, which results in many points being plotted. For Qwen-2.5 3B, the number of dots crashed our experiments. We did not have time to rerun the experiments with a higher β before the submission deadline.

Table 5: GIM when using different temperatures for TSG.
                  Comprehensiveness ↑                    Sufficiency ↓
Dataset     Temp  Gemma  Llama  Llama  Qwen   Qwen      Gemma  Llama  Llama  Qwen   Qwen
                  2B     1B     3B     1.5B   3B        2B     1B     3B     1.5B   3B
BoolQ       1     0.53   0.68   0.72   0.69   0.66      0.07   0.28   0.12   0.12   0.29
            1.5   0.57   0.68   0.72   0.71   0.64      0.05   0.23   0.10   0.17   0.22
            2     0.59   0.69   0.72   0.68   0.61      0.03   0.22   0.10   0.09   0.23
            2.5   0.57   0.68   0.72   0.71   0.64      0.05   0.23   0.10   0.17   0.22
            3     0.60   0.69   0.70   0.67   0.61      0.04   0.21   0.12   0.10   0.25
            5     0.60   0.69   0.66   0.73   0.59      0.05   0.20   0.13   0.23   0.24
            10    0.60   0.69   0.65   0.62   0.62      0.06   0.20   0.12   0.22   0.23
            100   0.61   0.69   0.65   0.66   0.63      0.06   0.21   0.13   0.22   0.26
FEVER       1     0.39   0.60   0.76   0.62   0.66     -0.03   0.29   0.39   0.25   0.46
            1.5   0.39   0.61   0.75   0.64   0.67      0.03   0.26   0.38   0.35   0.43
            2     0.42   0.62   0.75   0.51   0.68     -0.01   0.26   0.39   0.31   0.41
            2.5   0.39   0.61   0.75   0.64   0.67      0.03   0.26   0.38   0.35   0.43
            3     0.43   0.62   0.74   0.48   0.63     -0.13   0.27   0.40   0.33   0.42
            5     0.42   0.61   0.74   0.55   0.63     -0.13   0.27   0.41   0.48   0.44
            10    0.42   0.60   0.75   0.57   0.65     -0.13   0.27   0.41   0.48   0.46
            100   0.42   0.60   0.75   0.57   0.67     -0.12   0.27   0.41   0.47   0.48
HateXplain  1     0.63   0.67   0.64   0.85   0.50      0.31   0.37   0.52   0.28   0.91
            1.5   0.64   0.68   0.65   0.83   0.62      0.23   0.28   0.48   0.43   0.59
            2     0.64   0.68   0.64   0.80   0.63      0.26   0.26   0.46   0.35   0.51
            2.5   0.64   0.68   0.65   0.83   0.62      0.23   0.28   0.48   0.43   0.59
            3     0.61   0.68   0.59   0.77   0.56      0.29   0.22   0.49   0.36   0.46
            5     0.60   0.68   0.54   0.82   0.58      0.31   0.20   0.51   0.48   0.45
            10    0.61   0.68   0.56   0.77   0.68      0.33   0.21   0.52   0.52   0.45
            100   0.61   0.68   0.58   0.93   0.76      0.33   0.24   0.52   0.51   0.45
Movie       1     0.55   0.68   0.78   0.82   0.73      0.16   0.26   0.31   0.13   0.45
            1.5   0.58   0.70   0.78   0.84   0.75      0.12   0.18   0.28   0.23   0.38
            2     0.60   0.71   0.78   0.83   0.76      0.11   0.19   0.30   0.17   0.31
            2.5   0.58   0.70   0.78   0.84   0.75      0.12   0.18   0.28   0.23   0.38
            3     0.60   0.71   0.78   0.80   0.71      0.11   0.20   0.30   0.16   0.32
            5     0.60   0.71   0.76   0.82   0.72      0.11   0.21   0.32   0.28   0.31
            10    0.60   0.71   0.75   0.74   0.73      0.11   0.22   0.33   0.28   0.32
            100   0.61   0.71   0.75   0.74   0.70      0.14   0.23   0.34   0.28   0.42
SciFact     1     0.60   0.64   0.72   0.66   0.72      0.05   0.25   0.30   0.20   0.40
            1.5   0.58   0.66   0.73   0.70   0.73      0.06   0.22   0.30   0.34   0.38
            2     0.57   0.67   0.74   0.62   0.74      0.04   0.22   0.31   0.26   0.38
            2.5   0.58   0.66   0.73   0.70   0.73      0.06   0.22   0.30   0.34   0.38
            3     0.56   0.66   0.73   0.55   0.68      0.01   0.24   0.33   0.27   0.41
            5     0.54   0.64   0.72   0.61   0.67      0.01   0.24   0.35   0.46   0.41
            10    0.54   0.63   0.72   0.61   0.69      0.02   0.24   0.36   0.47   0.41
            100   0.55   0.63   0.73   0.63   0.70      0.04   0.24   0.36   0.46   0.43
Twitter     1     0.52   0.64   0.73   0.78   0.65      0.32   0.43   0.60   0.47   0.66
            1.5   0.52   0.66   0.73   0.81   0.71      0.33   0.42   0.61   0.58   0.59
            2     0.52   0.68   0.72   0.76   0.72      0.34   0.41   0.61   0.55   0.58
            2.5   0.52   0.66   0.73   0.81   0.71      0.33   0.42   0.61   0.58   0.59
            3     0.52   0.68   0.69   0.76   0.70      0.35   0.42   0.63   0.56   0.58
            5     0.52   0.67   0.69   0.79   0.69      0.35   0.43   0.64   0.60   0.58
            10    0.53   0.66   0.69   0.67   0.69      0.35   0.44   0.65   0.63   0.57
            100   0.54   0.65   0.69   0.67   0.72      0.35   0.45   0.65   0.63   0.58

Figure 6: Faithfulness per layer for Gemma-2 2B on (a) BoolQ, (b) Fever, (c) Movie review, (d) Twitter, (e) Hatexplain, (f) Scifact. Each panel compares Attribution Patching, Integrated Gradients, ATP*, and GIM. The top row depicts comprehensiveness per layer, where higher is better. The bottom row depicts sufficiency, where lower is better.

Figure 7: Faithfulness per layer for LLAMA-3.2 3B on (a) BoolQ, (b) Fever, (c) Movie review, (d) Twitter, (e) Hatexplain, (f) Scifact. Each panel compares Attribution Patching, Integrated Gradients, ATP*, and GIM. The top row depicts comprehensiveness per layer, where higher is better. The bottom row depicts sufficiency, where lower is better.

Figure 8: Self-repair for Gemma-2 2B and how TSG increases the attributions for the attention scores with the strongest self-repair effects. Panels: (a) Twitter, (b) Hatexplain, (c) Scifact, (d) BoolQ, (e) Movie, (f) Fever.

Figure 9: Self-repair for LLAMA-3.2 1B and how TSG increases the attributions for the attention scores with the strongest self-repair effects. Panels: (a) Twitter, (b) Hatexplain, (c) Scifact, (d) BoolQ, (e) Movie, (f) Fever.

Figure 10: Self-repair for LLAMA-3.2 3B and how TSG increases the attributions for the attention scores with the strongest self-repair effects. Panels: (a) Twitter, (b) Hatexplain, (c) Scifact, (d) BoolQ, (e) Movie, (f) Fever.

Figure 11: Self-repair for Qwen-2.5 1.5B and how TSG increases the attributions for the attention scores with the strongest self-repair effects. Panels: (a) Twitter, (b) Hatexplain, (c) Scifact, (d) BoolQ, (e) Movie, (f) Fever.
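For reference, the comprehensiveness and sufficiency metrics reported in Figures 5-7 can be sketched in a few lines of Python. This is a minimal illustration assuming the standard perturbation-based definitions (comprehensiveness: drop in the predicted class probability when the top-attributed tokens are removed; sufficiency: drop when only those tokens are kept); `model_prob` is a toy stand-in, not one of the models evaluated in this paper.

```python
def model_prob(tokens):
    # Toy "classifier": probability rises with the number of positive cue words.
    # Purely hypothetical, used only to make the metric definitions concrete.
    cues = {"great", "excellent", "loved"}
    hits = sum(1 for t in tokens if t in cues)
    return min(0.5 + 0.15 * hits, 1.0)

def top_k_indices(attributions, k):
    # Indices of the k tokens with the largest attribution scores.
    return set(sorted(range(len(attributions)),
                      key=lambda i: attributions[i], reverse=True)[:k])

def comprehensiveness(tokens, attributions, k):
    # Probability drop when the top-k attributed tokens are REMOVED (higher is better).
    top = top_k_indices(attributions, k)
    kept = [t for i, t in enumerate(tokens) if i not in top]
    return model_prob(tokens) - model_prob(kept)

def sufficiency(tokens, attributions, k):
    # Probability drop when ONLY the top-k attributed tokens are kept (lower is better).
    top = top_k_indices(attributions, k)
    only = [t for i, t in enumerate(tokens) if i in top]
    return model_prob(tokens) - model_prob(only)

tokens = ["the", "movie", "was", "great", "and", "excellent"]
attrs  = [0.0,   0.1,     0.0,   0.9,    0.0,   0.8]
comp = comprehensiveness(tokens, attrs, k=2)  # removes "great", "excellent"
suff = sufficiency(tokens, attrs, k=2)        # keeps only "great", "excellent"
```

A faithful attribution concentrates mass on the tokens the model actually relies on, so removing them causes a large drop (high comprehensiveness) while keeping only them causes almost none (low sufficiency).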
David C. Wyld et al. (Eds): Signal, CDKP, AdNLP, MLT, SAI – 2025, pp. 27-39, 2025. CS & IT - CSCP 2025. DOI: 10.5121/csit.2025.150903

SURFACING SEMANTIC ORTHOGONALITY ACROSS MODEL SAFETY BENCHMARKS: A MULTI-DIMENSIONAL ANALYSIS

Jonathan Bennion 1, Shaona Ghosh 2, Mantek Singh 3, Nouha Dziri 4

1 The Objective AI, USA
2 Nvidia, USA
3 Google, USA
4 Allen Institute for AI (AI2), USA

ABSTRACT

Various AI safety datasets have been developed to measure LLMs against evolving interpretations of harm. Our evaluation of five recently published open-source safety benchmarks reveals distinct semantic clusters using UMAP dimensionality reduction and k-means clustering (silhouette score: 0.470). We identify six primary harm categories with varying benchmark representation. GretelAI, for example, focuses heavily on privacy concerns, while WildGuardMix emphasizes self-harm scenarios. Significant differences in prompt length distribution suggest confounds in data collection and in interpretations of harm, while also offering possible context. Our analysis quantifies orthogonality among AI safety benchmarks, allowing for transparency about coverage gaps despite topical similarities. Our quantitative framework for analyzing semantic orthogonality across safety benchmarks enables more targeted development of datasets that comprehensively address the evolving landscape of harms in AI use, however that is defined in the future.

KEYWORDS

AI benchmark meta-analysis, LLM embeddings, Dimensionality reduction, K-means clustering, AI safety

1. INTRODUCTION

Differing safety interpretations throughout AI history have led to varied safety evaluation frameworks that aim to reduce instances of undesired behavior in language models globally [1], and since these definitions differ across cultures and over time, they become collectively opaque [2].
Early AI safety datasets focused on bias evaluation [3], but recent attention has shifted to adversarial robustness and preventing LLM jailbreaks [4]. While general LLM evaluations emphasize depth of intelligence against pre-determined problems to solve [2], that depth has allowed ample bias to affect the results [5]. Our breadth-based analysis of safety-focused benchmarks reveals coverage gaps and redundancies, highlighting a wider range of failure modes that is still underexplored [6], and improves objectivity in comparing benchmark themes [7]. Using a clustering methodology that maximizes distances between similar semantic meanings based on their encoded numerical values from an embedding model [8], this systematic analysis promotes transparency while reducing the misrepresentation of AI capability improvements known as "safetywashing" [2]. Since these datasets are critical for use as ground truth to measure how any language model may allow for harmful acts, such as content that exacerbates self-harm online [9], the similarities and differences allow gaps to be addressed in research for future benchmarks.

2. RELATED WORKS

The safety benchmarks used in this comparison represent the most recent open-source contributions to AI safety research. AEGIS 2.0 targets commercial use cases, focusing on critical safety concerns in human-LLM interactions and allowing annotators to provide free-text input for unclassified risks [9]. WildGuardMix integrates both synthetic and human data with diverse perspectives to ensure objectivity when evaluating novel safety risks, adversarial jailbreaks, and cultural contexts [10]. BeaverTails aligns human preferences within a 14-category safety taxonomy to enhance ethical alignment of LLMs [11]. GretelAI uniquely focuses on evaluating synthetic data generation with privacy guarantees, emphasizing statistical fidelity while preserving differential privacy [12]. MLCommons' AILuminate provides standardized evaluations across twelve hazard categories with over 24,000 test prompts aimed at industrial use [13]. The differences and similarities among these datasets highlight potential gaps in the resources available to the open-source community for measuring, and adversarially fine-tuning away, undesirable language model behavior.

https://arxiv.org/abs/2505.17636v1

3. METHODOLOGY

We compared the semantic differences and similarities of the datasets after concatenating them and analyzing the numerical values generated by an embedding model for each prompt string. We used an unsupervised machine learning approach to form clusters of similar semantic values, after cleaning the data and determining the minimum sample size needed for insights. The sample size, aimed at measuring categorical frequency differences within harm clusters across datasets, was calculated in one iteration for a maximum cluster number of 15 to adhere to the most recent taxonomy [9]; it drew on established research showing that k-means accuracy depends on adequate representation of natural category distributions [14], while targeting an effect size of 0.5, which is presumed large enough to affect future research [2]. Variables optimized for silhouette score in the full dataset included at least one relevant variant of an embedding model, dimensionality reduction technique, key hyperparameters, distance metrics, and number of clusters, as shown in Figure 1.
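The configuration sweep just described can be sketched as a simple grid enumeration. This is an illustrative skeleton, not the authors' code; the `evaluate` step referenced in the comments is a hypothetical placeholder for the embedding, reduction, clustering, and scoring pipeline.

```python
from itertools import product

# Grid from the methodology: two embedding models, two reduction techniques
# (each with its most consequential hyperparameter swept), two distance metrics.
embedders = ["MiniLM", "MPNet"]
reductions = ([("UMAP", {"n_neighbors": n}) for n in (15, 30)] +
              [("t-SNE", {"perplexity": p}) for p in (30, 50)])
distance_metrics = ["euclidean", "mahalanobis"]

configs = list(product(embedders, reductions, distance_metrics))
# 2 embedders x 4 reduction variants x 2 metrics = 16 configurations

for embedder, (reduction, params), metric in configs:
    # A hypothetical evaluate(embedder, reduction, params, metric) would embed
    # the prompts, reduce to 2D, cluster with k-means, and record the silhouette
    # score and processing time for this configuration (not implemented here).
    pass
```

Each configuration is then ranked by silhouette score, with processing time as the tie-breaker, as described in Section 3.7.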
After optimizing parameters for well-defined clusters, we used prompt values at each centroid to infer labels for the relevant harm from another language model, and then visualized the semantic space differences between benchmarks. Further analysis of prompt length distributions by benchmark (and by harm within benchmark) illustrates how differences in data collection could contribute to the results, since this is the most important aspect to consider for adding context to the results [15].

Fig. 1. Experimental methodology for clustering optimization configurations. This compared Euclidean and Mahalanobis distance metrics across two embedding models, followed by dimensionality reduction. Dimensionality reduction included Uniform Manifold Approximation and Projection (UMAP), optimizing n_neighbors as the most consequential hyperparameter, and t-distributed Stochastic Neighbor Embedding (t-SNE), optimizing perplexity as the most consequential hyperparameter. Cluster quality was evaluated using silhouette score as well as processing speed for scaling (BERTScore was attempted but was not designed to measure clusters and did not differentiate results between configurations).

3.1. Sample Size Calculation and Outlier Removal

We aimed to create clusters that maximize silhouette scores for categorical frequency counts between benchmarks, based on prior research indicating these scores reflect internal frequency count differences [14]. Additionally, research indicates that frequency differences in AI safety datasets are insignificant below an effect size of 0.5 [2], supporting its use in Cohen's sample size formula for detecting statistical differences [16], defined below in Equation 1.

(1)

We also calculated the minimal sample size for each benchmark using past research methods [17], assuming 15 clusters to align with the 12 core harm categories from Aegis 2.0 [9], plus 20 percent for broader semantic coverage [18]. This is illustrated in Equation 2, with k_max = 15 clusters and b = 5 benchmarks.

n_total = max(n_kmeans) × (k_max × 1.2) × b    (2)

Assumptions for this sample size are in Table 1. To enable comparative analysis across all five benchmarks and allow for smaller benchmark sizes, we selected an 85% confidence level (α = 0.15). This trade-off increases Type I error risk for broader dataset inclusion, prioritizing exploratory breadth over strict significance thresholds [16]. Research indicates that model misuse or safety harms in AI use occur frequently enough [2] for this significance level to be acceptable. Based on this, we calculated a cluster sample size of 109, resulting in a total required sample size of 8,175 across all five benchmarks to enable meaningful cluster differentiation.

Table 1. Sample Size Assumptions for Detecting Differences in Harm Categories Per Benchmark

Parameter                                  Value
Effect size (d)                            0.5
Statistical power (1 − β)                  0.8
Significance level (α)                     0.15
Sample size per cluster (n_kmeans)         109
Maximum clusters (k_max)                   15
Number of benchmarks (b)                   5
Required sample size per benchmark         1,635
Total sample size across all benchmarks    8,175

We used IQR bounds to compare prompt length distributions for outlier removal, flagging values outside the typical spread of the middle 50% of the data [20], as shown in Equation 3.

[Q1 − 1.5 × IQR, Q3 + 1.5 × IQR]    (3)

This was compared against the same distribution using z-score thresholds for outliers [19], as shown in Equation 4, where values beyond 3 standard deviations of the mean are considered outliers.

z = (x − μ) / σ, flagging |z| > 3    (4)

Fig. 2. Distribution of prompt lengths shows more prompts removed when removing outliers by IQR rather than z-score, which could be problematic for completeness of analysis.
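The two filters in Equations 3 and 4 can be sketched with the Python standard library. This is an illustrative comparison, not the paper's code; the sample data is made up to show how a long tail inflates σ and lets z-score filtering keep values that the IQR bounds remove.

```python
import statistics

def filter_iqr(xs):
    # Keep values inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (Equation 3).
    q1, _, q3 = statistics.quantiles(xs, n=4)  # quartiles
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in xs if lo <= x <= hi]

def filter_zscore(xs, k=3.0):
    # Keep values within k standard deviations of the mean (Equation 4).
    mu = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)  # population std; a choice, not from the paper
    return [x for x in xs if abs(x - mu) <= k * sigma]

# Toy right-skewed "prompt lengths": a compact bulk plus a long tail.
lengths = list(range(1, 21)) + [100, 110, 120]
kept_iqr = filter_iqr(lengths)   # drops the tail values
kept_z = filter_zscore(lengths)  # keeps them: sigma is inflated by the tail
```

On this skewed sample the IQR bounds exclude the tail while the 3σ rule retains it, which is exactly the counterintuitive looseness of z-score filtering discussed below.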
The IQR and z-score methods vary in outlier sensitivity, particularly in long-tailed distributions, since the z-score assumes normality, which this dataset does not satisfy. This normality assumption is likely to overly filter high-variance data [19], while the IQR method commonly suits skewed distributions [20].

As shown in the visualization comparing distributions by kernel density (Figure 3), z-score filtering was counterintuitively looser due to the inflated σ in the originally skewed distribution. IQR, being non-parametric, more aggressively excluded long-tail prompts. Since more variance was retained under z-score filtering, the longer-tailed variant of the data made more sense to use in this analysis, to capture divergent characteristics of safety benchmarks [9]; therefore, the dataset after removing outliers by z-score was used in the analysis.

Fig. 3. Distribution of prompt lengths showing a right-skewed density plot with a median of 157.0 characters (green dashed line) and a mean of 453.92 characters (red dashed line) after outlier removal and benchmark normalization. The highest concentration of lengths peaks near 100 characters, with a long tail extending beyond 2,500 characters for a smaller number of prompts.

3.2. Embedding Model Comparison

MiniLM and MPNet excel at different tasks while being similar in size, making them good candidates for comparison in a rigorous analysis [7]. MiniLM provides fast, efficient embeddings with quality representation [21]. MPNet enhances contextual and sequential encoding, often surpassing BERT in NLP tasks [22]. While MiniLM emphasizes speed, MPNet enhances context modeling [7] and was used to determine the optimal silhouette score for differentiating embedding clusters.

3.3. Dimensionality Reduction Techniques

t-distributed Stochastic Neighbor Embedding (t-SNE) preserves local structure for fine-grained clustering but struggles with global relationships and is computationally greedy [23]. Uniform Manifold Approximation and Projection (UMAP) preserves both local and global structure while scaling efficiently, making it more suitable for larger datasets [24]. These methods complement each other: t-SNE is used for cluster separation, while UMAP handles larger data structures in AI safety semantic categorizations [25]. The selected embedding models normalize prompt embeddings; however, UMAP can distort point distances [25], which is why these two methods are compared. The mathematical basis for normalization lies in ensuring consistent angle and gradient calculations throughout the pipeline [26]. While three-dimensional analysis is possible, we use two dimensions for optimal clustering capture without added noise [24], and we optimize by evaluating silhouette scores and computational efficiency for large-scale use.

3.4. Hyperparameter Selection within UMAP and t-SNE

Not all hyperparameters are essential [27], so we prioritized optimizing for maximum differences, enabling efficient replication by researchers.
UMAP's n_neighbors balances local and global structure, while min_dist maintains well-separated clusters [24], particularly with a min_dist of 0.1 for tighter clusters. n_neighbors was optimized because of the global and local structures evident in safety datasets [2], starting with a value of 15 and evaluating an increase to 30 [24]. For t-SNE, perplexity ensures robust neighborhood relationships and the learning rate stabilizes embedding convergence [28]. These methods enhance cluster separation and embedding coherence, improving silhouette score differentiation. In optimizing t-SNE hyperparameters, we prioritized perplexity [28] while keeping the learning rate constant at 100, associating higher values with past optimization. Perplexity values from 5-50 are ideal, with a default recommendation of 30 [23], so we analyzed 30 and 50 using grid search, further optimizing for silhouette scores as well as computational efficiency.

3.5. Distance Metrics Compared

Distance metrics deterministically shape embedding similarity assessment and, consequently, cluster formation integrity. We compare Euclidean and Mahalanobis distances, each offering a distinct perspective on semantic relationships in safety benchmark analysis. Euclidean distance, as calculated in Equation 5 below, works well in low-dimensional spaces with uniform variance but fails with high-dimensional correlated data, common in AI safety datasets [29].

d = √((x₂ − x₁)² + (y₂ − y₁)²)    (5)

The Mahalanobis distance, as calculated in Equation 6, addresses these limitations by incorporating the embedding covariance matrix (Σ), establishing a statistical measure that accounts for dimensional correlations [30], normalizes dimensional importance, and accommodates anisotropic clusters [31]. This facilitates the inclusion of diverse semantic relationships, length variations, and potential semantic outliers, commonly seen in jailbreak attempts and adversarial prompting strategies [2].

d_M(x, y) = √((x − y)ᵀ Σ⁻¹ (x − y))    (6)

Since the latter distance metric considers covariance, where x and y are vectors, it has proven effective for anisotropic clusters [30], and it is also relevant for longer prompts and outliers. Since these metrics complement each other, they were used as variables for comparison in the output [31].

3.6. Computational Efficiency Considerations

Computational efficiency, measured via processing time [26], is a secondary metric that quantifies real-world scalability across the variants of this analysis. Our analysis was performed on a hosted Nvidia Tesla T4 GPU with 2,560 CUDA cores and 16 GB of GDDR6 memory, based on the Turing architecture, and the same hardware was used for all configurations. This efficiency analysis complements silhouette scores to identify configurations that balance semantic accuracy and computational feasibility [24], crucial for timely identification of self-harm patterns in evolving safety benchmarks, potentially accelerating intervention development for vulnerable populations.

3.7. Optimal Clustering Configurations Identified

Bootstrapped confidence intervals estimate the uncertainty in silhouette scores [33], allowing comparison of multiple model configurations and highlighting the need to distinguish models with similar quality metrics by computational efficiency [26]. The dimensionality reduction optimization for each variant is assessed by the silhouette score S_sil from [35], calculated in Equation 7 below, where a(i) is the mean intra-cluster distance and b(i) is the mean nearest-cluster distance for the point i.

S_sil(i) = (b(i) − a(i)) / max(a(i), b(i))    (7)

As shown in Figure 4, MiniLM with Euclidean distance and UMAP reduction achieves one of the highest silhouette scores, 0.470 ± 0.024. Computationally, this configuration had the most efficient processing time (69.8s) among configurations showing statistically similar scores, making it the optimal clustering configuration [23]. This configuration's superiority aligns with earlier findings [7] that smaller, efficient transformer models like MiniLM can produce high-quality clustering embeddings despite reduced dimensionality, while also supporting research [24] indicating UMAP preserves both local and global structure better than t-SNE.

Fig. 4. Comparative analysis of model configurations showing silhouette scores and computational efficiency (processing time) across different embedding models (MPNet, MiniLM), distance metrics (Euclidean, Mahalanobis), and dimensionality reduction techniques (UMAP, t-SNE).

3.8. Refinement for Optimum Number of Clusters

A higher sensitivity threshold refines the optimum cluster count by identifying groups that, while less distinct in the embedding space, represent unique semantic concepts [15]. Previous studies show that higher sensitivity thresholds reveal more significant structural changes in high-dimensional data [34], so we analyzed this sensitivity using silhouette scores and confirmed it with the elbow method [36].

Fig. 5. Cluster optimization analysis using elbow and silhouette methods. The elbow plot (left) indicates an optimal k=5 clusters, while the silhouette score analysis (right) shows peak performance at k=6 with a score of 0.455, which are close enough to validate each other.

While silhouette scores for each k provide insight into intra-cluster cohesion for individual data points, the elbow method focuses on the overall within-cluster sum of squares W_k, calculated [36] in Equation 8 below, where C_j is the j-th cluster and μ_j its centroid.

W_k = Σ_{j=1}^{k} Σ_{x ∈ C_j} ||x − μ_j||²    (8)

The convergence of both elbow and silhouette methods near k=6 (Figure 5) confirms that the semantic space of safety benchmarks naturally organizes into six distinct harm categories, supporting earlier findings [2] that optimal harm taxonomies typically consolidate into 5-7 primary categories despite more granular subcategorizations. We chose 6 since we expected a higher number of clusters from the taxonomy illustrated most recently in the Aegis 2.0 dataset [9], and best practice is to decide between possible cluster counts based on which best fits the use case [39].

4. CENTROID LABELING AND INFERENCE

We extracted four prompt values at each centroid edge based on past research [37] and created a prompt template using GPT-4 from OpenAI with the Aegis 2.0 taxonomy as labels [9]. We instructed the model to label anything outside the prompt values as 'Other' under the Aegis 2.0 taxonomy, adhering to best practices [37], but the model did not use this 'Other' definition. We ran this inference call multiple times to ensure consistent results, serving as bootstrapping for the labels found by model inference [33]. The five categories identified as most differentiated by the inference model were 'Hate/Identity Hate', 'Suicide and Self Harm', 'Guns/Illegal Weapons', 'PII/Privacy', and 'Criminal Planning'.

5. CLUSTERING RESULTS BY BENCHMARK

The UMAP dimensionality reduction visualization (Figure 6) revealed natural clustering of safety prompts into six distinct harm categories, with a spatial organization that suggests underlying semantic relationships. The clear cluster separation (silhouette score 0.472, corroborated by an additional run) suggests strong thematic boundaries, while overlaps imply shared harm conceptualization among benchmark creators.
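For concreteness, the silhouette score behind these numbers (Equation 7) can be computed in plain Python. This is a minimal sketch on toy 2-D points, not the paper's pipeline, which scores UMAP-reduced prompt embeddings.

```python
import math

def silhouette_score(points, labels):
    # Mean over points of s(i) = (b(i) - a(i)) / max(a(i), b(i)), where a(i) is
    # the mean intra-cluster distance and b(i) the mean distance to the nearest
    # other cluster (Equation 7).
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    scores = []
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q is not p]
        if not own:  # singleton cluster: s(i) is conventionally 0
            scores.append(0.0)
            continue
        a = sum(math.dist(p, q) for q in own) / len(own)
        b = min(sum(math.dist(p, q) for q in qs) / len(qs)
                for l2, qs in clusters.items() if l2 != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated toy clusters give a score close to 1.
pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
labs = [0, 0, 1, 1]
score = silhouette_score(pts, labs)
```

Scores near 1 indicate tight, well-separated clusters; scores near 0 indicate overlapping ones, which is why 0.470-0.472 is read here as moderately strong thematic separation.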
This visualization emphasizes specific harm types, like GretelAI’s focus on privacy and WildGuardMix’s on self -harm scenarios. The spatial proximity between related concepts (e.g., Cri minal Planning adjacent to Controlled Substances) further validates the embedding space’s capture of semantic relationships. This clustering pattern provides empirical evidence for both the distinctness of harm categories and the varying coverage prioritie s across benchmark datasets, informing future benchmark development needs [2]. Computer Science & Information Technology (CS & IT) 35 Fig.6. UMAP visualization of safety benchmark prompts colored by dataset source, with labeled clusters revealing distinct semantic groupings around key harm categories. The p lot demonstrates clear separation between harm types while showing dataset -specific concentrations within certain domains. 6. CONTEXTUAL CONFOUNDS A ND BIASES The differences in density of prompt lengths from each dataset in semantic coverage around each cluster can be explained partially through biases in data collection. Prompt length distribution of the entire dataset used in analysis, for example (Figure 7), highlights potential differences in data collection strategies between benchmarks. In contrast to each other, GretelAI and WildGuardMix utilize substantially longer prompts (medians with over 700 characters), indicating more complex, context -rich scenarios for safety evaluation. This divergence indicates differing safety assessment approaches: shorter prompts target specific vulnerabilities, while longer prompts evaluate model behavior in
https://arxiv.org/abs/2505.17636v1
nuanced, real-world scenarios [38].

Fig. 7. Kernel density distribution of prompt lengths across five AI safety benchmarks, revealing a bimodal pattern: Aegis, BeaverTails, and AILuminate favor concise prompts (medians 55-136 characters), while GretelAI and WildGuardMix employ substantially longer prompts (medians over 700 characters).

7. LIMITATIONS

Our analysis has several methodological constraints. Using 85% confidence intervals (α = 0.15) rather than the conventional 95% [16] represents a deliberate trade-off between statistical rigor and practical detection sensitivity [39], introducing marginally higher Type I error potential while improving cluster detection. The clustering reveals patterns specific to these five benchmarks that may not generalize across broader AI safety contexts [1]. Our methodology exhibits potential biases: dimensionality reduction loses information [25]; the choice of embedding model imposes representational constraints [7]; and implicitly Western views of harm in the benchmarks [40] may skew category definitions. The ecological fallacy of generalizing cluster-level findings to individual prompts and the base-rate fallacy of overemphasizing harm categories with lower prevalence further limit interpretation. Separate domain-specific safety datasets likely exhibit different semantic structures [2], and our equal weighting of benchmarks fails to reflect variations in deployment likelihood or importance. Future work should expand this framework to include benchmarks from more diverse cultural contexts [4], evaluate embedding-bias propagation [15], and explore prompt-response relationships in adversarial settings [41].
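The prompt-length comparison behind Figure 7 can be reproduced in outline with a kernel density estimate. The sketch below uses simulated lognormal lengths whose medians (~100 vs. ~750 characters) merely mimic the contrast reported in the paper; the dataset names and real lengths are not used here:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Simulated character-length samples: medians chosen to echo the reported
# short-prompt (55-136 chars) vs. long-prompt (>700 chars) benchmark split.
lengths = {
    "short-prompt benchmark": rng.lognormal(np.log(100), 0.5, size=1000),
    "long-prompt benchmark":  rng.lognormal(np.log(750), 0.4, size=1000),
}

grid = np.linspace(1, 2500, 500)
for name, x in lengths.items():
    kde = gaussian_kde(x)              # Gaussian kernel density estimate
    peak = grid[np.argmax(kde(grid))]  # location of the density's mode
    print(f"{name}: median={np.median(x):.0f} chars, KDE peak near {peak:.0f}")
```

Plotting `kde(grid)` for each benchmark on one axis yields the kind of bimodal picture Figure 7 shows.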
Furthermore, we acknowledge potential disciplinary bias: the authors' backgrounds, predominantly in technology, may limit our perspective on interdisciplinary trends and applications outside the technical domain, reflecting a homogeneity in expertise.

8. CONCLUSIONS

Our multidimensional clustering analysis identifies critical semantic gaps across AI safety benchmarks, revealing asymmetric coverage: WildGuardMix excels in self-harm detection while GretelAI prioritizes privacy over suicide content. These imbalances create vulnerabilities akin to those on social media, where content-moderation flaws have enabled the proliferation of self-harm content. AI teams should conduct cross-benchmark evaluations using this dimensionality-reduction approach, focusing on underrepresented harm categories to proactively identify and mitigate emerging risks before deployment.

ACKNOWLEDGMENTS

We thank Sari Andoni at Trase for guidance and MLCommons colleagues for their vital expertise in industry priorities, along with the reviewers of the International Conference on Advanced Natural Language Processing.

REFERENCES

[1] Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P.S., et al. (2021) "Ethical and social risks of harm from language models", arXiv preprint arXiv:2112.04359.
[2] Hendrycks, D., Mazeika, M., & Woodside, T. (2023) "An overview of catastrophic AI risks", arXiv preprint arXiv:2306.12001.
[3] Bender, E.M., Gebru, T., McMillan-Major, A., & Mitchell, M. (2021) "On the dangers of stochastic parrots: Can language models be too big?", Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
[4] Röttger, P., Pierleoni, F., & Hovy, D. (2024) "SafetyPrompts: a Systematic Review of Open Datasets for Evaluating and Improving Large Language Model Safety", arXiv preprint arXiv:2404.05399.
[5] Ren, R., Basart, S.,
Khoja, A., Gatti, A., Phan, L., Yin, X., et al. (2024) "Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?", arXiv preprint arXiv:2405.19928.
[6] Lee, N., Hong, J., & Thorne, J. (2025) "Evaluating the Consistency of LLM Evaluators", Proceedings of the 31st International Conference on Computational Linguistics, pp. 10650-10659.
[7] Reimers, N. & Gurevych, I. (2019) "Sentence-BERT: Sentence embeddings using Siamese BERT-networks", Proceedings of the 2019 Conference on EMNLP-IJCNLP, pp. 3982-3992.
[8] MacQueen, J. (1967) "Some methods for classification and analysis of multivariate observations", Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, pp. 281-297.
[9] Ghosh, S., Varshney, P., Sreedhar, M.N., Padmakumar, A., Rebedea, T., Varghese, J.R., & Parisien, C. (2025) "AEGIS2.0: A Diverse AI Safety Dataset and Risks Taxonomy for Alignment of LLM Guardrails", arXiv preprint arXiv:2501.09004.
[10] Han, X., Dziri, N., Zhang, J., et al. (2024) "WildGuard: Open One-stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs", arXiv preprint arXiv:2406.18495.
[11] Ji, J., Liu, M., Dai, J., Pan, X., Zhang, C., Bian, C., Chen, B., Sun, R., Wang, Y., & Yang, Y. (2023) "BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset", NeurIPS 2023 Track on Datasets and Benchmarks.
[12] Ramaswamy, L., Van Segbroeck, M., & Nathawani, D. (2024) "Building Datasets to Enable Safer AI Responses", Gretel.ai Blog.
[13] MLCommons AI Safety Working Group (2024) "AILuminate: A Benchmark for Evaluating AI Risks", Wired.
[14] Ahmad, P.H. & Dang, S. (2015) "Performance evaluation of clustering algorithm using different datasets", Journal of Information Engineering and Applications, Vol. 5, No. 1, pp. 39-45.
[15] Thrun, M.C. (2021) "Pitfalls in clustering", Scientific Reports, Vol. 11, No. 1, p. 18997.
[16] Cohen, J.
(1988) "Statistical power analysis for the behavioral sciences", Routledge.
[17] Pandey, K.K. & Shukla, D. (2021) "Euclidean distance stratified random sampling based clustering model for big data mining", Computational and Mathematical Methods, Vol. 3, No. 6.
[18] Roscoe, J.T. (1975) "Fundamental research statistics for the behavioral sciences" (2nd ed.), Holt, Rinehart and Winston.
[19] Leys, C., Ley, C., Klein, O., Bernard, P., & Licata, L. (2013) "Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median", Journal of Experimental Social Psychology, Vol. 49, No. 4, pp. 764-766.
[20] Wilcox, R.R. (2012) "Modern statistics for the social and behavioral sciences: A practical introduction", CRC Press.
[21] Wang, W., Bao, H., Huang, S., Dong, L., & Wei, F. (2020) "MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers", arXiv preprint arXiv:2002.10957.
[22] Song, K., Tan, X., Qin, T., Lu, J., & Liu, T.Y. (2020) "MPNet: Masked and Permuted Pre-training for Language Understanding", arXiv preprint arXiv:2004.09297.
[23] Van der Maaten, L. & Hinton, G. (2008) "Visualizing data using t-SNE", Journal of Machine Learning Research, Vol. 9, pp. 2579-2605.
[24] McInnes, L., Healy, J., & Melville, J. (2018) "UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction", arXiv preprint arXiv:1802.03426.
[25] Kobak, D. & Berens, P. (2019) "The art of using t-SNE for single-cell transcriptomics", Nature Communications, Vol. 10, No. 1,
pp. 1-14.
[26] Wang, Y., Huang, H., Rudin, C., & Shaposhnik, Y. (2021) "Understanding how dimension reduction tools work: An empirical approach to deciphering t-SNE, UMAP, TriMap, and PaCMAP for data visualization", Journal of Machine Learning Research, Vol. 22, pp. 1-73.
[27] Bergstra, J. & Bengio, Y. (2012) "Random search for hyper-parameter optimization", Journal of Machine Learning Research, Vol. 13, pp. 281-305.
[28] Belkina, A.C., Ciccolella, C.O., Anno, R., Halpert, R., Spidlen, J., & Snyder-Cappione, J.E. (2019) "Automated optimized parameters for t-distributed stochastic neighbor embedding improve visualization and analysis of large datasets", Nature Communications, Vol. 10, No. 1, pp. 1-12.
[29] Aggarwal, C.C., Hinneburg, A., & Keim, D.A. (2001) "On the surprising behavior of distance metrics in high-dimensional space", Proceedings of the 8th International Conference on Database Theory (ICDT), pp. 420-434.
[30] De Maesschalck, R., Jouan-Rimbaud, D., & Massart, D.L. (2000) "The Mahalanobis distance", Chemometrics and Intelligent Laboratory Systems, Vol. 50, No. 1, pp. 1-18.
[31] McLachlan, G.J. (1999) "Mahalanobis distance", Research Letters in Information and Mathematical Sciences, Vol. 1, No. 1, pp. 33-41.
[32] EleutherAI (2023) "EleutherAI: Democratizing AI Research", EleutherAI Blog.
[33] Efron, B. & Tibshirani, R.J. (1994) "An Introduction to the Bootstrap", Chapman and Hall/CRC.
[34] Satopää, V.A., Albrecht, J., Irwin, D., & Raghavan, B. (2011) "Finding a 'kneedle' in a haystack: Detecting knee points in system behavior", Proceedings of the 31st International Conference on Distributed Computing Systems Workshops, pp. 166-171, IEEE.
[35] Rousseeuw, P.J. (1987) "Silhouettes: a graphical aid to the interpretation and validation of cluster analysis", Journal of Computational and Applied Mathematics, Vol. 20, pp. 53-65.
[36] Tibshirani, R., Walther, G., & Hastie, T.
(2001) "Estimating the number of clusters in a data set via the gap statistic", Journal of the Royal Statistical Society B, Vol. 63, No. 2, pp. 411-423.
[37] Ailem, M., Marazopoulou, K., Siska, C., & Bono, J. (2024) "Examining the robustness of LLM evaluation to the distributional assumptions of benchmarks", Annual Meeting of the Association for Computational Linguistics.
[38] Lacroix, T. & Luccioni, A.S. (2024) "Metaethical perspectives on 'benchmarking' AI ethics", unpublished.
[39] Faul, F., Erdfelder, E., Lang, A.G., & Buchner, A. (2007) "G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences", Behavior Research Methods, Vol. 39, No. 2, pp. 175-191.
[40] Caliskan, A., Bryson, J.J., & Narayanan, A. (2017) "Semantics derived automatically from language corpora contain human-like biases", arXiv preprint arXiv:1608.07187.
[41] Kenton, J., Xu, A., Dathathri, A., Glaese, M., Martens, S., Lillicrap, T., et al. (2021) "Alignment for honesty", arXiv preprint arXiv:2112.00849.

© 2025 by AIRCC Publishing Corporation. This article is published under the Creative Commons Attribution (CC BY) license.
Stereotype Detection in Natural Language Processing

ALESSANDRA TERESA CIGNARELLA, Language and Translation Technology Team, Ghent University, Belgium
ANASTASIA GIACHANOU, Department of Methodology and Statistics, Utrecht University, The Netherlands
ELS LEFEVER, Language and Translation Technology Team, Ghent University, Belgium

Abstract. Stereotypes influence social perceptions and can escalate into discrimination and violence. While NLP research has extensively addressed gender bias and hate speech, stereotype detection remains an emerging field with significant societal implications. In this work, we present a survey of existing research, analyzing definitions from psychology, sociology, and philosophy. A semi-automatic literature review was performed using Semantic Scholar. We retrieved and filtered over 6,000 papers (in the year range 2000–2025), identifying key trends, methodologies, challenges, and future directions. The findings emphasize stereotype detection as a potential early-monitoring tool to prevent bias escalation and the rise of hate speech. Our conclusions highlight the need for a broader, multilingual, and intersectional approach in NLP studies.

Additional Key Words and Phrases: stereotype detection, natural language processing, social psychology, literature review, hate speech, gender bias, intersectionality

ACM Reference Format: Alessandra Teresa Cignarella, Anastasia Giachanou, and Els Lefever. 2025. Stereotype Detection in Natural Language Processing. 1, 1 (April 2025), 31 pages. https://doi.org/XXXXXXX.XXXXXXX

Warning: This paper contains examples of stereotypical and offensive content.

1 INTRODUCTION

The rise of digital environments, such as social media, offers great opportunities to share ideas, grow businesses, or build communities on a global scale. At the same time, however, this low-barrier, real-time accessibility also fosters the rapid proliferation of harmful content and hate speech.
As the boundaries between online and offline spaces blur, real-world incidents frequently ignite waves of hate targeting specific demographic groups – whether based on race, religion, gender, or sexual orientation – across both spaces. As a very relevant and impactful real-world example, we can point to the attacks and military response in Israel and Gaza, which have triggered an alarming rise in threats and violence against both the Jewish and the Muslim communities in online and offline settings across all of Europe. Reports of the European Commission¹ indeed signal a considerable increase in hate speech and hate crime towards such communities over the last twenty years. Expressing hate seems to have become socially acceptable [Bilewicz and Soral 2020], stigmatizing and dehumanizing individuals and groups of people based on race, age, ethnicity, religion, gender, or

¹https://www.europarl.europa.eu/RegData/etudes/BRIE/2024/762389/EPRS_BRI(2024)762389_EN.pdf

Authors' addresses: Alessandra Teresa Cignarella, alessandrateresa.cignarella@ugent.be, Language and Translation Technology Team, Ghent University, Ghent, Belgium; Anastasia Giachanou, a.giachanou@uu.nl, Department of Methodology and Statistics, Utrecht University, Utrecht, The Netherlands; Els Lefever, els.lefever@ugent.be, Language and Translation Technology Team, Ghent University, Ghent, Belgium.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior
https://arxiv.org/abs/2505.17642v1
specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
Manuscript submitted to ACM

sexual orientation, leading to polarization, radicalization, and violence. While victims often suffer immediate emotional distress and fear, the longer-term impact can be just as damaging, as members of minority communities adapt their behavior to avoid hate, limiting their full participation or engagement in society [Paterson et al. 2019]. A crucial yet often under-examined component of hate speech is its foundation in stereotypes – widely held but oversimplified and generalized beliefs about social groups. Stereotypes are not merely cognitive shortcuts used to simplify the cognitive overload of the real world; they are deeply ingrained social constructs that influence perception, reinforce inequalities, and shape discriminatory behaviors over time [Augoustinos and Walker 1998]. Repeated exposure to stereotypes in media, conversations, and online discourse eventually leads to their normalization, making them more socially acceptable and even unconscious in everyday interactions. Over time, these internalized biases evolve into prejudices and discriminatory behaviors, fueling societal division and violence. Consequently, hate speech does not arise in isolation: it is the product of entrenched stereotypes that have been reinforced through cultural, political, and technological means. In particular, both hate speech and stereotypes are products of "shared cultural knowledge, beliefs, and group ideals that create ethnic identities and influence group actions" [Ayansola 2021], and recent studies have highlighted their intricate relationship [Davani et al. 2023; Vargas et al. 2023]. Sociological research has long established that stereotypes, prejudice, and discrimination are interconnected but distinct phenomena.
Although all are rooted in bias against the outgroup, i.e., people outside one's own social group or ingroup [Dovidio et al. 2010], sociology distinguishes between (1) prejudice, or emotional bias, when people show negative attitudes or emotions towards the outgroup; (2) stereotypes, or cognitive bias, which are based on beliefs or cognitive associations about group characteristics; and (3) behavioral bias expressed through discrimination, which involves actions or practices that disadvantage certain demographic groups [Fiske 1998; Stangor 2000]. Although considered to be different types of bias, researchers have argued that stereotypes and prejudices are linked, with prejudice often stemming from stereotypic beliefs [Dovidio et al. 2010; Fiske 1998]. In our research, we will focus on stereotypes, which are considered category-based generalizations that not only reflect "beliefs about the traits characterizing typical group members but also contain information about other qualities such as social roles and the degree to which members of the group share specific qualities" [Dovidio et al. 2010]. Understanding and detecting stereotypes is particularly important because they operate not only at the explicit level but also implicitly, shaping thought processes and behaviors outside of conscious awareness. Even individuals who consciously reject stereotypes can still exhibit implicit biases [Greenwald and Banaji 1995]. These implicit biases are shaped by social and cultural experiences and can impact decision-making in areas such as hiring, policing, and healthcare. For example, Hoffman et al. [2016] showed that medical professionals may unintentionally
provide different levels of care to patients based on their race, often offering less pain relief to black patients compared to white patients, despite similar reported pain levels.²

²Based on the stereotypical assumption that black people experience less pain because they have thicker skin or less sensitive nerve endings.
³https://implicit.harvard.edu/implicit/takeatest.html

A well-known method to measure implicit bias is the Implicit Association Test (IAT), a widely used tool in social-cognition research to measure associations between concepts and attributes [Greenwald et al. 1998]. While mainly used to detect implicit attitudes, recent research shows its output may also represent explicit attitudes [Tahamata and Tseng 2024]. The test involves categorizing stimuli into concept and attribute pairs, with faster responses indicating stronger associations [De Houwer 2001]. The tool claims to also reveal implicit attitudes of which the individuals taking the test were unaware.³ While the IAT measures implicit biases, which are influenced by broader societal stereotypes, the Stereotype Content Model (SCM) explains how these stereotypes are formed. The SCM stems from social psychology and proposes to classify stereotypes along two dimensions, or axes: warmth, encompassing sociability and morality and associated with cooperative groups, and competence, encompassing ability and agency and associated with high-status groups [Fiske et al. 2002]. On one end of the spectrum, the ingroup is often perceived as both warm and competent; on the opposite end are social outcasts, such as homeless individuals, who are frequently stereotyped as uncooperative and incompetent.
Certain demographic groups may score high on one dimension but low on the other, such as older people, who are often viewed as high in warmth but low in competence.⁴

To take into account a certain degree of subjectivity as well, the SCM was later expanded by Koch et al. [2016], giving rise to the Agency-Beliefs-Communion (ABC) model. This newer model directly stems from the SCM but introduces a crucial third dimension: beliefs. While the SCM focuses on how people stereotype groups based on warmth and competence, the ABC model also explains how people form impressions of individuals by considering whether they share our values and beliefs. Imagine you meet a new coworker. The SCM [Fiske et al. 2002] would suggest that you judge them based on warmth (Are they friendly?) and competence (Are they good at their job?). If they are both friendly and skilled, you admire them. If they seem unfriendly but competent, you may resent them. The ABC model [Koch et al. 2016, 2020], on the other hand, suggests that beyond warmth and competence you also consider their beliefs, i.e., whether they share your values or opinions. For example, if your coworker is friendly and competent but has completely different political or ethical views, you might still feel distant from them.

With this paper, we aim to provide a survey of papers on different aspects of stereotype detection. Although our main focus is on papers that study stereotypes from the NLP perspective, we also include
theoretical papers that cover a sociological perspective on the problem. Stereotype detection is a rather new field in NLP and computational linguistics and, as explained before, is closely related to the fields of bias and hate-speech detection. In particular, although gender bias has been researched to a great extent, most of the work is conducted for English or other high-resource languages. Other limitations for the detection and mitigation of gender bias are the strict definition of gender as a binary variable and the lack of evaluation baselines and pipelines [Stańczak and Augenstein 2021].

Fig. 1. The Pyramid of Hate.

This survey is inspired by the survey on automatic detection of hate speech by Fortuna and Nunes [2018], who provide an overview of the research on hate-speech detection: its definition, applied methods, and resources. We believe stereotype detection is an important NLP task with a potentially far-reaching societal impact, as it allows bias to be detected in its early-stage manifestations, before it turns into hate or violence, and it can be helpful in counteracting them. This is inspired by the Pyramid of Hate, illustrated in Figure 1, which shows how killing and other acts of violence are often the extreme consequences that dangerously evolve from too-often overlooked acts of bias and prejudice, such as offensive jokes, offensive humor, non-inclusive language, and stereotypes.⁵

⁴Example from Fiske: https://nobaproject.com/modules/prejudice-discrimination-and-stereotyping
⁵Adapted from the Anti-Defamation League.

1.1 What are Stereotypes?

As mentioned earlier, stereotypes are widely held, simplified, and often generalized beliefs about a particular group of people. Fiske [2000] defines stereotypes as cognitive structures that contain knowledge, beliefs, and expectations about social groups.
These mental shortcuts help individuals navigate social interactions but can also reinforce biases and perpetuate discrimination. Beyond psychology, sociology offers a broader perspective on stereotypes, viewing them as socially constructed narratives that influence group identities, power dynamics, and social inequalities. Sociologist Lippmann [1922] was one of the first to conceptualize stereotypes as “pictures in our heads”, highlighting how they shape perceptions and maintain social hierarchies. Meanwhile, in philosophy, scholars like Haslanger [2000] argue that stereotypes play a role in sustaining oppressive structures, particularly in relation to race, gender, and class. Institutional frameworks also address stereotypes, particularly in relation to gender. The European Union and the United Nations define gender stereotypes as preconceived ideas about the roles and characteristics of women and men, often leading to unequal treatment and reinforcing structural barriers. While gender is a significant dimension, it is essential to broaden the scope of stereotype research to include ethnicity, disability, sexual orientation, and other social categories. A more comprehensive approach allows for an intersectional analysis, recognizing how multiple identity factors interact to shape experiences of bias and discrimination. Intersectionality, a concept introduced by Crenshaw [1989], highlights how overlapping social identities create unique experiences of oppression, making it crucial to study stereotypes beyond isolated categories. By incorporating an intersectional perspective, we can better understand how different forms of bias intersect and reinforce one another in social and linguistic contexts. In this work, we define
stereotypes as recurrent, socially constructed (generalized) associations that ascribe fixed characteristics to individuals based on their group membership. These associations often simplify complex realities, reinforcing bias and prejudice and limiting freedom and opportunities for those subjected to them. Stereotypical text can manifest in various ways across different target groups. For example:

• Gender stereotypes: "Women are naturally bad at driving" or "Men don't know how to show emotions";
• Ethnic stereotypes: "Asian people are good at math" or "Latinos are always passionate and loud";
• Age stereotypes: "Older employees struggle with technology" or "Young people are lazy and entitled";
• Disability stereotypes: "People with disabilities are always inspirational" or "People with autism lack empathy";
• LGBTQIA+ stereotypes: "Gay men are always flamboyant" or "Lesbians are just tomboys who hate men".

These examples illustrate how stereotypes reduce individuals to predefined categories, influencing social perception and reinforcing systemic biases. Addressing and analyzing such stereotypes is crucial for developing fair and inclusive NLP models capable of mitigating their harmful effects.

1.2 Scope of this Survey

This review paper aims to provide an overview of stereotype detection in NLP, summarizing and discussing theoretical frameworks stemming from psychology studies, existing methodologies, datasets, challenges, and future directions. While previous research has extensively focused on hate speech and on mitigating stereotypical bias in AI systems, stereotypes, which are a crucial aspect of harmful language, have only recently attracted research attention. Our focus on stereotypes, rather than hate speech, is motivated by their subtle yet pervasive influence on language and social interactions. Stereotypes serve as the cognitive foundation for prejudice and discrimination, shaping societal attitudes and influencing behaviors both explicitly and implicitly.
Hate speech is often explicit, characterized by hostility and derogatory expressions targeting specific groups. Stereotypes, in contrast, can manifest in more implicit and normalized ways, reinforcing social hierarchies and biases without overtly aggressive language. This distinction is important because harmful stereotypes can propagate discrimination even in the absence of explicit hate speech, making their detection vital for inclusiveness-aware AI development. Addressing stereotypes at an early stage can serve as a preventative measure, mitigating their escalation into more explicit forms of bias and discrimination.

Please note that this survey is not a systematic review, since conducting one would require an exhaustive analysis of all 139+ papers related to stereotype detection in NLP that were retrieved (see details in Section 2). Instead, we sought patterns in the retrieved papers to provide a broad yet selective overview of relevant work, focusing on current key trends, methodologies, and gaps in the field. Ultimately, this survey aims to be a starting point for scholars who are now entering this research topic, offering a structured understanding of what has been done so far and identifying open challenges.

The rest of the survey is structured as follows: Section 2 outlines the methodology adopted for conducting the literature review. Section 3 presents an organized overview of the literature, categorizing works based on their focus, techniques, and findings related to stereotypes in NLP. Section 4 discusses the main challenges
faced in this field, including limitations in data, evaluation metrics, and language coverage. Finally, Section 5 concludes the survey by summarizing the insights gained and outlining future research directions, with Section 5.1 highlighting specific opportunities for advancing stereotype detection in NLP.

2 WHAT NLP RESEARCH HAS BEEN CONDUCTED SO FAR REGARDING STEREOTYPE DETECTION?

To gain insight into NLP researchers' perspectives, methodologies, and types of research on the topic of Stereotype Detection, we conducted an automated search to identify relevant literature. This section outlines the methodology employed and presents a detailed analysis of the results.

Fig. 2. Pipeline describing the different steps of the survey process.

2.1 Methodology of the Literature Review

Our methodology was designed to ensure a comprehensive approach to identifying relevant literature. The process began with selecting appropriate keywords, followed by a manual search to obtain an initial set of seed papers. Later on, an automatic search was conducted, retrieving up to 1,000 papers per year from 2000 to 2025. The results of the automatic search were filtered and cross-referenced with previously known works on the topic to ensure completeness. We then performed a cross-search to identify additional relevant publications, followed by an automatic filtering process to refine the dataset. Finally, bibliometric analyses were conducted to provide insights into research trends and to highlight the most recurrent patterns and key contributions within the body of literature analyzed. The methodology can be divided into three main phases: (i) Literature Search, (ii) Filtering, and (iii) Analysis. In Figure 2, we outline the steps of the adopted pipeline, and in the subsections below we describe the three phases in detail.
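The yearly bulk retrieval described in Section 2.1.1 might look like the sketch below. The endpoint path and field names follow the public Semantic Scholar Graph API, but treat them as assumptions and check the current API documentation before relying on them; only the query string is taken from the survey itself:

```python
import requests

# Assumed bulk-search endpoint of the Semantic Scholar Graph API;
# verify against the current API docs before use.
BULK_URL = "https://api.semanticscholar.org/graph/v1/paper/search/bulk"
FIELDS = "title,year,abstract,fieldsOfStudy,publicationTypes,venue,citationCount"


def build_params(query: str, year: int) -> dict:
    """Query parameters for one year of the 2000-2025 sweep."""
    return {"query": query, "year": str(year), "fields": FIELDS}


def fetch_year(query: str, year: int, limit: int = 1000) -> list:
    """Retrieve up to `limit` papers for a single year (performs a network call)."""
    resp = requests.get(BULK_URL, params=build_params(query, year), timeout=30)
    resp.raise_for_status()
    return resp.json().get("data", [])[:limit]


if __name__ == "__main__":
    # e.g. papers = fetch_year("stereotype detection nlp", 2024)
    pass
```

Looping `fetch_year` over 2000-2025 and concatenating the results would reproduce the shape of the database summarized in Table 1.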
2.1.1 Literature Search. In the initial stage we conducted a manual search on Google Scholar⁶ using the keywords <stereotype + detection> and <stereotype + nlp>. To ensure the creation of a relevant database, we limited our selection to the first five pages of results (approximately 50 papers), sorted by relevance and without applying any time filter. We selected only open-access papers. Despite its popularity among fellow researchers, Google Scholar posed significant challenges due to its lack of an official API. To overcome these limitations, we therefore decided to rely on the Semantic Scholar API⁷, which offers a fairer, more robust, and compliant solution for retrieving academic literature. We searched for a maximum of 1,000 articles per year over a 26-year span (2000–2025), using the query <stereotype + detection + nlp> within the Paper Bulk Search. For every hit of the search, we gathered the following information: title, publication year, abstract, field of study (e.g., engineering, chemistry, linguistics), type of publication (e.g., review, journal article, conference), publishing venue (e.g., Nature, IEEE, New England Journal of Medicine), and the citation count.⁸

Title: 6,304
Year: 6,304
Abstract: 5,762
Field of Study: 4,501
Publication Type: 4,883
Venue: 5,126
Citation Count: 4,365

Table 1. Resulting size of the database after the search on Semantic Scholar.

As shown by the varying numbers in Table 1, some entries in the database were incomplete. Consequently, a filtering step was necessary to ensure
https://arxiv.org/abs/2505.17642v1
the data's consistency and reliability (see Section 2.1.2). However, before proceeding with such a drastic filtering step, we aimed to identify the main research topics that present a significant overlap with the terms <stereotype detection> in the papers we collected. To achieve this, we applied BERTopic [9] to the 6,304 collected papers, analyzing the texts of all abstracts (which were pre-processed by removing punctuation, stop words, and query-related terms). The details on the parameters we used with BERTopic can be found in Appendix A. To ensure relevance was maintained, we set a minimum threshold of 10 papers per topic. The model then identified eight key topics closely related to Stereotype Detection (in descending order): Fake News, Gender Bias, Emotions, Hate Speech, Sentiment, Mental Health, Computer Vision, and Sarcasm. The relevance of each topic is presented in Figure 3. Despite the interesting results obtained so far, the data still contained a significant amount of noise. To address this while preserving meaningful insights, we implemented a careful filtering process, refining the dataset step by step.

Fig. 3. Topics extracted with BERTopic from the text of the abstracts.

[6] https://scholar.google.com/
[7] https://www.semanticscholar.org/product/api
[8] Please note that the last version of the database was created on February 10, 2025.
[9] https://maartengr.github.io/BERTopic/index.html

2.1.2 Filtering. To ensure no relevant data got lost, we approached the filtering process gradually, in small and careful steps. First, we removed all entries with corrupted data from the database (i.e., if any of the retrieved fields shown in Table 1 was empty). Secondly, we manually inspected some random samples of the database, and observed that
despite including the word <nlp> in the original search query on Semantic Scholar, the results were still quite noisy, containing many entries belonging to disciplines and fields outside the scope of this survey. In Figure 4 we display the number of articles per field of study. As is immediately obvious, the main subjects related to the use of the words <stereotype detection> are Computer Science, Medicine, Psychology, Biology, and Engineering.

Fig. 4. Number of publications per field of study.

We performed a semi-automatic review of the downloaded articles by extracting the most common words (from titles and abstracts) and their relative frequency. This analysis revealed that a significant number of papers contained terms such as autism spectrum, sound event, motor movement, dcase, intrusion detection, bomb attack, child abuse, blue whale, and stimuli, which are not relevant for our goal. To remove this extra noise, a more thorough filtering process was performed. We began by removing papers from scientific areas unrelated to our focus, such as Medicine, Chemistry, Biology, Physics, Geography, Geology, Mathematics, Environmental Science, Engineering, Art, History, Economics, Political Science, Business, and Materials Science. After this filtering step, the number of papers was reduced from 6,304 to 4,959. In the final filtering step, we retained only those papers where the terms <stereotype> or <stereotypes> or <
stereotyped> or <stereotyping> AND <nlp> or <natural language processing> appeared in the title OR in the abstract. This ensured the dataset remained focused on the topic of Stereotype Detection in NLP. The final count amounts to 139 papers.

2.1.3 Analyses. The analyses that follow are thus carried out on the final database, consisting of 139 research papers. First, we tokenized and lemmatized the titles and abstracts to standardize the text, using NLTK [10]. Next, we removed common English stop words (e.g., 'the', 'and') to eliminate noise. Finally, we filtered out query-specific and domain-related terms such as ['nlp', 'natural', 'language', 'processing', 'stereotype', 'detection', 'et al'], which were too frequent or generic to meaningfully contribute to the analysis. This allowed us to focus on the most distinctive terms in the corpus. We were curious to analyze the textual content of Titles and Abstracts after removing the main query words, to identify whether any interesting patterns would emerge regarding the relationship with Stereotype Detection. For this purpose, we extracted the most frequent bigrams from both Titles and Abstracts. The results are presented in Figure 5.

Fig. 5. Most frequent bigrams in Titles and Abstracts.

The results highlight that gender bias is overwhelmingly prevalent in our corpus of selected articles. This is not a surprising result given the attention that gender bias has received in the field. We then observe terms associated with the technical aspects of NLP, such as word embeddings, large language models, machine learning, and training data, as well as concepts related to societal phenomena like hate speech or social biases. The term machine translation is also quite frequent, and, unsurprisingly, given the textual genre most NLP research focuses on, social media also emerges as a frequently occurring bigram.
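The preprocessing and bigram-counting pipeline just described can be sketched as follows. The survey itself relies on NLTK's tokenizer, lemmatizer, and full English stop-word list; the whitespace tokenizer and the tiny stop-word set below are simplified stand-ins for illustration only:

```python
from collections import Counter

# Illustrative stand-ins: a real run would use NLTK's stop-word list
# and lemmatizer instead of these small hand-picked sets.
STOP_WORDS = {"the", "and", "of", "in", "a", "to", "for", "on", "is"}
QUERY_TERMS = {"nlp", "natural", "language", "processing",
               "stereotype", "detection", "et", "al"}

def clean_tokens(text: str) -> list[str]:
    """Lowercase, strip surrounding punctuation, drop stop words and query terms."""
    stripped = (tok.strip(".,;:!?()[]\"'") for tok in text.lower().split())
    return [t for t in stripped if t and t not in STOP_WORDS | QUERY_TERMS]

def top_bigrams(texts: list[str], n: int = 5) -> list[tuple[tuple[str, str], int]]:
    """Count adjacent-token pairs over all texts and return the n most frequent."""
    counts: Counter = Counter()
    for text in texts:
        toks = clean_tokens(text)
        counts.update(zip(toks, toks[1:]))
    return counts.most_common(n)
```

For example, running `top_bigrams` over a handful of abstracts about gender bias would surface ("gender", "bias") as the dominant pair, mirroring the pattern in Figure 5.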
[10] https://www.nltk.org/

To examine the evolving interest in Stereotype Detection in NLP, we further analyzed the occurrence of such terms over time, as illustrated in Figure 6. The data reveals a notable rise in the number of publications starting in 2019, with a significant surge in 2021. This upward trajectory continued sharply through 2023 and 2024, indicating a growing focus on the topic. Given this trend, we anticipate that research in this area will expand even further in the coming years.

Fig. 6. Publication trend of 'Stereotype Detection in NLP' over time.

Another indication of the field's novelty can be observed by examining its primary publishing venues. In Table 2 we show the most prominent conferences and workshops where papers regarding Stereotype Detection are currently being published. The leading venue for papers on this topic is the open-access repository arXiv.org, suggesting that much of the research in this area is either very recent or still in its pre-print stages. Unsurprisingly, other prominent venues include some of the main *CL conferences (ACL, EMNLP, NAACL, and LREC). Consistent with the bigram analysis shown above, two other notable venues are the Conference on Fairness, Accountability, and Transparency (FAccT) and the Workshop on Gender Bias in Natural Language
Processing (GeBNLP).

Table 2. Most frequent publishing venues.
  Venue | # papers
  arXiv.org | 21
  Annual Meeting of the Association for Computational Linguistics (ACL) | 11
  Conference on Empirical Methods in Natural Language Processing (EMNLP) | 8
  North American Chapter of the Association for Computational Linguistics (NAACL) | 6
  International Conference on Language Resources and Evaluation (LREC) | 5
  Workshop on Gender Bias in Natural Language Processing (GeBNLP) | 4
  Conference on Fairness, Accountability and Transparency (FAccT) | 3
  AAAI/ACM Conference on AI, Ethics, and Society (AIES) | 3
  Workshop on Online Abuse and Harms (WOAH) | 2
  Conference on Lexical and Computational Semantics (*SEM) | 2

To conclude this section, we present a concise list of key papers, prioritizing the most influential ones, specifically those with over 50 citations (see Table 3). These highly cited works played a central role in our reading process, offering well-established insights, methodologies, and datasets in the field of Stereotype Detection, despite its relatively recent history as a self-standing research area.

Table 3. Most-cited papers with more than 50 citations.
  Reference | Title | Citations
  [Nangia et al. 2020] | CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models | 587
  [Sun et al. 2019] | Mitigating Gender Bias in Natural Language Processing: Literature Review | 513
  [Kurita et al. 2019] | Measuring Bias in Contextualized Word Representations | 421
  [Parrish et al. 2022] | BBQ: A hand-built bias benchmark for question answering | 292
  [Deshpande et al. 2023] | Toxicity in ChatGPT: Analyzing Persona-assigned Language Models | 285
  [Zmigrod et al. 2019] | Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology | 265
  [Blodgett et al. 2021] | Stereotyping Norwegian Salmon: An Inventory of Pitfalls in Fairness Benchmark Datasets | 261
  [Liang et al. 2020] | Towards Debiasing Sentence Representations | 215
  [Shaikh et al. 2023] | On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning | 152
  [Rudinger et al. 2017] | Social Bias in Elicited Natural Language Inferences | 132
  [Nozza et al. 2021] | HONEST: Measuring Hurtful Sentence Completion in Language Models | 128
  [Stańczak and Augenstein 2021] | A Survey on Gender Bias in Natural Language Processing | 98
  [Papakyriakopoulos et al. 2020] | Bias in word embeddings | 95
  [Chang et al. 2019] | Bias and Fairness in Natural Language Processing | 59
  [Levy et al. 2021] | Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation | 58
  [Cryan et al. 2020] | Detecting Gender Stereotypes: Lexicon vs. Supervised Learning Methods | 50

2.2 Extra: Projects on GitHub and HuggingFace

To get a complete overview of the existing Stereotype Detection projects currently being developed in the NLP community, we searched GitHub and HuggingFace using the terms Stereotype Detection in the platforms' search engines. These queries yielded 24 hits on GitHub and 3 hits on HuggingFace containing relevant content. The search for projects was last performed in February 2025. Most of the repositories found were created by participants of the shared task "Profiling Irony and Stereotype Spreaders on Twitter (IROSTEREO)", organized at PAN in 2022 [Ortega-Bueno et al. 2021], which focused on determining whether the author of a Twitter feed was keen on
spreading irony and stereotypes towards immigrants, women, and LGBT+ people. Similarly, other repositories belong to participants of the shared task "DETESTS-Dis", organized at IberLEF in 2024 to detect and classify explicit and implicit stereotypes in texts from social media and comments on news articles in Spanish, incorporating learning-with-disagreement techniques [Schmeisser-Nieto et al. 2024b]. The remaining repositories are dataset-related, and we will describe them in detail in a later section (see Section 3.2). In the following section (§3), starting from these foundational works, we transition into the main core of our literature review. By listing and analyzing these contributions, we aim to establish a solid foundation for future research in Stereotype Detection, highlighting existing advancements and identifying gaps and key challenges.

3 ORGANIZED OVERVIEW OF LITERATURE ON STEREOTYPES

This section forms the core of the manuscript, presenting an extensive compilation of the research work we have reviewed. Our initial selection comprised the 50 most cited papers in the field, which we then expanded to include highly interconnected works frequently referenced within our starting database. The section is conceived as a vademecum, a reference guide offering a broad yet organized overview of research conducted thus far in Stereotype Detection. While not systematic, it provides a structured classification of the key research directions and subfields in NLP related to stereotypes. In Table 4, we propose a reasoned grouping of the findings from published studies.

Table 4. Summary of the surveyed papers, divided by main categories.
  1. Theoretical: Papers focused on defining, conceptualizing, or analyzing theoretical frameworks related to stereotypes, bias, prejudice, or discrimination, as well as papers focused on operationalizing stereotypes, including taxonomies and annotation schemes for stereotype detection. These works typically provide a foundation for future empirical research or propose new perspectives. References: Cao et al. [2022]; Fiske [1993, 1998, 2024]; Fiske et al. [2002]; Fraser et al. [2024]; Kirk et al. [2022]; Koch et al. [2016, 2020]; Sánchez-Junquera et al. [2021]; Schmeisser-Nieto et al. [2022]
  2. Dataset/Corpora: Papers presenting new datasets, annotated corpora, or language resources, often designed to facilitate research on stereotypes, biases, or related topics. References: Bosco et al. [2023]; Bourgeade et al. [2023]; Cignarella et al. [2024]; Jha et al. [2023]; Levy et al. [2021]; Nadeem et al. [2021]; Nangia et al. [2020]; Névéol et al. [2022]; Nozza et al. [2021]; Ortega-Bueno et al. [2021]; Parrish et al. [2022]; Sánchez-Junquera et al. [2021]; Sanguinetti et al. [2020]; Schmeisser-Nieto et al. [2024a,b]
  3. Data Analysis: Papers applying methods to specific use cases or providing real-world analysis of stereotypes and biases; papers analyzing specific datasets to uncover trends, patterns, or the prevalence of stereotypes, biases, or abusive content. These studies often provide insights into real-world phenomena through data-driven methods. References: Deshpande et al. [2023]; Fraser et al. [2024, 2021]; Locatelli et al. [2023]; Mendelsohn et al. [2020]; Rudinger et al. [2017]
  4. Debiasing/Mitigation: Papers proposing methods, techniques, algorithms, or frameworks to mitigate or reduce bias and stereotypes in data, language models, or outputs. These studies typically evaluate the effectiveness of such approaches. References: Birhane et al. [2023]; Blodgett et al. [2022, 2021]; Cao et al. [2022]; Chang et al. [2019]; Davani et al. [2023]; Kurita et al. [2019]; Liang et al. [2020]; Nadeem et al. [2021]; Nozza et al. [2021]; Pujari et al. [2022]; Ranaldi et al. [2024]; Shaikh et al. [2023]; Shen et al. [2021]; Sun et al. [2019]; Ungless et al. [2022]; Vargas et al. [2023]; Zhou et al. [2023]; Zmigrod et al. [2019]
  5. Stereotype Detection (Classification Task): Papers focused on developing or utilizing computational methods for detecting stereotypes and/or biases, including papers proposing or testing models and algorithms in classification tasks. References: Bosco et al. [2023]; Bourgeade et al. [2023]; Cignarella et al. [2024]; Davani et al. [2023]; Pujari et al. [2022]; Sánchez-Junquera et al. [2021]; Sanguinetti et al. [2020]; Shen et al. [2021]; Vargas et al. [2023]
  6. Other (Shared Tasks, Projects, Tutorials, Position Papers): Papers that do not fit neatly into the other categories, such as descriptions of large-scale projects, overviews of shared tasks, or position papers discussing challenges in the field. References: Bhatt et al. [2022]; Blodgett et al. [2021]; Chang et al. [2019]; D'Errico et al. [2024]; Fortuna and Nunes [2018]; Ortega-Bueno et al. [2021]; Sanguinetti et al. [2020]; Schmeisser-Nieto et al. [2024b]; Schütz et al. [2021]; Stańczak and Augenstein [2021]; Sun et al. [2019]

The organization in Table 4 includes theoretical contributions defining and conceptualizing stereotypes, dataset creation for stereotype research, data analysis for real-world bias identification, debiasing and mitigation strategies, stereotype detection through computational methods, and miscellaneous contributions such as shared tasks, tutorials, and position papers. Each category highlights key studies that contribute to understanding and addressing biases in NLP. The following subsections will delve deeper into each category, outlining their methodologies, findings, and implications for bias research.
The primary goal in this section is to list prominent work rather than analyze it; we follow up with in-depth commentary and discussion in Section 4.

3.1 Theoretical Work

Theoretical studies and frameworks are key in the field of stereotypes, as they provide structured lenses to understand, identify, and analyze stereotypes within social, cultural, and linguistic contexts. Table 5 shows an overview of the 12 papers that we have identified which focus on defining, conceptualizing, or analyzing theoretical frameworks related to stereotypes, bias, prejudice, or discrimination.

Table 5. Theoretical work and papers presenting taxonomies or annotation schemes.
  Reference | Overall Description | Phenomenon
  [Fiske 1993] | Deals with how individuals in positions of power are more likely to rely on stereotypes when evaluating others, as power influences cognitive processing and reduces the complexity of social judgments. | (Gender) stereotypes.
  [Fiske 1998] | Presents how stereotypes are formed and how they shape biased attitudes and behaviors. | Social stereotypes, prejudice, and discrimination across various social groups.
  [Fiske et al. 2002] | Presents the Stereotype Content Model (SCM), which suggests that stereotypes are shaped by two primary dimensions: competence and warmth. | Stereotypes across different social groups.
  [Koch et al. 2016] | Extends stereotype content research by introducing the ABC model, which categorizes stereotypes along three dimensions: Agency (socioeconomic success), Beliefs (conservative-progressive ideology), and Communion (warmth and trustworthiness). | Stereotypes across different social groups, such as race, gender, and political affiliation.
  [Koch et al. 2020] | Observes how perceptions of group warmth vary across individuals, highlighting that warmth judgments are shaped by personal experiences and social context rather than being universally consistent. | Various social groups (e.g., based on race, gender, or status).
  [Prabhumoye et al. 2021] | Explores the application of deontological ethics (namely, the generalization principle and respect for autonomy via informed consent) within the field of NLP. | Various social groups.
  [Sánchez-Junquera et al. 2021] | Introduces a detailed taxonomy for classifying stereotypes about immigrants. It categorizes biases into key themes such as economic threats, cultural differences, criminalization, and victimization, helping to systematically analyze how immigrants are portrayed in social media discourse. | Stereotypes towards immigrants.
  [Cao et al. 2022] | Presents the sensitivity test (SeT) for measuring stereotypical associations from language models. | Stereotypes across various social groups.
  [Kirk et al. 2022] | Proposes an analytical framework categorising harms on three axes: (1) the harm type; (2) whether a harm is sought as a feature of the research design; and (3) who it affects. | Marginalized groups.
  [Nicolas et al. 2022] | Proposes the spontaneous stereotype content model (SSCM) as an initial and comprehensive descriptive model, with the aim to understand the structure, properties, and predictive value of spontaneous stereotypes. | Stereotypes across various social groups.
  [Schmeisser-Nieto et al. 2022] | Provides an operationalised definition for the annotation of new corpora by characterising the different forms in which stereotypes appear. | Implicit stereotypes (racial/ethnic).
  [Fiske 2024] | This web contribution aims to deepen the understanding of biases in contemporary society by distinguishing between prejudice, stereotypes, and discrimination, while exploring the differences between blatant and subtle forms of bias. | Prejudice, stereotype, discrimination, bias.
  [Fraser et al. 2024] | Analyzes different data sources and different data collection methods regarding the study of stereotypes. | Stereotypes (stereotypical beliefs common in society).

Early work on the field of stereotypes explored the cognitive processes behind prejudice and discrimination, and highlighted their impact on social interactions and group dynamics [Fiske 1998]. One key area of interest was the reciprocal relationship between power and stereotyping [Fiske 1993]. Fiske [1993] argued that those in power tend to pay less attention to others, making them more prone to relying on stereotypes. Several papers also proposed theoretical models to understand stereotypes, their structure, and their social implications. Fiske et al. [2002] introduced the Stereotype Content Model (SCM), a model according to which stereotypes are systematically structured along two primary dimensions: warmth (how friendly or trustworthy a group is perceived to be) and competence (how capable a group is perceived to be). These perceptions are shaped by status and competition, meaning groups with high status are seen as competent, while those perceived as competitors are seen as cold. Expanding this model, Koch et al. [2016] proposed adding a third dimension, agency (power or dominance), arguing that stereotypes are also linked to socioeconomic success and political ideology. Similarly, Koch et al. [2020] explored how warmth perceptions vary depending on individual experiences and social consensus, challenging the idea that warmth judgments
are universal. Another theoretical contribution was made by Nicolas et al. [2022], who explored how stereotypes form in different contexts, developing a taxonomy of stereotypes and demonstrating that they can emerge in various social and cultural settings, rather than being fixed. Several studies expand beyond psychological models to examine stereotypes in the context of language and computational systems. For example, Cao et al. [2022] and Fraser et al. [2024] analyze how AI systems encode, reproduce, and even amplify social stereotypes. The intersection of stereotypes and NLP is also explored by Kirk et al. [2022] and Prabhumoye et al. [2021], who emphasize the ethical implications of stereotype propagation in AI systems. Prabhumoye et al. [2021] argue that bias in AI is not merely a technical issue but a moral responsibility, requiring NLP researchers to adopt ethical guidelines to minimize harm. Additionally, Sánchez-Junquera et al. [2021] introduced the StereoImmigrants dataset, along with a taxonomy for classifying stereotypes about immigrants, enabling a structured approach to identifying and analyzing biases in discourse.

3.2 Dataset and Corpora

Table 6 shows the papers that have proposed datasets containing stereotype as one of the annotated dimensions. The developed datasets employ a variety of data types, including social media data, curated sentence pairs, synthetic data, and multilingual corpora, and aim to address different aspects of bias detection. All the datasets are ordered chronologically according to their release date, i.e., the publication that describes them. A predominant feature among the datasets presented in the table is the prevalence of English-centric resources, such as CrowS-Pairs [Nangia et al. 2020], StereoSet [Nadeem et al. 2021], BBQ [Parrish et al. 2022], SeeGULL [Jha et al. 2023], BUG [Levy et al. 2021], and IROSTEREO [Ortega-Bueno et al. 2021].
While the focus on English is expected due to the availability of large-scale corpora, efforts to expand bias detection to other languages are evident. Datasets such as HONEST [Nozza et al. 2021] span multiple Romance languages, while specific resources like FB-Stereotypes [Bosco et al. 2023], MRHC [Bourgeade et al. 2023], Queereotypes [Cignarella et al. 2024], and StereoImmigrants [Sánchez-Junquera et al. 2021] introduce bias-related corpora in Italian, Spanish, and French. The datasets listed vary significantly in size, ranging from a few hundred manually created instances (e.g., HONEST, with 420 instances per language) to large-scale corpora containing tens of thousands of sentences (e.g., BBQ, with 58,492 examples, and BUG, with 108K sentences). Medium-sized datasets, such as Queereotypes (6,315 texts), FB-Stereotypes (2,990 Facebook posts), and StereoImmigrants (3,635 manually annotated sentences), provide focused yet substantial corpora for analysis. The multilingual datasets, such as MRHC and StereoHoax, contain little more than 15,000 instances each, ensuring coverage across different languages. In terms of the linguistic phenomena captured, most datasets focus on stereotypes, as seen in CrowS-Pairs, SeeGULL, HONEST, and StereoHoax [Schmeisser-Nieto et al. 2024a], with others addressing broader biases, such as BBQ. Additional considerations include irony and stereotype spreading in IROSTEREO and proself versus prosocial discourse in FB-Stereotypes. Several datasets, such as MRHC and StereoHoax, emphasize implicitness and contextuality, underscoring the challenge
of detecting biases that are not overtly expressed. The intersection of bias detection with hate speech, offensiveness, and stance analysis is also evident in datasets like Queereotypes, HaSpeeDe2, and StereoImmigrants. The datasets focus on various biases, stereotypes, and discriminatory content across multiple domains, including gender, race, profession, religion, immigration, and LGBTQIA+ identities. Some datasets, such as CrowS-Pairs and StereoSet, specifically target stereotypical associations, while others, like HaSpeeDe2, Queereotypes, and StereoImmigrants, also analyze hate speech, offensiveness, and stance. The StereoImmigrants dataset is particularly notable for its structured taxonomy of stereotypes about immigrants, covering themes such as criminality, economic impact, and cultural integration. Additionally, datasets like IROSTEREO and MRHC examine the role of irony, implicitness, and contextuality in spreading stereotypes. The multilingual nature of several corpora, including HONEST, StereoHoax, and StereoImmigrants, enables cross-linguistic comparisons of bias. Further details on the specific characteristics of each dataset can be found in Appendix B. In this section, we provided a qualitative and quantitative overview of the datasets, while in Section 4, we adopt a more critical perspective to explore their characteristics and areas for further development.

Table 6. Papers presenting datasets, corpora, and resources.
  Reference | Language | Size/Source | Phenomena | Target/Characteristics
  [Nangia et al. 2020] CrowS-Pairs | English | 1,508 minimally distant sentence pairs | Stereotypes | See Appendix B
  [Sanguinetti et al. 2020] HaSpeeDe2 | Italian | 8,602 tweets and news | Hate speech, stereotype, and nominal utterances | Muslims, Roma, and immigrants
  [Levy et al. 2021] BUG | English | 108K sentences sampled from Wikipedia, PubMed abstracts, and Covid19 research papers | Gender bias | Gender
  [Nadeem et al. 2021] StereoSet | English | Sentence triplets: 1) stereotypical, 2) anti-stereotypical, 3) unrelated association | Stereotypes | Gender, profession, race, and religion
  [Nozza et al. 2021] HONEST | English, Italian, French, Portuguese, Romanian, Spanish | 420 manually-created sentences for each language | Stereotypes | Gender
  [Ortega-Bueno et al. 2021] IROSTEREO | English | Posts from 600 Twitter users | Irony and stereotype spreaders | See Appendix B
  [Sánchez-Junquera et al. 2021] StereoImmigrants | Spanish | 3,635 sentences extracted from transcriptions of Spanish parliament speeches | Stereotypes | Immigrants
  [Névéol et al. 2022] French CrowS-Pairs | French | 1,677 sentence pairs (1,467 translated from English CrowS-Pairs + 210 new in French + back-translated to English) | Stereotypes | See Appendix B
  [Parrish et al. 2022] BBQ | English | 58,492 unique examples of sentences and paragraphs | Biases | See Appendix B
  [Bosco et al. 2023] FB-Stereotypes | Italian | 2,990 Facebook posts | Proself/prosocial, hate speech, stereotype, prejudice, discredit | Immigrants
  [Bourgeade et al. 2023] Multilingual Racial Hoaxes Corpus (MRHC) | Italian, Spanish, French | 16,906 tweets | Racial stereotypes, contextuality, implicitness, forms of discredit | Race
  [Jha et al. 2023] SeeGULL | English | 7,750 (identity + attribute) pairs | Stereotypes | Geographical identity
  [Cignarella et al. 2024] Queereotypes | Italian | 3,427 tweets and 2,888 Facebook status + comment pairs | Hate speech, aggressiveness, offensiveness, irony, stereotype, and stance | LGBTQIA+
  [Schmeisser-Nieto et al. 2024a] StereoHoax | Italian, Spanish, French | 17,814 tweets | Stereotype, implicitness, contextuality, discredit | Immigrants
  [Schmeisser-Nieto et al. 2024b] DETESTS-Dis | Spanish | 12,111 comment sentences and tweets | Stereotype, implicitness | Race

3.3 Computational Analyses of Stereotypes

The studies listed in Table 7 all present some kind of analysis of stereotypes by using computational techniques.
Table 7. Papers presenting computational analyses of stereotypes.
  Reference | Language | Goal | Phenomenon
  [Rudinger et al. 2017] | English | Investigate and quantify the social biases that emerge in natural language inference data as a result of the human elicitation process. | Social bias across various targets.
  [Mendelsohn et al. 2020] | English | Propose a systematic, computational framework that allows researchers to analyze dehumanizing language. | Dehumanization (with a specific focus on LGBTQIA+ people).
  [Fraser et al. 2021] | English (U.S. context) | Leverage the Stereotype Content Model (SCM) to develop computational methods that both detect and mitigate stereotypical language. | Stereotypes (SCM) across 79 different target groups.
  [Locatelli et al. 2023] | German, French, English, Italian, Spanish, Portuguese, and Norwegian | Investigate how homotransphobia manifests across different cultural and linguistic contexts on Twitter. | LGBTQIA+, cultural stereotypes, and homotransphobia.
  [Deshpande et al. 2023] | English | Investigate how assigning different personas to a language model (in this case, ChatGPT) influences the level and nature of the toxic language it generates. | Toxicity towards various job titles regarding gender, race, sexual orientation, country, profession, religion, name, political organization, government type.
  [Fraser et al. 2024] | English (U.S. context) | Investigate how the content of stereotypes varies depending on the data source used to derive them. | Politicians, teachers, CEOs, scientists, bankers, accountants, engineers, farmers, lawyers, and nurses.

For instance, Rudinger et al. [2017] examine biases introduced through human data elicitation in Natural Language Inference tasks, specifically analyzing the SNLI dataset [Bowman et al. 2015]. The authors demonstrate how hypotheses generated by crowdworkers can amplify stereotypical associations (e.g., gender, race, or age biases).
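One standard way to quantify such word–label associations is pointwise mutual information, PMI(w, l) = log2( p(w, l) / (p(w) p(l)) ). The document-level estimator below is an illustrative sketch, not the paper's exact computation:

```python
from collections import Counter
from math import log2

def pmi_scores(docs: list[tuple[list[str], str]]) -> dict[tuple[str, str], float]:
    """PMI(word, label) over (tokens, label) documents.

    Probabilities are estimated at the document level: a word counts at
    most once per document. Positive PMI means the word co-occurs with
    the label more often than independence would predict.
    """
    n = len(docs)
    word_c: Counter = Counter()
    label_c: Counter = Counter()
    joint_c: Counter = Counter()
    for tokens, label in docs:
        label_c[label] += 1
        for w in set(tokens):  # de-duplicate within a document
            word_c[w] += 1
            joint_c[(w, label)] += 1
    return {(w, l): log2((joint_c[(w, l)] / n) /
                         ((word_c[w] / n) * (label_c[l] / n)))
            for (w, l) in joint_c}
```

A word appearing exclusively in documents with one label gets a positive score for that label, while a word distributed independently of the labels scores near zero.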
They employ both statistical measures, such as pointwise mutual information (PMI), and qualitative analyses to illustrate how biases emerge and influence NLP models. Fraser et al. [2021], in contrast, apply the Stereotype Content Model to computationally detect and mitigate stereotypes across 79 target groups. Their approach seeks to quantify these associations and develop methods for countering their negative impact in NLP applications. Other studies extend beyond the computational analysis of stereotypes into specific manifestations of harmful language. Mendelsohn et al. [2020] bridge social psychology theories with computational linguistic methods to analyze dehumanizing language (particularly targeting the LGBTQIA+ community). The authors propose a systematic framework that operationalizes dehumanization, identifying, quantifying, and interpreting linguistic markers of dehumanizing discourse. Similarly, Locatelli et al. [2023] investigate homotransphobia across multiple languages and cultural contexts, analyzing queer-related discourse on Twitter/X in seven different languages. In addition to assessing the prevalence of homotransphobic content, the authors introduce a taxonomy to classify public discourse surrounding LGBTQ+ issues, providing insights to inform hate speech detection strategies. On the theoretical aspects, see also Section 3.1. A recent paper by Fraser et al. [2024] investigated how the content of stereotypes varies depending on the data source used to derive them. In essence, the paper compares stereotype content extracted from different corpora (spontaneous adjectives [Nicolas et al. 2022], the crowd-sourced StereoSet [Nadeem et al. 2021], ChatGPT, Twitter/X), analyzing whether and how patterns of associations shift across sources. This
https://arxiv.org/abs/2505.17642v1
work sheds some light on the importance of data selection for our understanding of societal stereotypes, emphasizing the need for context-aware models that account for data-source variations in stereotype manifestations.
Manuscript submitted to ACM 16 Cignarella et al.
Another work that includes ChatGPT for studying bias in AI-generated text is that of Deshpande et al. [2023], in which the authors examine how attributing specific personality traits to ChatGPT can influence the level of toxicity in its responses. Their study quantifies how different persona assignments vary the model's language outputs, providing insights that could inform mitigation strategies for harmful language in AI-driven systems.
3.4 Stereotype Mitigation and Debiasing
The papers regarding the mitigation of stereotypes and biases constitute the broadest category in our literature review. In Table 8 we enumerate all those that belong to this category, along with the languages included in the study and the framing of the specific phenomenon. Papers are listed chronologically and then alphabetically.
Reference | Language | Phenomenon
Chang et al. [2019] | English | Gender bias, racial bias, and social biases in NLP models.
Kurita et al. [2019] | English | Bias in embeddings (mostly occupational).
Sun et al. [2019] | English | Gender bias in NLP models and word embeddings.
Zmigrod et al. [2019] | Spanish, Hebrew, French, Italian | Gender bias / stereotyping in languages with rich morphology.
Liang et al. [2020] | English | Sentence-level social biases (gender, race, and religion).
Blodgett et al. [2021] | English | Stereotyping and social biases in NLP benchmark datasets.
Nadeem et al. [2021] | English | Stereotypical biases (gender, profession, race, religion).
Nozza et al. [2021] | English, Italian, Portuguese, French, Romanian, Spanish | Hurtful stereotypes and biases related to gender roles.
Shen et al. [2021] | English | Biases in tasks such as sentiment analysis, hate speech detection, profession classification, activity recognition.
Blodgett et al. [2022] | Language agnostic | Fairness, transparency, justice, and ethics of computational systems.
Cao et al. [2022] | English (U.S.) | Stereotypical associations related to social groups and intersectional identities.
Fraser et al. [2022] | English | Age-related stereotypes and over-generalizations.
Pujari et al. [2022] | English | Stereotype (explicit, implicit, and non-stereotype).
Ungless et al. [2022] | English | Stereotypical biases, "cold" and "incompetent" traits.
Birhane et al. [2023] | English | Hateful content and societal biases.
Davani et al. [2023] | English | Social stereotypes and biases in hate speech annotation and classification processes.
Shaikh et al. [2023] | English | Social biases and toxic content.
Vargas et al. [2023] | Portuguese, English | Social stereotypes and biases in hate speech detection systems.
Zhou et al. [2023] | English | Demographic biases and social stereotypes.
Ranaldi et al. [2024] | English | Biases related to gender, race, religion, and profession.
Table 8. Papers presenting approaches and techniques for stereotype mitigation and for debiasing language models.
Among the work we have reviewed, Chang et al. [2019] provide a comprehensive survey of bias and fairness in NLP, examining historical contexts, bias quantification methodologies, and mitigation strategies. Their study spans various NLP applications, including word embeddings, coreference resolution, machine translation, and vision-and-language tasks. Similarly, Sun et al. [2019] analyze gender bias in NLP, categorizing mitigation strategies into pre-processing, in-processing, and post-processing methods, while also highlighting open challenges. As seen in Section 3.2, datasets play a crucial
role in measuring and evaluating biases in NLP models. For instance, Nadeem et al. [2021] introduce StereoSet, a dataset specifically designed to assess stereotypical biases in pretrained language models across gender, profession, race, and religion using the Context Association Test.
Manuscript submitted to ACM Stereotype Detection in Natural Language Processing 17
These evaluations underscore the limitations of current datasets in capturing and quantifying bias effectively, necessitating more refined methodologies for stereotype assessment. However, dataset creation often poses its own criticalities. Starting from this point of view, Blodgett et al. [2021] audit four widely used benchmark datasets (StereoSet, CrowS-Pairs, WinoBias, and Winogender), identifying key pitfalls such as ambiguous definitions and assumptions that undermine their effectiveness in quantifying stereotyping. In a similar direction, several other studies have focused on the quantification of bias in NLP models. Among these, Kurita et al. [2019] propose a template-based approach to assess bias in BERT, particularly in occupational contexts, demonstrating how biases embedded in pre-trained language models can influence downstream tasks. Liang et al. [2020] introduce a post-hoc method aimed at reducing social biases in sentence representations, targeting gender, race, and religious biases, while Cao et al. [2022] analyze group-trait associations in language models, employing a sensitivity test to compare model outputs against human judgments. Finally, Nozza et al. [2021] propose HONEST, a systematic evaluation framework that utilizes lexicon-based methodologies for recognizing stereotypes in language model completions across multiple languages, with a particular emphasis on gender-specific biases. Among the many relevant research papers, we have selected works that present actual computational methodologies for bias mitigation in NLP models.
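The core idea behind template-based probing of the kind Kurita et al. describe can be reduced to comparing a model's probability for a target word with and without the attribute present. The sketch below is a simplified illustration: the probability values are hypothetical stand-ins, not actual BERT outputs, and the function is an assumption of ours rather than the authors' exact implementation:

```python
import math

def bias_score(p_target_given_attribute, p_target_prior):
    """Increase in log-association when the attribute is present.

    p_target_given_attribute: model probability of the target word (e.g. "he")
        in a template mentioning the attribute (e.g. "[MASK] is a programmer").
    p_target_prior: probability of the same target with the attribute masked too.
    Positive values mean the attribute pushes the model toward the target.
    """
    return math.log(p_target_given_attribute) - math.log(p_target_prior)

# Hypothetical probabilities a masked language model might assign.
p_he_programmer, p_he_prior = 0.60, 0.40
p_she_programmer, p_she_prior = 0.10, 0.35

score_he = bias_score(p_he_programmer, p_he_prior)
score_she = bias_score(p_she_programmer, p_she_prior)
# A large gap between the two scores would signal an occupational gender bias.
print(f"he: {score_he:+.2f}  she: {score_she:+.2f}")
```

In practice the probabilities would come from a masked-language-model head evaluated over many templates and target pairs, with scores aggregated per attribute.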
For instance, Zmigrod et al. [2019] introduce a counterfactual data augmentation (CDA) approach to mitigate gender biases in morphologically rich languages by converting masculine-inflected sentences into their feminine counterparts and vice versa. Shen et al. [2021] propose a contrastive learning framework that encourages similarity among instances sharing the same class label while pushing apart those with different protected attributes, and Ungless et al. [2022] build on this by applying the SCM to modify contextualized word embeddings, ensuring stereotype reduction without compromising overall model performance. Some work focuses on debiasing at the fine-tuning or training level, such as Zhou et al. [2023], who propose Causal-Debias, a framework that integrates debiasing directly into the fine-tuning process of pretrained language models by isolating non-causal bias-related factors while preserving task-relevant information. Ranaldi et al. [2024] investigate biases in large language models, specifically LLaMA and OPT, and apply Low-Rank Adaptation (LoRA) to effectively reduce biases related to gender, race, religion, and profession. Pujari et al. [2022] introduce a reinforcement learning-based approach for detecting and mitigating both explicit and implicit stereotypes in language models, improving model fairness during training and inference. Other relevant work included in this section focuses on bias and its intertwined relationship with implicit hate speech. Among others, Davani et al. [2023] highlight how annotators' biases influence hate speech detection, revealing that classifiers often reinforce societal stereotypes against marginalized groups. Vargas et al. [2023] propose an analysis method to assess
bias in hate speech classification systems, particularly in English and Portuguese datasets from social media platforms. Additionally, Shaikh et al. [2023] explore the effects of zero-shot Chain of Thought (CoT) prompting in large language models, demonstrating that CoT often exacerbates bias and toxicity in model outputs. To conclude, beyond these more technical contributions, other researchers have inquired into the ethical aspects of this field of research. Blodgett et al. [2022] advocate for a human-centered approach to NLP fairness, incorporating insights from Human-Computer Interaction to mitigate potential harms in language technologies.
3.5 Stereotype Detection as a Classification Task
Table 9 presents an overview of various studies on stereotype detection across different languages, models, and methodologies. Each row represents a research paper, detailing the language(s) analyzed, the models employed, and the corresponding metrics used to assess performance.
Reference | Language | Models / Methodology | Performance Metrics
Sanguinetti et al. [2020] | Italian | BERT, AlBERTo and UmBERTo, XLM-RoBERTa, DBMDZ, Bi-LSTM, rule-based classifiers. | Best scores in the shared task: Macro F1 0.770 (Twitter); Macro F1 0.720 (news data).
Sánchez-Junquera et al. [2021] | Spanish | M-BERT, XLM-RoBERTa, BETO, SpanBERTa leveraging Pointwise Mutual Information. | M-BERT, Accuracy 0.829; XLM-RoBERTa, Accuracy 0.780; BETO, Accuracy 0.861; SpanBERTa, Accuracy 0.766.
Pujari et al. [2022] | English | Multi-task learning (MTL) model and Reinforcement Learning guided multi-task learning model (RL-MTL). | Coarse-grained and fine-grained: Baseline, F1 0.657 and 0.614; MTL, F1 0.683 and 0.650; RL-MTL, F1 0.742 and 0.679.
Bosco et al. [2023] | Italian | 10-fold cross-validation with SVM-RBF for baselines; GilBERTo with the addition of lexically relevant information. | Baseline SVM-RBF, F1 0.594; GilBERTo base, F1 0.631; GilBERTo + lexical info, F1 0.661.
Vargas et al. [2023] | Brazilian Portuguese, English | BERT and fastText combined with different feature sets based on: Social Stereotype Analysis, bag-of-words, offensive lexicon. | Best scores for each language setting: Portuguese, F1 0.860; English, F1 0.780.
Cignarella et al. [2024] | Italian | mBERT, AlBERTo in base setting and expanded setting with extra data from the Italian Hate Speech detection task. | Best scores: mBERT, F1 0.735; AlBERTo, F1 0.744.
Schmeisser-Nieto et al. [2024a] | Italian, Spanish, French | GilBERTo, BETO and CamemBERT base + second setting enhanced with contextual information. | Best scores: Italian, Macro F1 0.805; Spanish, Macro F1 0.826; French, Macro F1 0.716.
Table 9. Summary of classification experiments in selected studies.
Several studies focus on Italian, such as Sanguinetti et al. [2020], which presents the results of the shared task on Hate Speech Detection (HaSpeeDe2), where subtask B is Stereotype Detection. The participating teams employ a diverse set of classifiers, including BERT variants (AlBERTo, UmBERTo, XLM-RoBERTa), and the best scores achieved are a macro F1 of 0.770 (Twitter dataset) and 0.720 (news dataset). Also in Italian, Bosco et al. [2023] explore an SVM-RBF baseline and the GilBERTo model, incorporating lexical features to improve performance and reaching an F1 score of 0.661. Cignarella et al. [2024] further investigate Italian stereotype detection with mBERT and AlBERTo, reporting F1 scores of 0.735 and 0.744, respectively. Meanwhile, Schmeisser-Nieto et
al. [2024a] extend the analysis to Italian, Spanish, and French, using GilBERTo, BETO, and CamemBERT, with the best macro F1 scores reaching 0.805 (Italian), 0.826 (Spanish), and 0.716 (French). For Spanish, Sánchez-Junquera et al. [2021] explore several transformer models, including M-BERT, XLM-RoBERTa, BETO, and SpanBERTa, achieving the highest accuracy of 0.861 with BETO. The inclusion of BETO, a Spanish-specific model, demonstrates the importance of leveraging language-specific embeddings for improved performance. In English, Pujari et al. [2022] employ multi-task learning (MTL) and reinforcement learning-enhanced MTL (RL-MTL), showing a progressive improvement in F1 scores from a baseline (0.657 in the coarse-grained and 0.614 in the fine-grained setting) to RL-MTL (0.742 and 0.679, respectively). Vargas et al. [2023] explore Brazilian Portuguese and English, integrating BERT and fastText models with feature sets based on a new methodology named Social Stereotype Analysis, bag-of-words features, and offensive lexicons. The best F1 scores are 0.860 for Portuguese and 0.780 for English.
3.6 Miscellaneous Work on Stereotype Detection
Beyond the primary categories of theoretical works, datasets and corpora, analyses, methodologies for mitigation and debiasing, and classification frameworks, several works contribute to the broader discourse on stereotype detection by presenting large-scale projects, shared task overviews, tutorials, and position papers. These studies often serve as foundational discussions on challenges in bias detection, ethical concerns, and emerging trends in the field. For instance, Bhatt et al. [2022] and Fortuna and Nunes [2018] provide comprehensive overviews of fairness and hate speech detection in NLP, while Sun et al. [2019] and Stańczak and Augenstein [2021] focus specifically on gender bias and strategies for its mitigation.
Shared task descriptions, such as Sanguinetti et al. [2020] and Schmeisser-Nieto et al. [2024b], outline benchmark competitions designed to advance hate speech and stereotype identification across multiple domains. Similarly, projects like D'Errico et al. [2024] and Ortega-Bueno et al. [2021] highlight interdisciplinary approaches to stereotype detection, incorporating insights from linguistics, psychology, and computational ethics. Position papers such as Blodgett et al. [2021] and Chang et al. [2019] critically examine the limitations of existing work on stereotypes and biases, covering datasets and models, and emphasize the need for improved annotation frameworks and fairer evaluation metrics. In this section, we provided a qualitative overview of the papers reviewed for our literature analysis. To facilitate clarity and accessibility, we have structured key information into tables, enabling readers to efficiently reference aspects such as the most prevalent theoretical frameworks, the scope of prior research, the languages of the studies, the models employed, and the corresponding results. In the following section (§4), we will examine the challenges that emerged from our analysis. Our findings suggest that this field presents significant opportunities for further development, and we will outline key areas where advancements are needed.
4 CHALLENGES
Despite significant advancements in the field of Stereotype Detection within Natural Language Processing, several challenges still hinder the development of fairer and less biased models. In this section, we present the key points we discovered during our literature review and briefly discuss them.
4.1 Consensus on
Definitions and Theoretical Frameworks
Throughout our survey of more than 50 papers on stereotype detection in NLP, we observed a general consensus on the definition of stereotypes. Most studies align with the foundational work of social psychologists such as Fiske et al. [2002] and Koch et al. [2016], whose frameworks dominate the conceptual landscape. However, other contributions, such as the work of Abele and Wojciszke [2007], remain largely unreferenced. Despite this apparent agreement on what constitutes a stereotype, we found a notable lack of clarity when it comes to distinguishing stereotypes from related concepts such as bias and prejudice. Most of the studies we reviewed tend to use these terms interchangeably, with no explicit attempt to delineate their boundaries. This fuzziness is likely due to the inherently subjective nature of stereotypes, which depend on socio-cultural context and evolve over time. As a result, while theoretical definitions exist, operationalizing them in computational terms remains a significant challenge. We also observed that theoretical studies on stereotypes often avoid focusing on specific target groups, aiming instead at a broader range of attributes. This contrasts with the rest of the papers we reviewed, which often frame their contributions as addressing stereotypes in a general sense but later focus on highly specific instances, often restricted to particular datasets, a specific language, or a very limited number of targets.
4.2 Attribute Selection
Throughout our survey, we noticed that most computational methods rely on predefined attributes to operationalize stereotype detection. This targeted approach allows for more precise identification of stereotypes but comes at the cost of reduced generalizability, as models tend to be optimized for specific biases rather than addressing stereotypes in a broader, more inclusive manner.
Our literature review reveals that a significant portion of NLP research on stereotype detection centers around gender and racial stereotypes, with gender often analyzed in conjunction with occupational biases. However, as also noted by Stańczak and Augenstein [2021] and by Sun et al. [2019] in their surveys, most studies still adhere to a binary representation of gender (male/female) and fail to represent non-binary identities, with the notable exceptions of Cao et al. [2022], Deshpande et al. [2023], and Jha et al. [2023]. Other important social attributes, such as age and sexual orientation, receive little to no attention in the literature, with few exceptions [Fraser et al. 2022]. Moreover, the concept of intersectionality [Crenshaw 1989], which refers to the interconnected nature of social categorizations such as race, gender, and class in creating overlapping systems of discrimination or disadvantage, still remains largely unexplored, with only a few notable examples [Lalor et al. 2022; Subramanian et al. 2021; Tan and Celis 2019].
4.3 Datasets
Data collection and data quality are key factors for developing robust computational methods for stereotype detection. In total, we identified 15 papers that have developed a dataset for the task of stereotype detection. Some of those studies focus only on stereotypes [Nadeem et al. 2021; Nangia et al. 2020], while others address this
problem as part of a broader task, such as hate speech detection [Cignarella et al. 2024; Sanguinetti et al. 2020]. The datasets vary in terms of the source/type of textual data, size, language, phenomenon, etc.
• Source / Type. The choice of data type is a very important factor in effectively measuring and mitigating stereotypes in NLP models. The datasets that we have found contain text from social media, curated text, and synthetic data. In particular, 8 of the 15 proposed datasets rely on text from social media (e.g., tweets, Facebook posts, or Reddit). As this type of data comes from the real world, it is often unstructured and context-dependent, containing noisy and informal language. Another limitation of using content from social media is that the data may not fully represent the diversity of stereotypes present in broader societal discourse, as platforms have different user demographics and content moderation policies. It could be interesting to incorporate a wider range of digital platforms, such as YouTube or TikTok, to capture a more comprehensive spectrum of biased discourse. Other researchers use sentence pairs to measure stereotypes in language models [Nadeem et al. 2021; Nangia et al. 2020]. Jha et al. [2023] leverage generative models to create synthetic examples of stereotypes. Although this is an efficient way to create datasets, synthetic data can introduce challenges in ensuring data quality.
• Phenomenon / Target. The analysis of the phenomena modeled reveals a strong emphasis on hate speech, which appears most frequently among the surveyed papers. This suggests that a significant portion of research in this domain focuses on detecting and mitigating harmful language directed at individuals or groups.
The presence of stereotypes, bias, and prejudice as recurring topics further highlights the growing interest in understanding how language and discourse contribute to societal inequalities and discrimination. The intersection of these phenomena indicates that many studies are not solely focused on a single aspect of offensive speech but instead explore multiple overlapping issues, such as how bias in datasets can contribute to the propagation of stereotypical narratives. Another noteworthy pattern is the variety of categorization approaches used by different papers. While some explicitly label their research under broad terms like hate speech or stereotype, others employ more nuanced descriptions, such as homotransphobia, aggressiveness, and implicit bias. This diversity in terminology underscores the complexity of the field and the challenge of establishing universally accepted definitions. The presence of multi-label classifications in several entries suggests an awareness of the interconnected nature of these linguistic phenomena, reinforcing the idea that addressing only one aspect might not be sufficient (see also Section 4.2). A closer examination of target groups reveals a strong focus on gender biases, as evidenced in BUG [Levy et al. 2021], HONEST [Nozza et al. 2021], and StereoSet [Nadeem et al. 2021]. Racial biases are addressed in datasets such as MRHC [Bourgeade et al. 2023] and DETESTS-Dis [Schmeisser-Nieto et al. 2024b], while biases against immigrants appear in FB-Stereotypes [Bosco et al. 2023], StereoHoax [Schmeisser-Nieto et al. 2024a], and
HaSpeeDe2 [Sanguinetti et al. 2020]. The inclusion of datasets dedicated to the LGBTQIA+ community, such as Queereotypes [Cignarella et al. 2024], demonstrates a growing awareness of biases affecting marginalized social groups beyond traditional categories like gender and race.
• Language and geographical focus. We observe that 6 of the 15 datasets contain only English content, and 3 of the 15 contain content in more than one language. The overview suggests a strong emphasis on European languages, with English, Italian, Spanish, French, and Portuguese appearing frequently. This suggests that much of the research in this area focuses on linguistic contexts that are well-resourced and commonly studied in NLP. The dominance of English, often with regional specifications such as English (India) and English (U.S.), indicates some consideration of geographical and cultural variations in language use. Some datasets incorporate multiple languages, with mentions of gender-inflected languages like Italian, Romanian, and Spanish, suggesting a focus on linguistic structures that encode gender. Additionally, there are references to Hebrew (and very few other non-European languages). One dataset, SeeGULL [Jha et al. 2023], explicitly states that it includes stereotypes from 178 countries across multiple geopolitical regions, reflecting a more global perspective compared to other datasets that primarily focus on Western languages.
• Size. We observe that the datasets we encountered in our literature review vary considerably in size. Some contain only a few thousand annotated instances, such as FB-Stereotypes with 2,990 instances, while others, like that of Fraser et al. [2022], comprise hundreds of thousands of entries, with 300,000 tweets. This discrepancy reflects differences in data collection strategies, annotation efforts, and the computational resources available to different research groups.
Larger datasets, such as HaSpeeDe2 and SeeGULL, aim for broad coverage and diverse representation, often leveraging automated collection and annotation techniques. However, they may suffer from issues related to annotation quality and data noise. In contrast, smaller datasets, often manually curated, provide precise and high-quality annotations but may lack the scalability needed for robust model generalization. This variability in dataset size not only affects model training and evaluation but also highlights the challenge of balancing dataset breadth against annotation quality.
• Annotation. The literature reveals a diverse range of annotation approaches, reflecting different levels of expertise and methodology across the resources. Some datasets were annotated by in-house experts or researchers, ensuring a higher degree of linguistic or domain-specific knowledge in the labeling process. For example, one dataset was annotated by a mix of computational linguists, social psychologists, and linguistics students, which suggests a multidisciplinary approach to ensuring both technical and social science perspectives in the data labeling. Other datasets relied on crowdsourcing platforms such as Amazon Mechanical Turk, Figure8, Appen, and Prolific, where workers were selected based on specific criteria, such as geographic location and task approval rates. This approach enables large-scale data annotation but may introduce quality control challenges, as crowdworkers typically lack specialized expertise. Additionally, some datasets were annotated using a semi-supervised approach, which combines human annotation with automated methods, striking a balance between efficiency and human
judgment. Notably, there are datasets with no explicit annotation, or cases where annotation details are unclear or not specified. One dataset creation effort involved 89 annotators from 16 different countries, demonstrating a commitment to linguistic and regional diversity [Jha et al. 2023]. Overall, annotation strategies vary widely, from expert-driven methods ensuring quality to crowd-sourced efforts enabling scalability, highlighting the trade-offs between precision, cost, and efficiency in dataset creation.
• Agreement. Regarding the measurement of inter-annotator agreement (IAA), we encountered a wide range of approaches, mostly dependent on the size of the corpus, the type of data, and the annotator pool. Some datasets rigorously assessed annotator consistency, while others either do not mention it or explicitly state that IAA was not calculated. The use of various statistical methods such as Fleiss' Kappa, Cohen's Kappa, and Krippendorff's Alpha suggests that, for many datasets, ensuring annotation reliability was a priority. Some datasets report detailed IAA scores across multiple annotation categories (e.g., hate speech, offensiveness, irony, stereotypes, stance), indicating a more granular evaluation of agreement levels. However, a significant number of datasets either do not calculate agreement or do not provide sufficient details. Additionally, some datasets mention specific cases where lower agreement was observed, particularly for neutral contexts, implying that subjectivity in annotation can be a challenge, especially for nuanced linguistic phenomena.
• Availability. We encountered a mixed landscape of dataset accessibility, indicating that while some datasets are openly shared, others have restricted or unclear availability. Eight of the 15 datasets are explicitly marked as publicly accessible, often via a direct URL linking to repositories such as GitHub, OSF, or HuggingFace.
However, there are 3 cases where the dataset is not available (e.g., a broken link or an empty repository) and 4 cases where it is shared only upon specific request to the authors, which can introduce barriers to accessibility and slow down research progress. Overall, while a good number of datasets are openly available, there is still inconsistency in accessibility across studies. Encouraging standardized data-sharing policies and ensuring proper dataset hosting would help improve the reproducibility and usability of research in this field.
4.4 Languages
As previously noted in the discussion on datasets (Section 4.3), the overall trend in the literature reveals a strong predominance of English, reaffirming its position as the most extensively studied language in computational linguistics research. Many of the papers reviewed focus on English only, often with regional specifications such as English (U.S.) or English (India), suggesting that despite some awareness of national linguistic variations, the primary emphasis remains on Anglophone discourse. This dominance reflects the centrality of English in NLP advancements, but it also limits the applicability of findings to non-English contexts. Beyond English, several papers include research in Italian, Spanish, French, Portuguese, Norwegian, Romanian, and German, indicating a moderate presence of European languages in the literature. However, there is a notable absence (or very limited representation) of low-resource languages, which means that many linguistic communities, especially those in Africa, South Asia, and Indigenous groups, remain
underrepresented in stereotype and bias-related studies. While the literature includes Tamil and code-mixed Tamil-English [Chakravarthi et al. 2022], such instances remain isolated rather than part of a broader effort to engage with underrepresented linguistic groups. Regarding national perspectives, relatively few papers explicitly account for cross-cultural differences, though some studies attempt to analyze stereotypes across multiple geopolitical regions [Jha et al. 2023]. However, in most cases, multilingual research remains confined to European languages, and national perspectives outside of Western frameworks are rarely explored in depth.
4.5 Generalizability and Staticity
Many models are trained on specific datasets that reflect narrow cultural perspectives, limiting their ability to transfer effectively to diverse linguistic environments [Jha et al. 2023; Ranaldi et al. 2024]. This raises concerns about domain adaptation, as biases embedded in training data may affect how models interpret stereotypes in new settings. Another critical issue is that most models rely on static datasets, which fail to capture the evolving nature of stereotypes. Stereotypical associations shift over time, yet NLP models often struggle with temporal bias, leading to outdated predictions [Cao et al. 2022; Mendelsohn et al. 2020]. Moreover, implicit stereotypes remain difficult to detect due to their context-dependent nature, highlighting the need for adaptive learning approaches and multilingual fine-tuning. Finally, inconsistencies in annotation and stereotype categorization hinder generalization. The absence of standardized frameworks results in models that overfit to specific datasets rather than learning robust detection strategies.
4.6 Evaluation
One of the most persistent challenges in stereotype detection in NLP is the difficulty of establishing a universally applicable evaluation framework.
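Such evaluation frameworks typically reduce to metrics computed from confusion-matrix counts. As a point of reference, a minimal sketch (the counts below are hypothetical, not taken from any surveyed study):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

# Hypothetical stereotype-detection run: 80 true positives, 20 false positives,
# 10 missed stereotypes, 90 true negatives.
m = classification_metrics(tp=80, fp=20, fn=10, tn=90)
print({k: round(v, 3) for k, v in m.items()})
```

Note that a single aggregate score of this kind says nothing about how performance is distributed across target groups, which is precisely the limitation discussed in this section.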
While existing benchmarks and well-known metrics, such as F-score, accuracy, precision, and recall, provide a structured means of assessing performance, the fundamental issue lies in the subjectivity and cultural dependence of defining what constitutes a stereotype. The lack of consensus on a standardized evaluation procedure stems from the tension between broad, generalizable approaches and those tailored to specific social or cultural contexts. Addressing bias at a high level often results in overly abstract criteria, whereas focusing on specific stereotypes, such as those targeting a particular marginalized group, risks being too narrow. Similarly, multilingual evaluation efforts frequently fail to capture cultural and geographic nuances, while monolingual approaches remain constrained to particular linguistic communities. A major issue in current evaluation frameworks is the oversimplification of bias metrics. Many studies attempt to capture bias through numerical aggregation methods, where models are assigned scores that quantify their level of stereotype reinforcement or fairness. However, bias in language is highly context-dependent and multifaceted, making it problematic to reduce such complexities to single numerical values. Cao et al. [2022], for example, caution against using averaged human judgments to assess bias, as this approach often collapses diverse perspectives into a single statistic, obscuring how different social groups perceive and experience stereotypes. Without a more nuanced way to analyze bias in NLP models, current benchmarks risk misrepresenting real-world social biases and failing to capture the intersecting nature of identity-based stereotypes. Beyond dataset biases, benchmarking frameworks also struggle with real-world applicability. Many stereotype
https://arxiv.org/abs/2505.17642v1
detection models are evaluated using static datasets, which fail to capture the evolving nature of stereotypes, biases, and societal attitudes over time. For example, datasets collected several years ago may no longer reflect current linguistic trends or emerging biases, making them less effective for evaluating contemporary NLP systems. Additionally, benchmarks often focus on explicit stereotypes and overt biases, neglecting subtler, implicit biases that require more sophisticated detection techniques. Studies such as Schmeisser-Nieto et al. [2022] highlight the need for more comprehensive datasets that incorporate implicit stereotypes, multimodal biases (e.g., images, hashtags, URLs), and dynamic linguistic shifts to improve the accuracy of stereotype detection models. Another key limitation in benchmarking is the lack of consistency in how bias mitigation techniques are evaluated. Some studies propose debiasing methods that work well under specific conditions, but these approaches do not always generalize to different datasets or linguistic settings. Without a unified evaluation framework, it remains unclear whether a given mitigation strategy truly reduces bias or simply reconfigures it in a way that is less detectable under the existing benchmarks. Moreover, some bias mitigation strategies may introduce new biases or degrade model performance on other NLP tasks, which current benchmarks often fail to assess.

5 CONCLUSIONS AND FUTURE WORK

This survey has examined the landscape of Stereotype Detection in Natural Language Processing, reviewing over 50 studies that span diverse methodologies, datasets, and theoretical perspectives. Our literature review has highlighted that, while there is broad agreement on the fundamental nature of stereotypes – often referring to established psychological theories such as those by Fiske and Koch – many studies use stereotype, bias, and prejudice interchangeably, leading to conceptual ambiguity.
This lack of clear boundaries complicates the operationalization of stereotype detection in computational settings, making classification and evaluation inconsistent across studies. Despite this initial challenge, and the newness of the research field, NLP researchers have made significant efforts to advance work on stereotypes, especially in the last five years. Recent work has proposed alternative debiasing techniques, refined stereotype taxonomies, and improved annotation frameworks. Additionally, interdisciplinary efforts drawing from social psychology, linguistics, and ethics have played a crucial role in contextualizing stereotype detection within broader societal and cultural frameworks. Moreover, the growing emphasis on transparency and reproducibility from the NLP/*CL community (see, for instance, https://aclrollingreview.org/static/responsibleNLPresearch.pdf) has led to an increase in publicly available datasets, facilitating further research and model benchmarking. However, there is still a pressing need to move beyond traditional, predominantly English-centric datasets and benchmarks to create more representative and culturally aware NLP models.

5.1 Research Opportunities

From the information gathered during the literature review, we believe that future research should prioritize the following directions: Clarifying Conceptual Boundaries. One of the most pressing challenges is the need for clearer distinctions between stereotypes, biases, and prejudices. Many NLP studies conflate these terms, making it difficult to standardize annotation guidelines and evaluate models effectively. Future research should establish more precise theoretical frameworks that can be consistently operationalized in computational settings, reducing conceptual fuzziness and ambiguity. Exploring
Alternative Stereotype Models. Current stereotype detection frameworks largely rely on predefined taxonomies, but alternative models may offer deeper insights into how stereotypes manifest in language. Davani et al. [2023] propose the Agency-Beliefs-Communion (ABC) model as a more structured way to analyze stereotypes, while Névéol et al. [2022] suggest leveraging Social Frames for cross-cultural bias characterization. Fraser et al. [2021] advocate for using large-scale social media corpora to analyze how stereotypes emerge and evolve in real-world discourse. Future research should investigate these alternative models to develop more robust stereotype detection systems. Broadening the Scope of Target Attributes. Current research predominantly focuses on gender and racial stereotypes, often neglecting other important dimensions such as sexual orientation or identity, (dis)ability, and socio-economic status. Moreover, most studies treat gender as a binary construct, failing to include non-binary identities. Another significant gap is the lack of research on intersectionality [Crenshaw 1989]. Future work should expand coverage to these underrepresented attributes to provide a more comprehensive understanding of stereotypes in NLP. Enhancing Context Awareness in Models. Stereotypes are highly context-dependent, yet many existing NLP models struggle to differentiate between stereotypes that vary across cultural and situational contexts. Addressing this challenge requires models that go beyond simple classification and incorporate context-aware detection strategies, particularly for implicit stereotypes. Future research should explore ways to integrate socio-cultural awareness into NLP systems, potentially through the use of multimodal and discourse-aware models. Improving Dataset Creation and Annotation. The quality and scope of datasets remain a critical issue.
Some studies rely on small, highly specific datasets, while others use large-scale corpora with minimal human annotation. This inconsistency affects model generalizability and reliability. Future efforts should aim to develop scalable datasets that strike a balance between annotation precision and diversity, while also embracing participatory approaches that actively involve the most affected communities and ensure a more inclusive representation of diverse subjectivities (e.g., Felkner et al. [2023]; Queerinai et al. [2023]). Expanding Linguistic and Cultural Representation. The majority of stereotype detection research focuses on English and a handful of widely spoken languages, leaving low-resource languages and regional dialects underrepresented. Future studies should aim to develop multilingual datasets that account for linguistic diversity and cultural nuances. With this, we conclude our literature review on stereotype detection in NLP. While significant progress has been made, challenges remain in conceptual clarity, dataset diversity, and model generalizability. The lack of standardized evaluation frameworks and reliance on static datasets further hinder robust stereotype detection. This research field offers exciting opportunities. Expanding linguistic representation, refining context-aware models, and integrating interdisciplinary insights can enhance fairness and inclusivity in NLP. Investigating these areas will be crucial for developing socially responsible language technologies.

ACKNOWLEDGMENTS

This work is funded by the European Union under grant number HEU.MPF.2024.0007.01. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Executive Agency (REA). Neither the European Union nor the granting authority can be held responsible for them.

REFERENCES

Andrea E. Abele and Bogdan Wojciszke. 2007.
Agency and communion from the perspective of self versus others. Journal of Personality and Social Psychology 93, 5 (2007), 751–763. https://doi.org/10.1037/0022-3514.93.5.751 Martha Augoustinos and Iain Walker. 1998. The construction of stereotypes within social psychology: From social cognition to ideology. Theory & Psychology 8, 5 (1998), 629–652. Abayomi O. Ayansola. 2021. Hate speech and the Expression of Stereotypes as Products of Social Cognition in Nigeria. Critical Approaches to Discourse Analysis across Disciplines 13, 2, 1–17. https://api.semanticscholar.org/CorpusID:236536724 Shaily Bhatt, Sunipa Dev, Partha Talukdar, Shachi Dave, and Vinodkumar Prabhakaran. 2022. Re-contextualizing Fairness in NLP: The Case of India. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Yulan He, Heng Ji, Sujian Li, Yang Liu, and Chua-Hui Chang (Eds.). Association for Computational Linguistics, Online only, 727–740. https://doi.org/10.18653/v1/2022.aacl-main.55 Michał Bilewicz and Wiktor Soral. 2020. Hate speech epidemic. The dynamic effects of derogatory language on intergroup relations and political radicalization. Political Psychology 41 (2020), 3–33. Abeba Birhane, Vinay Prabhu, Sang Han, and Vishnu Naresh Boddeti. 2023. On Hate Scaling Laws For Data-Swamps. https://doi.org/10.48550/arXiv.2306.13141 arXiv:2306.13141 [cs]. Su Lin Blodgett, Q. Vera Liao, Alexandra Olteanu, Rada Mihalcea, Michael Muller, Morgan Klaus Scheuerman, Chenhao Tan, and Qian Yang. 2022. Responsible Language Technologies: Foreseeing and Mitigating Harms. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. ACM, New Orleans, LA, USA, 1–3. https://doi.org/10.1145/3491101.3516502 Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021.
Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) . 1004–1015. Cristina Bosco, Viviana Patti, Simona Frenda, Alessandra Teresa Cignarella, Marinella Paciello, and Francesca D’Errico. 2023. Detecting racial stereotypes: An Italian social media corpus where psychology meets NLP. Information Processing & Management 60, 1 (2023), 103118. Tom Bourgeade, Alessandra Teresa Cignarella, Simona Frenda, Mario Laurent, Wolfgang Schmeisser-Nieto, Farah Benamara, Cristina Bosco, Véronique Moriceau, Viviana Patti, and Mariona Taulé. 2023. A Multilingual Dataset of Racial Stereotypes in Social Media Conversational Threads. In Findings of the Association for Computational Linguistics: EACL 2023 . Association for Computational Linguistics, 686–696. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing , Lluís Màrquez, Chris Callison-Burch, and Jian Su (Eds.). Association for Computational Linguistics, Lisbon, Portugal, 632–642. https://doi.org/10.18653/v1/D15-1075 Yang Trista Cao, Anna Sotnikova, Hal Daumé III, Rachel Rudinger, and Linda Zou. 2022. Theory-Grounded Measurement of U.S. Social Stereotypes in English Language Models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , Marine Carpuat, Marie-Catherine de Marneffe, and Ivan Vladimir Meza Ruiz (Eds.). Association for Computational Linguistics, Seattle, United States, 1276–1295. 
https://doi.org/10.18653/v1/2022.naacl-main.92 Bharathi Raja Chakravarthi, Adeep Hande, Rahul Ponnusamy, Prasanna Kumar Kumaresan, and Ruba Priyadharshini. 2022. How can we detect
Homophobia and Transphobia? Experiments in a multilingual code-mixed setting for social media governance. International Journal of Information Management Data Insights 2, 2 (2022), 100119. Kai-Wei Chang, Vinodkumar Prabhakaran, and Vicente Ordonez. 2019. Bias and fairness in Natural Language Processing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts . Alessandra Teresa Cignarella, Manuela Sanguinetti, Simona Frenda, Andrea Marra, Cristina Bosco, and Valerio Basile. 2024. QUEEREOTYPES: A Multi-Source Italian Corpus of Stereotypes towards LGBTQIA+ Community Members. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) . ELRA and ICCL, Torino, Italia, 13429–13441. Kimberlé Crenshaw. 1989. Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum 1989, 1 (1989), 139–167. Jenna Cryan, Shiliang Tang, Xinyi Zhang, Miriam Metzger, Haitao Zheng, and Ben Y Zhao. 2020. Detecting gender stereotypes: Lexicon vs. supervised learning methods. In Proceedings of the 2020 CHI conference on human factors in computing systems . 1–11. Aida Mostafazadeh Davani, Mohammad Atari, Brendan Kennedy, and Morteza Dehghani. 2023. Hate Speech Classifiers Learn Normative Social Stereotypes. Transactions of the Association for Computational Linguistics 11 (2023), 300–319. Jan De Houwer. 2001. A Structural and Process Analysis of the Implicit Association Test. Journal of Experimental Social Psychology 37 (2001), 443–451. https://api.semanticscholar.org/CorpusID:15236981 Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. 2023. Toxicity in chatgpt: Analyzing persona-assigned language models. 
In Findings of the Association for Computational Linguistics: EMNLP 2023. 1236–1270. John F. Dovidio, Miles Hewstone, Peter Glick, and Victoria M. Esses. 2010. Prejudice, stereotyping and discrimination: Theoretical and empirical overview. 3–28. https://api.semanticscholar.org/CorpusID:7822089 Francesca D’Errico, Cristina Bosco, Marinella Paciello, Farah Benamara, Paolo Giovanni Cicirelli, Viviana Patti, Véronique Moriceau, and Mariona Taulé. 2024. SteRHeotypes Project. Detecting and Countering Ethnic Stereotypes emerging from Italian, Spanish and French Racial hoaxes. In Proceedings of the Seminar of the Spanish Society for Natural Language Processing: Projects and System Demonstrations (SEPLN-CEDI-PD 2024) co-located with the 7th Spanish Conference on Informatics (CEDI 2024). CEUR Workshop Proceedings (CEUR-WS.org), A Coruña, Spain, 77–81. Virginia Felkner, Ho-Chun Herbert Chang, Eugene Jang, and Jonathan May. 2023. WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 9126–9140. S. T. Fiske. 1993. Controlling other people: the impact of power on stereotyping. American Psychologist 48 (1993), 621–628. Susan T Fiske. 1998. Stereotyping, prejudice, and discrimination. In The handbook of social psychology. McGraw-Hill, 357–411. Susan T Fiske. 2000. Stereotyping, prejudice, and discrimination at the seam between the centuries: Evolution, culture, mind, and brain. European journal of social psychology 30, 3 (2000), 299–322. S. T. Fiske. 2024. Prejudice, Discrimination, and Stereotyping. NOBA Project. https://nobaproject.com/modules/prejudice-discrimination-and-stereotyping S. T. Fiske, A. J. C. Cuddy, P. Glick, and J. Xu. 2002. A model of (often mixed) stereotype content: Competence and warmth respectively
follow from perceived status and competition. Journal of Personality and Social Psychology (2002), 878–902. Paula Fortuna and Sérgio Nunes. 2018. A Survey on Automatic Detection of Hate Speech in Text. ACM Computing Surveys (CSUR) 51 (2018), 1–30. https://api.semanticscholar.org/CorpusID:52184457 Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. 2022. Extracting Age-Related Stereotypes from Social Media Texts. In Proceedings of the Thirteenth Language Resources and Evaluation Conference. European Language Resources Association, 3183–3194. Kathleen C Fraser, Svetlana Kiritchenko, and Isar Nejadgholi. 2024. How Does Stereotype Content Differ across Data Sources? In Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024). 18–34. Kathleen C. Fraser, Isar Nejadgholi, and Svetlana Kiritchenko. 2021. Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Online, 600–616. Anthony G Greenwald and Mahzarin R Banaji. 1995. Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychological review 102, 1 (1995), 4. Anthony G. Greenwald, Debbie E. McGhee, and Jordan L. K. Schwartz. 1998. Measuring individual differences in implicit cognition: the implicit association test. Journal of personality and social psychology 74, 6 (1998), 1464–1480. https://api.semanticscholar.org/CorpusID:7840819 Sally Haslanger. 2000. Feminism in metaphysics: Negotiating the natural. The Cambridge companion to feminism in philosophy (2000), 107–126. Kerri M Hoffman, Sophie Trawalter, Janna R Axt, and Melanie N Oliver. 2016. Racial bias in pain assessment and treatment recommendations, and false beliefs about racial differences in pain.
JAMA 316, 17 (2016), 1763–1771. Akshita Jha, Aida Mostafazadeh Davani, Chandan K Reddy, Shachi Dave, Vinodkumar Prabhakaran, and Sunipa Dev. 2023. SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, Toronto, Canada, 9851–9870. https://doi.org/10.18653/v1/2023.acl-long.548 Hannah Kirk, Abeba Birhane, Bertie Vidgen, and Leon Derczynski. 2022. Handling and Presenting Harmful Text in NLP Research. In Findings of the Association for Computational Linguistics: EMNLP 2022. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 497–510. https://doi.org/10.18653/v1/2022.findings-emnlp.35 Alex Koch, Roland Imhoff, Ron Dotsch, Christian Unkelbach, and Hans Alves. 2016. The ABC of stereotypes about groups: Agency/socioeconomic success, conservative-progressive beliefs, and communion. Journal of personality and social psychology 110 (2016), 675–709. https://api.semanticscholar.org/CorpusID:6287638 Alex Koch, Roland Imhoff, Christian Unkelbach, Gandalf Nicolas, Susan Fiske, Julie Terache, Antonin Carrier, and Vincent Yzerbyt. 2020. Groups’ warmth is a personal matter: Understanding consensus on stereotype dimensions reconciles adversarial models of social evaluation. Journal of Experimental Social Psychology 89 (2020). Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring Bias in Contextualized Word Representations. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing. 166–172. John P Lalor, Yi Yang, Kendall Smith, Nicole Forsgren, and Ahmed Abbasi. 2022. Benchmarking intersectional biases in NLP. In Proceedings of the 2022 conference of the North American chapter of
the association for computational linguistics: Human language technologies. 3598–3609. Shahar Levy, Koren Lazar, and Gabriel Stanovsky. 2021. Collecting a Large-Scale Gender Bias Dataset for Coreference Resolution and Machine Translation. In Findings of the Association for Computational Linguistics: EMNLP 2021. 2470–2480. Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2020. Towards Debiasing Sentence Representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Walter Lippmann. 1922. Stereotypes. (1922). Davide Locatelli, Greta Damo, and Debora Nozza. 2023. A Cross-Lingual Study of Homotransphobia on Twitter. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP). Association for Computational Linguistics, Dubrovnik, Croatia, 16–24. https://doi.org/10.18653/v1/2023.c3nlp-1.3 Julia Mendelsohn, Yulia Tsvetkov, and Dan Jurafsky. 2020. A framework for the computational linguistic analysis of dehumanization. Frontiers in artificial intelligence 3 (2020), 55. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 5356–5371. Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 1953–1967. Aurélie Névéol, Yoann Dupont, Julien Bezançon, and Karën Fort. 2022.
French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , Smaranda Muresan, Preslav Nakov, and Aline Villavicencio (Eds.). Association for Computational Linguistics, Dublin, Ireland, 8521–8531. https://doi.org/10.18653/v1/2022.acl-long.583 Gandalf Nicolas, Xuechunzi Bai, and Susan T. Fiske. 2022. A spontaneous stereotype content model: Taxonomy, properties, and prediction. Journal of Personality and Social Psychology 123, 6 (Dec. 2022), 1243–1263. https://doi.org/10.1037/pspa0000312 Debora Nozza, Federico Bianchi, and Dirk Hovy. 2021. HONEST: Measuring Hurtful Sentence Completion in Language Models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies . 2398–2406. Reynier Ortega-Bueno, Berta Chulvi, Francisco Rangel, Paolo Rosso, and Elisabetta Fersini. 2021. Profiling Irony and Stereotype Spreaders on Twitter (IROSTEREO). In Proceedings of the Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum . Orestis Papakyriakopoulos, Simon Hegelich, Juan Carlos Medina Serrano, and Fabienne Marco. 2020. Bias in word embeddings. In Proceedings of the 2020 conference on fairness, accountability, and transparency . 446–457. Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel Bowman. 2022. BBQ: A hand-built bias benchmark for question answering. In Findings of the Association for Computational Linguistics: ACL 2022 . 2086–2105. Jenny L Paterson, Rupert Brown, and Mark A Walters. 2019. The short and longer term impacts of hate crimes experienced directly, indirectly, and through the media. Personality and Social Psychology Bulletin 45, 7 (2019), 994–1010.
Shrimai Prabhumoye, Brendon Boldt, Ruslan Salakhutdinov, and Alan W Black. 2021. Case Study: Deontological Ethics in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Online, 3784–3798. https://doi.org/10.18653/v1/2021.naacl-main.297 Rajkumar Pujari, Erik Oveson, Priyanka Kulkarni, and Elnaz Nouri. 2022. Reinforcement Guided Multi-Task Learning Framework for Low-Resource Stereotype Detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Dublin, Ireland, 6703–6712. Organizers Of Queerinai, Anaelia Ovalle, Arjun Subramonian, Ashwin Singh, Claas Voelcker, Danica J Sutherland, Davide Locatelli, Eva Breznik, Filip Klubicka, Hang Yuan, et al. 2023. Queer in AI: A case study in community-led participatory AI. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 1882–1895. Leonardo Ranaldi, Elena Ruzzetti, Davide Venditti, Dario Onorati, and Fabio Zanzotto. 2024. A Trip Towards Fairness: Bias and De-Biasing in Large Language Models. In Proceedings of the 13th Joint Conference on Lexical and Computational Semantics (*SEM 2024). Association for Computational Linguistics, Mexico City, Mexico, 372–384. https://doi.org/10.18653/v1/2024.starsem-1.30 Rachel Rudinger, Chandler May, and Benjamin Van Durme. 2017. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing. 74–79. Javier Sánchez-Junquera, Berta Chulvi, Paolo Rosso, and Simone Paolo Ponzetto. 2021. How do you speak about immigrants? Taxonomy and StereoImmigrants dataset for identifying stereotypes about immigrants. Applied Sciences 11, 8 (2021), 3610.
Manuela Sanguinetti, Gloria Comandini, Elisa Di Nuovo, Simona Frenda, Marco Stranisci, Cristina Bosco, Tommaso Caselli, Viviana Patti, and Irene Russo. 2020. Haspeede2@EVALITA2020: Overview of the EVALITA 2020 hate speech detection task. In Proceedings of the Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020). CEUR Workshop Proceedings (CEUR-WS.org), Online event. Wolfgang Schmeisser-Nieto, Montserrat Nofre, and Mariona Taulé. 2022. Criteria for the Annotation of Implicit Stereotypes. In Proceedings of the Thirteenth Language Resources and Evaluation Conference. 753–762. Wolfgang S Schmeisser-Nieto, Alessandra Teresa Cignarella, Tom Bourgeade, Simona Frenda, Alejandro Ariza-Casabona, Mario Laurent, Paolo Giovanni Cicirelli, Andrea Marra, Giuseppe Corbelli, Farah Benamara, et al. 2024a. Stereohoax: a multilingual corpus of racial hoaxes and social media reactions annotated for stereotypes. Language Resources and Evaluation (2024), 1–39. Wolfgang S Schmeisser-Nieto, Pol Pastells, Simona Frenda, Alejandro Ariza-Casabona, Mireia Farrús, Paolo Rosso, and Mariona Taulé. 2024b. Overview of DETESTS-Dis at IberLEF 2024: DETEction and classification of racial STereotypes in Spanish - Learning with Disagreement. 73 (2024). Mina Schütz, Jaqueline Boeck, Daria Liakhovets, Djordje Slijepčević, Armin Kirchknopf, Manuel Hecht, Johannes Bogensperger, Sven Schlarb, Alexander Schindler, and Matthias Zeppelzauer. 2021. Automatic sexism detection with multilingual transformer models. arXiv preprint arXiv:2106.04908 (2021). Omar Shaikh, Hongxin Zhang, William Held, Michael Bernstein, and Diyi Yang. 2023. On Second Thought, Let's Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.).
Association for Computational Linguistics, Toronto, Canada, 4454–4470. https://doi.org/10.18653/v1/2023.acl-long.244
Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, and Lea Frermann. 2021. Contrastive Learning for Fair Representations. https://doi.org/10.48550/arXiv.2109.10645 arXiv:2109.10645 [cs]. Karolina Stańczak and Isabelle Augenstein. 2021. A Survey on Gender Bias in Natural Language Processing. arXiv preprint (2021). Charles G. Stangor. 2000. Stereotypes and prejudice: essential readings. https://api.semanticscholar.org/CorpusID:141804082 Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, and Lea Frermann. 2021. Evaluating Debiasing Techniques for Intersectional Biases. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tau Yih (Eds.). Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, 2492–2498. https://doi.org/10.18653/v1/2021.emnlp-main.193 Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating Gender Bias in Natural Language Processing: Literature Review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 1630–1640. Valentino Marcel Tahamata and Philip Tseng. 2024. What Does the Implicit Association Test Really Measure? Insights from the Theoretical Debate. Psychologia (2024). https://api.semanticscholar.org/CorpusID:270557173 Yi Chern Tan and L Elisa Celis. 2019. Assessing social and intersectional biases in contextualized word representations. Advances in neural information processing systems 32 (2019). Eddie Ungless, Amy Rafferty, Hrichika Nag, and Björn Ross. 2022. A Robust Bias Mitigation Procedure Based on the Stereotype Content Model. In Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS).
Association for Computational Linguistics, Abu Dhabi, UAE, 207–217. Francielle Alves Vargas, Isabelle Carvalho, Ali Hürriyetoğlu, Thiago A. S. Pardo, and Fabrício Benevenuto. 2023. Socially Responsible Hate Speech Detection: Can Classifiers Reflect Social Stereotypes? In Recent Advances in Natural Language Processing. https://api.semanticscholar.org/CorpusID:265068377 Fan Zhou, Yuzhou Mao, Liu Yu, Yi Yang, and Ting Zhong. 2023. Causal-Debias: Unifying Debiasing in Pretrained Language Models and Fine-tuning via Causal Invariant Learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Toronto, Canada, 4227–4241. https://doi.org/10.18653/v1/2023.acl-long.232 Ran Zmigrod, Sabrina J Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 1651–1661.

A APPENDIX

Implementation details of BERTopic used for the topic analysis in Section 2.1.3.
from bertopic import BERTopic
from sklearn.feature_extraction.text import CountVectorizer

# BERTopic configuration used for the topic analysis: bigram-aware topic
# representations, a minimum topic size of 10 abstracts, and sentence
# embeddings from the all-MiniLM-L6-v2 model.
topic_model = BERTopic(
    language="english",
    calculate_probabilities=True,
    verbose=True,
    n_gram_range=(1, 2),
    min_topic_size=10,
    embedding_model="all-MiniLM-L6-v2",
)

# Fit on the cleaned abstracts and obtain per-document topic assignments
# and topic probabilities.
topics, probs = topic_model.fit_transform(all_abstracts_cleaned)

B APPENDIX

Categories used in [Ortega-Bueno et al. 2021]: national majority groups, illness/health groups, age and role family groups, victims, political groups, ethnic/racial minorities, immigration/national minorities, professional and class groups, sexual orientation groups, women, physical appearance groups, religious groups, style of life groups, non-normative behaviour groups, man/male groups, minorities expressed in generic terms and white people. Categories used in [Nangia et al
arXiv:2505.17643v1 [cs.CL] 23 May 2025

Bridging Electronic Health Records and Clinical Texts: Contrastive Learning for Enhanced Clinical Tasks

Sara Ketabi, Vector Institute, University of Toronto, The Hospital for Sick Children
Dhanesh Ramachandram, Vector Institute

Abstract

Conventional machine learning models, particularly tree-based approaches, have demonstrated promising performance across various clinical prediction tasks using electronic health record (EHR) data. Despite their strengths, these models struggle with tasks that require deeper contextual understanding, such as predicting 30-day hospital readmission. This can be attributed primarily to the limited semantic information available in structured EHR data. To address this limitation, we propose a deep multimodal contrastive learning (CL) framework that aligns the latent representations of structured EHR data with unstructured discharge summary notes. It works by pulling together paired EHR and text embeddings while pushing apart unpaired ones. Fine-tuning the pretrained EHR encoder extracted from this framework significantly boosts downstream task performance, e.g., a 4.1% AUROC enhancement over XGBoost for 30-day readmission prediction. Such results demonstrate the effect of integrating domain knowledge from clinical notes into EHR-based pipelines, enabling more accurate and context-aware clinical decision support systems.

1 Introduction

The electronic health record (EHR) is one of the most common data modalities in healthcare, comprising various patient-related information including demographic records, lab results, and medication history Johnson et al. (2023). Conventional machine learning (ML) algorithms, e.g., XGBoost, have achieved significant performance in extracting useful patterns from this data and using them to predict various clinical tasks Li et al. (2024). These models have even outperformed deep learning (DL) approaches when applied to EHR data, making them the preferred models in this domain.
However, several challenging tasks, such as hospital readmission prediction or mortality forecasting, depend on complex and context-rich factors unavailable in EHR data. In such cases, models trained solely on tabular EHR features usually struggle to capture the nuanced clinical reasoning and latent patterns that human experts rely on, leading to suboptimal performance. To overcome this challenge and improve the predictive performance of ML models on such complex tasks, additional data modalities representing domain knowledge can be integrated with EHR. One option is clinical text, which is written by clinicians and can be an invaluable source of their knowledge. In contrast to non-DL models, which are mostly applicable to unimodal tasks, DL architectures can learn joint feature representations of multiple modalities, such as EHR and these texts in our case, to enhance the representation of single modalities.

Contrastive learning (CL) Chen et al. (2020), a subtype of self-supervised learning, can learn the alignment between multiple input modalities by minimizing the distance between paired samples while maximizing that between unpaired ones. Through this process, the model learns to generate meaningful representations, where semantically related elements, such as an EHR and its corresponding text, are embedded close to each other. This alignment enhances the model's ability to perform downstream tasks such as classification and multimodal reasoning. Prior works have proposed multimodal CL frameworks in the medical domain, mostly on medical images and radiology reports Huang et al. (2021); Wang et al. (2022);
https://arxiv.org/abs/2505.17643v1
Ji et al. (2021), to improve the performance of downstream clinical tasks. While these approaches have demonstrated significant potential in image-text settings, there has been relatively limited exploration of CL applied to other clinical modalities. In particular, the integration of EHR with unstructured textual data, such as discharge summaries, remains less explored. While some recent studies have investigated CL using EHR data and clinical text Ma et al. (2024b); King et al. (2023), they mainly treat EHR as temporal sequences (e.g., visit-based or longitudinal records) paired with textual summaries. Such approaches can capture dynamic patient trends over time. In contrast, our work utilizes static EHR data along with corresponding discharge summaries. This distinction is important in cases where time-series elements are unavailable or unnecessary. By working with static EHR and clinical notes, our framework increases the applicability of multimodal CL in clinical settings and serves as a foundation model for downstream tasks operating on more readily available forms of EHR data.

To that end, we propose a multimodal CL framework trained on static EHR data and unstructured discharge summary notes to enrich the semantic content of EHR representations. By aligning the representations of these two modalities in a shared embedding space, our framework can capture clinically meaningful patterns that may not be apparent from structured EHR alone. Subsequently, we fine-tune the learned representations on two downstream classification tasks, on which conventional models such as XGBoost achieve low predictive performance, to measure the effectiveness of our framework in enhancing these tasks.

2 Literature Review

2.1 Deep Learning Architectures for Tabular Data Processing

Classical ML models, such as XGBoost and Random Forest, have achieved promising results on various tabular data classification problems, particularly in the clinical domain.
These problems include, but are not limited to, kidney injury risk forecasting Li et al. (2025), infection diagnosis Horng et al. (2017), and delirium prediction Ma et al. (2024a). Efficiency and ease of implementation have made these models a popular and reliable choice for practitioners working with structured tabular data. Nevertheless, to address the limitations of these models, e.g., their inability to learn hierarchical feature representations or handle multimodal inputs, different DL architectures have been proposed to process tabular data more effectively. TabNet Arik and Pfister (2021) is a DL algorithm consisting of an attentive feature selection mechanism, which can determine the importance of individual features and select the most informative ones at each decision step. Neural Oblivious Decision Ensembles (NODE) Popov et al. (2019) combines tree-based models with neural networks by applying backpropagation to a differentiable ensemble of decision trees. TabTransformer Huang et al. (2020) applies a sequence of attention-based transformer layers to convert original feature embeddings into useful contextual ones. Despite these advances, consistent improvement of DL models over classical ML has not been widely demonstrated in tabular data encoding. Moreover, several complex clinical tasks, e.g., treatment response or long-term outcome prediction, are challenging to tackle using tabular data alone. A potential method for addressing such challenges is extracting semantic context from other data modalities, such as unstructured clinical notes, by leveraging the multimodal
learning capabilities of DL frameworks. Such frameworks can integrate EHR data with free-text input, enabling more semantically rich EHR representations.

2.2 Multimodal Contrastive Learning Frameworks

Multimodal CL frameworks have achieved remarkable success in integrating various modalities of input data and learning semantically aligned representations. In the healthcare domain, these approaches have been widely applied to medical images, e.g., chest X-rays, and corresponding radiology reports, aligning the representations of paired images and texts Huang et al. (2021); Wang et al. (2022); Ji et al. (2021). Regarding other modalities, CL has been explored to learn the alignment between tabular data, e.g., morphometric features, and cardiac magnetic resonance (MR) images Hager et al. (2023). However, very few works have proposed CL frameworks for aligning EHR data and clinical texts Ma et al. (2024b); King et al. (2023). Such methods typically assume that EHR data is structured as time series, limiting their applicability in settings where the temporal dynamics of patient records, e.g., event timestamps and longitudinal measurements, are unavailable, unreliable, or inconsistently gathered across institutions. To the best of our knowledge, this is the first study to propose a CL framework for aligning the representations of static-form EHR data, which is more readily available than time series, and clinical texts. This eliminates the dependency on time-series representations and broadens the applicability of the framework to situations where EHR data is only available in static format, e.g., hospital admission or discharge summaries. Consequently, the framework can be applied to improve the performance of a wide range of downstream tasks involving the static format of EHR data, across diverse clinical environments.
3 Methodology

Our approach in this study can be divided into two main stages: 1) EHR-text contrastive learning (Subsection 3.1), where we pretrained a CL framework to align the representations of static-form EHR data and associated discharge summary notes using a contrastive objective, and 2) downstream classification fine-tuning (Subsection 3.2), where we adapted the pretrained EHR representations to two classification tasks using labeled data.

3.1 EHR-Text Contrastive Learning

Our proposed CL framework is designed to align static-form EHR data with unstructured discharge summary notes, leveraging the inherent cross-modal correspondence between these modalities. We trained this framework on EHR-report pairs from the same patient admission, with the objective of minimizing the distance between the embeddings of matched pairs and maximizing the distance between mismatched ones. Consequently, the model learns to embed semantically related EHR and text representations closely in the shared latent space, thereby enriching EHR representations with the contextual and semantic information available in clinical texts. This framework is visualized in Figure 1.

To encode the structured EHR inputs $X_{ehr} \in \mathbb{R}^d$, where $d$ is the number of features, and obtain EHR representations, we employed the encoder portion of TabNet. As discussed in Section 2, TabNet utilizes a sequential attention mechanism and sparse feature selection to extract complex interactions within the tabular data. Let $e$ be the EHR representation and $T$ an instance of TabNet:

$$e = T(X_{ehr}) \in \mathbb{R}^{128} \quad (1)$$

Following the pretraining approach proposed in the original TabNet paper Arik and Pfister (2021), we first pretrained the EHR encoder using
a self-supervised masked feature reconstruction task. This pretraining stage enables the model to capture internal dependencies and useful patterns within the EHR modality. To enhance training stability and efficiency, we froze the initial embedding and feature-splitting layers of TabNet. These layers are believed to capture general low-level patterns that remain stable across a wide range of tasks. Consequently, the model could focus on adapting the deeper layers, which tend to be more abstract and task-specific, significantly reducing the number of trainable parameters.

For the discharge summary notes, we used Longformer Beltagy et al. (2020), a state-of-the-art transformer capable of processing long input sequences (up to 4096 tokens), as the text encoder, since this capacity was crucial for capturing context in long discharge documents. We initialized this transformer with "Clinical-Longformer" Li et al. (2022) weights, a version of Longformer pretrained on MIMIC-III clinical notes. This enables the model to learn specific representations of clinical terms, aiding the training of the main CL framework. Similar to the EHR encoder, we froze the initial layers of the Longformer model, i.e., the first 10 layers, and fine-tuned the final two layers to reduce computational complexity. As the discharge notes were typically longer than the maximum sequence length Longformer can process (4096 tokens), we divided each note into chunks of 256 tokens, and each chunk was passed independently through the model. Therefore, given $L$, an instance of Longformer, and a discharge note $X_n = [c_1, c_2, ..., c_l]$, where $\{c_i\}_{i=1}^{l}$ are the separate chunks and $l$ is the total number of chunks extracted from the text:

$$t = \frac{1}{l} \sum_{i=1}^{l} L(c_i)[0] \in \mathbb{R}^{768} \quad (2)$$

The representation of each text sequence is obtained by mean pooling over the [CLS] token embeddings of the chunk representations extracted from the last Longformer layer, where the [CLS] embedding appears as the first element.
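The chunk-and-pool computation of Equation (2) can be sketched as follows. This is a minimal illustration, not the authors' code: the encoder is a random stub standing in for the Clinical-Longformer encoder, and the function names are assumptions made for the example.

```python
import numpy as np

CHUNK_LEN = 256   # tokens per chunk, as in the paper
HIDDEN = 768      # Longformer hidden size

def chunk_tokens(token_ids, chunk_len=CHUNK_LEN):
    """Split a long token sequence into fixed-size chunks c_1..c_l."""
    return [token_ids[i:i + chunk_len] for i in range(0, len(token_ids), chunk_len)]

def encode_note(token_ids, encoder):
    """Equation (2): encode each chunk independently, take the [CLS]
    (first-position) embedding per chunk, and mean-pool the l chunk
    embeddings into one R^768 note representation t."""
    chunks = chunk_tokens(token_ids)
    cls_embeddings = [encoder(c)[0] for c in chunks]   # [0] = [CLS] position
    return np.mean(cls_embeddings, axis=0)

# Stub encoder: returns one random vector per input token; a real
# Clinical-Longformer would return contextual embeddings instead.
rng = np.random.default_rng(1)
stub_encoder = lambda chunk: rng.normal(size=(len(chunk), HIDDEN))

note = list(range(1000))          # a 1000-token note -> 4 chunks
t = encode_note(note, stub_encoder)
```

The pooled vector `t` has shape (768,) regardless of note length, which is what lets notes of any length share one projection head.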
This embedding can be considered an aggregate representation of the whole sequence. To align the shapes of the EHR and text representations and map them onto a shared embedding space, we added a multi-layer perceptron (MLP) projection head to the encoders, as in Chen et al. (2020). This module consists of linear layers with sizes (768, 128) and (128, 128), applied to the text and EHR representations, respectively.

The framework was trained using a CLIP loss Radford et al. (2021). Given a batch of $N$ paired EHR-text samples $\{(e_i, t_i)\}_{i=1}^{N}$, let $e_i$ and $t_i$ be the normalized representations of the structured EHR and unstructured discharge summary notes, respectively. The cosine similarity between each EHR-text representation pair is:

$$s_{ij} = \mathrm{sim}(e_i, t_j) = \frac{e_i^\top t_j}{\|e_i\| \|t_j\|}$$

The EHR-to-text loss is defined as:

$$\mathcal{L}_{ehr \to text} = \frac{1}{N} \sum_{i=1}^{N} -\log \left( \frac{\exp(s_{ii}/\tau)}{\sum_{j=1}^{N} \exp(s_{ij}/\tau)} \right)$$

Similarly, the text-to-EHR loss is:

$$\mathcal{L}_{text \to ehr} = \frac{1}{N} \sum_{i=1}^{N} -\log \left( \frac{\exp(s_{ii}/\tau)}{\sum_{j=1}^{N} \exp(s_{ji}/\tau)} \right)$$

Here $\tau$ is the temperature parameter Radford et al. (2021), which controls the sharpness of the similarity distribution. The final CLIP loss is the sum of the two directions:

$$\mathcal{L}_{CLIP} = \mathcal{L}_{ehr \to text} + \mathcal{L}_{text \to ehr}$$

3.2 Downstream Tasks

Following the pretraining of our EHR-text CL framework, we evaluated the effectiveness of the learned EHR representations by fine-tuning them on two clinically significant downstream classification tasks. Notably, only the EHR encoder was used during
fine-tuning, assessing the impact of transferring cross-modal semantic knowledge from clinical notes into structured EHR representations. The architecture used for fine-tuning on the downstream tasks is shown in Figure 2.

Figure 1: The proposed EHR-text CL framework. EHR data is encoded with TabNet initialized with masked pretraining and then forwarded to a linear layer to get its representation. Each discharge summary is divided into chunks of 256 tokens, each encoded by a separate Longformer transformer. The mean of the [CLS] embeddings extracted from the last layer of all Longformer models is then passed through a linear layer to obtain the text representation. A contrastive (CLIP-style) loss is applied between the EHR and text representations to align the related pairs and train the framework.

The first downstream task is 30-day hospital readmission prediction, where the objective is to determine whether a patient is likely to be readmitted within 30 days of discharge, a key factor in patient health monitoring and hospital resource management. The second task is critical outcome prediction, which involves identifying patients at risk of life-threatening events such as hospital mortality. For both tasks, we employed the TabNet encoder architecture as our classifier. We initialized the model using the pretrained weights obtained from the CL stage. To preserve the general low-level features acquired during pretraining, we froze the embedding and initial feature-splitting layers and fine-tuned only the remaining layers, along with two additional linear layers, on the downstream tasks. The models were fine-tuned using the binary cross-entropy loss, as given in Equation 3:

$$\mathcal{L}_{FT} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right] \quad (3)$$

where $y_i \in \{0, 1\}$ is the ground-truth downstream label for the $i$-th datapoint, and $\hat{y}_i$ is the predicted probability for that datapoint.
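As a concrete illustration, the two training objectives described above, the symmetric CLIP loss of Section 3.1 and the fine-tuning binary cross-entropy of Equation 3, can be sketched in NumPy. This is a minimal sketch over plain arrays, not the authors' implementation; batch sizes and embeddings below are toy values.

```python
import numpy as np

def logsumexp(x, axis):
    """Numerically stable log-sum-exp along one axis."""
    m = np.max(x, axis=axis, keepdims=True)
    return m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))

def clip_loss(E, T, tau=0.1):
    """Symmetric CLIP objective over N paired (EHR, text) embeddings:
    s_ij is cosine similarity; InfoNCE is applied in both directions."""
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    S = (E @ T.T) / tau                                          # N x N scaled s_ij
    l_ehr_to_text = -np.mean(np.diag(S - logsumexp(S, axis=1)))  # softmax over rows
    l_text_to_ehr = -np.mean(np.diag(S - logsumexp(S, axis=0)))  # softmax over cols (s_ji)
    return l_ehr_to_text + l_text_to_ehr

def bce_loss(y, y_hat, eps=1e-12):
    """Equation (3): binary cross-entropy with labels y in {0,1} and
    predicted probabilities y_hat."""
    y = np.asarray(y, dtype=float)
    y_hat = np.clip(np.asarray(y_hat, dtype=float), eps, 1 - eps)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

rng = np.random.default_rng(0)
E = rng.normal(size=(4, 128))
aligned = clip_loss(E, E)         # matched pairs: low loss
shuffled = clip_loss(E, E[::-1])  # mismatched pairs: higher loss
```

Matched pairs sit on the diagonal of the similarity matrix, so shuffling the text embeddings against the EHR embeddings raises the loss, which is exactly the signal the contrastive pretraining optimizes.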
4 Experiments and Results

4.1 Dataset

For training the proposed CL framework, we utilized MIMIC-IV Johnson et al. (2023), an open-source dataset accessed under the associated data use agreement with online consent. This dataset contains EHR information on admissions to Beth Israel Deaconess Medical Center between 2008 and 2019. It consists of structured data, such as demographic records, medication orders, and lab results, and unstructured clinical texts, e.g., radiology reports and discharge summary notes. Since our CL framework requires paired EHR data and discharge summary notes, we excluded all cases for which these notes were not available, resulting in a total of 376,021 EHR-text pairs used for training.

To prepare the EHR data for training, we first removed redundant or non-informative columns, such as patient and admission IDs, which do not contribute meaningful clinical information. We also excluded features containing missing values and any columns directly related to the downstream classification targets, e.g., readmission outcome, to avoid data leakage. This resulted in a total of 105 features (columns) used for training our CL framework. We treated all features with categorical values, as well as numerical variables with fewer than three unique values, as categorical; the remaining features, covering a wider numeric range, were treated as numerical. Prior to being processed by the model, categorical features were ordinally encoded. The TabNet model then employs an embedding layer to
transform these encoded values into dense vectorized representations, enabling the model to capture semantic relationships between feature categories. For the text data, we standardized all discharge summaries by converting them to lowercase. We also removed all dates, numbers, extra spaces, and punctuation marks. Furthermore, several sections within the discharge summaries that do not contain important clinical information, such as Technique, Discharge Instructions, and administrative segments, were excluded.

To fine-tune the CL framework on the downstream classification tasks, we employed a held-out portion of the dataset that was excluded during the CL pretraining stage, i.e., datapoints without corresponding discharge summary notes. This design ensures no data leakage from the pretraining. We randomly sampled five non-overlapping subsets from this data to robustly evaluate model performance. Each subset consisted of 6,000 datapoints for model training and validation, and 2,496 datapoints were utilized for final testing.

4.2 Implementation Details

For training the proposed CL framework, we applied Distributed Data Parallel (DDP) computing to accelerate the training process. The experiments were conducted on four NVIDIA A40 GPUs.

Figure 2: The downstream classification architecture. The TabNet encoder is initialized with the proposed CL weights. The initial embedding and feature-splitting layers are kept frozen, and the remaining layers, along with two fully-connected layers, are fine-tuned for each downstream task.

The model was trained using the AdamW optimizer with a learning rate of 1e-4 and a weight decay of 1e-4 to regularize the model and reduce overfitting. The batch size and number of epochs were set to 64 and 13, respectively. The temperature in the CLIP loss was set to 0.1. For the downstream classification fine-tuning, DDP was employed across four NVIDIA RTX6000 GPUs.
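The feature-typing rule from Section 4.1 (a column is categorical if it holds non-numeric values or has fewer than three unique values, and categorical columns are ordinally encoded before TabNet's embedding layer) can be sketched in plain Python. The column names and helper functions below are illustrative assumptions, not taken from the paper's code.

```python
def split_feature_types(columns):
    """Apply the paper's rule: a column is categorical if it contains
    non-numeric values or has fewer than three unique values; otherwise
    it is numerical. `columns` maps column name -> list of values
    (a stand-in for a real DataFrame)."""
    categorical, numerical = [], []
    for name, values in columns.items():
        non_numeric = any(not isinstance(v, (int, float)) for v in values)
        if non_numeric or len(set(values)) < 3:
            categorical.append(name)
        else:
            numerical.append(name)
    return categorical, numerical

def ordinal_encode(values):
    """Ordinally encode a categorical column before the embedding layer."""
    mapping = {v: i for i, v in enumerate(sorted(set(values), key=str))}
    return [mapping[v] for v in values]

cols = {
    "gender": ["M", "F", "F", "M"],    # non-numeric -> categorical
    "icu_flag": [0, 1, 1, 0],          # only 2 unique values -> categorical
    "heart_rate": [72, 88, 95, 60],    # wide numeric range -> numerical
}
cat, num = split_feature_types(cols)
codes = ordinal_encode(cols["gender"])
```

The ordinal codes are only intermediate indices; TabNet's embedding layer maps them to dense vectors, so the arbitrary ordering imposed here carries no semantic meaning.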
AdamW with an increased learning rate of 5e-4 and a weight decay of 1e-4 was used as the optimizer. The batch size remained 64, and the models were fine-tuned for 15 epochs. The code used for the experiments and the pretrained framework's weights are provided in the supplementary material.

4.3 Evaluation Setting

We evaluated the effectiveness of the proposed EHR-text CL framework by comparing a TabNet model initialized with the pretrained CL weights and fine-tuned on the aforementioned downstream classification tasks against two baseline models:

• XGBoost
• A TabNet model with the same architecture but initialized with only TabNet masked pretraining weights, without the proposed CL pretraining

We used the Area Under the Receiver Operating Characteristic Curve (AUC) as our primary performance metric, as it is well suited to imbalanced classification tasks and reflects the trade-off between sensitivity and specificity. We computed the mean and standard deviation across the five random seeds to compare the models and evaluate their generalization. Moreover, we aimed to assess the impact of the proposed framework on the amount of training data required for effective fine-tuning. This was achieved by reducing the number of training datapoints by 50% and re-evaluating the performance of all three models on the reduced dataset.

4.4 Results

Tables 1 and 2 report the mean test AUC achieved by the proposed CL-based fine-tuned model and the baselines for 30-day readmission and critical outcome prediction,
respectively. To assess the statistical significance of performance differences, we conducted two-tailed t-tests on the AUC scores obtained across the five seeds. The detailed results of these tests are presented in the supplementary material.

4.5 Hospital Readmission Prediction

Table 1: Mean test AUC for 30-day hospital readmission prediction (when 100% and 50% of the training data are used for model fine-tuning)

Model                                        | 100% Training Data | 50% Training Data
XGBoost                                      | 0.777 (±0.012)     | 0.756 (±0.018)
TabNet (initialized with masked pretraining) | 0.759 (±0.025)     | 0.741 (±0.016)
TabNet (initialized with proposed CL)        | 0.809 (±0.016)     | 0.788 (±0.008)

As can be observed in Table 1, the proposed model, achieving a mean test AUC of 0.809 on the whole dataset, consistently outperforms both XGBoost and the masked-pretrained TabNet, by 4.12% (p-value = 0.0082, statistically significant) and 6.59% (p-value = 0.0074, statistically significant), respectively. Moreover, when only 50% of the training data, i.e., 3,000 datapoints, is used for model fine-tuning, the model still demonstrates strong discriminative capability, with an AUC of 0.788, which is higher than the performance of both baselines even when they are trained on the whole training set.

4.6 Critical Outcome Prediction

Table 2: Mean test AUC for critical outcome prediction (when 100% and 50% of the training data are used for model fine-tuning)

Model                                        | 100% Training Data | 50% Training Data
XGBoost                                      | 0.725 (±0.017)     | 0.725 (±0.048)
TabNet (initialized with masked pretraining) | 0.745 (±0.019)     | 0.693 (±0.014)
TabNet (initialized with proposed CL)        | 0.821 (±0.022)     | 0.782 (±0.027)

As indicated in Table 2, the performance gap is even larger for critical outcome prediction, where the CL-based fine-tuned model surpasses XGBoost and the masked-pretrained model by 13.25% (p-value = 0.0001, statistically significant) and 10.2% (p-value = 0.0004, statistically significant), respectively, on the whole training data.
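The evaluation metric and the reported relative gains can be reproduced with a short sketch: AUROC computed as the Mann-Whitney rank statistic, and the percent improvements of Table 1 recovered from the reported means. This is an illustrative sketch, not the authors' evaluation code, and the toy label/score arrays are made up for the example.

```python
def auc_score(y_true, y_score):
    """AUROC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive is ranked above a randomly chosen negative (ties count 0.5)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def relative_improvement(new_auc, base_auc):
    """Percent improvement over a baseline, as reported in Tables 1 and 2."""
    return 100.0 * (new_auc - base_auc) / base_auc

# Table 1, 100% training data: CL-pretrained TabNet vs the two baselines
gain_vs_xgboost = relative_improvement(0.809, 0.777)  # about 4.12%
gain_vs_masked = relative_improvement(0.809, 0.759)   # about 6.59%

perfect = auc_score([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])  # perfect ranking -> 1.0
```

Note that the reported gains are relative (a 0.032 absolute AUC difference over a 0.777 baseline yields 4.12%), not absolute AUC point differences.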
Moreover, this model still shows superior performance compared to both baselines when only 50% of the training data is utilized.

5 Discussion

In this study, we proposed a novel multimodal CL framework over EHR data and discharge summary notes, to align the representations of the two modalities and learn rich, task-agnostic EHR representations. By jointly pretraining these modalities using a CLIP loss, our approach enables the EHR encoder to benefit from the clinical details and semantic information available in discharge summaries. Fine-tuning this framework on two important downstream clinical prediction tasks, namely 30-day hospital readmission and critical outcome prediction, demonstrates that CL remarkably improves classification performance over conventional baselines.

Our results indicate that the TabNet model initialized with the proposed CL-pretrained weights consistently outperforms both a TabNet pretrained with only masked-modeling weights and a powerful non-DL baseline, i.e., XGBoost, across five random data-split seeds. The evaluation was based on the average test AUC, a widely used metric in clinical settings. These findings suggest that aligning static-form EHR data with semantically rich discharge summaries during pretraining leads to a more meaningful initialization for tabular-based downstream predictive tasks, effectively guiding the model toward clinically relevant EHR representations. TabNet's use of sequential decision steps and attentive feature selection proves to be highly compatible with contrastive pretraining. The model not only learns generalizable feature representations but
is also able to incorporate new semantic information efficiently during fine-tuning. Freezing the embedding and initial splitting layers during downstream fine-tuning helped retain the generalized feature patterns learned through pretraining, while allowing the deeper layers to adjust to the target task. The framework's strong performance on readmission and critical outcome prediction, two tasks of high clinical importance, without relying on text during fine-tuning or inference, demonstrates the practical applicability of multimodal contrastive pretraining in real-world hospital settings. Moreover, our model architecture is modular and modality-agnostic; it can therefore be adapted to other types of structured data, e.g., imaging metadata, or unstructured clinical texts, e.g., radiology reports, showing its generalizability. Furthermore, under the low-resource regime, when the framework was fine-tuned on only half of the training data, it outperformed both baselines trained on the whole data across both downstream tasks, suggesting that it learns more generalizable features regardless of data size. This finding supports the hypothesis that CL-based pretraining acts as a strong regularizer and can be particularly advantageous when labeled data is limited.

Despite the promising performance achieved by the proposed framework, this work has several limitations. First, the reliance on discharge summaries during pretraining restricts the model to cases where high-quality clinical notes are available. This may introduce selection bias, as patients without notes may systematically differ in clinically relevant ways. Second, although TabNet supports interpretability, our current analysis did not explore how contrastive pretraining affects the learned feature importance. Moreover, our evaluation focuses on two binary classification tasks.
We should also investigate whether the proposed framework generalizes well to other settings, such as multi-label classification or generative modeling, e.g., report generation from structured EHR data.

Several directions could extend this study in future work. First, we plan to evaluate the framework on another clinical dataset to assess its cross-domain generalizability. Second, incorporating additional modalities, such as imaging (e.g., chest X-rays), into the multimodal framework could lead to even richer data representations. Furthermore, alternative contrastive objectives, such as triplet loss with hard negative mining or supervised CL, could be explored to determine whether the choice of loss has a significant impact on the results. Moreover, we will explore TabNet interpretability and attention-weight analysis to identify whether the pretrained model attends to different clinical variables than a randomly initialized one. Finally, we will use generative language models to generate discharge summary notes from EHR data, an important step towards improving the explainability of tabular-based models.

6 Conclusion

In this work, we introduced a deep multimodal CL framework that leverages paired static-form EHR data and discharge summary notes to learn semantically rich EHR representations. Our results indicate that fine-tuning this framework on two high-impact clinical downstream tasks, i.e., 30-day hospital readmission and critical outcome prediction, outperforms both a non-CL-pretrained model and conventional ML baselines in terms of test AUC. Moreover, our downstream model works completely independently of clinical texts, providing a foundation for a wide range of tabular-based clinical analysis tasks. Deploying our model in real-world clinical settings can lead to more effective decision support systems, thereby enhancing
clinical workflow and patient outcomes.

References

Sercan Ö. Arık and Tomas Pfister. TabNet: Attentive interpretable tabular learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 6679–6687, 2021.

Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597–1607. PMLR, 2020.

Paul Hager, Martin J. Menten, and Daniel Rueckert. Best of both worlds: Multimodal contrastive learning with tabular and imaging data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 23924–23935, 2023.

Steven Horng, David A. Sontag, Yoni Halpern, Yacine Jernite, Nathan I. Shapiro, and Larry A. Nathanson. Creating an automated trigger for sepsis clinical decision support at emergency department triage using machine learning. PLoS ONE, 12(4):e0174708, 2017.

Shih-Cheng Huang, Liyue Shen, Matthew P. Lungren, and Serena Yeung. GLoRIA: A multimodal global-local representation learning framework for label-efficient medical image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3942–3951, 2021.

Xin Huang, Ashish Khetan, Milan Cvitkovic, and Zohar Karnin. TabTransformer: Tabular data modeling using contextual embeddings. arXiv preprint arXiv:2012.06678, 2020.

Zhanghexuan Ji, Mohammad Abuzar Shaikh, Dana Moukheiber, Sargur N. Srihari, Yifan Peng, and Mingchen Gao. Improving joint learning of chest X-ray and radiology report by word region alignment. In Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings 12, pages 110–119. Springer, 2021.
Alistair E. W. Johnson, Lucas Bulgarelli, Lu Shen, Alvin Gayles, Ayad Shammout, Steven Horng, Tom J. Pollard, Sicheng Hao, Benjamin Moody, Brian Gow, et al. MIMIC-IV, a freely accessible electronic health record dataset. Scientific Data, 10(1):1, 2023.

Ryan King, Tianbao Yang, and Bobak J. Mortazavi. Multimodal pretraining of medical time series and notes. In Machine Learning for Health (ML4H), pages 244–255. PMLR, 2023.

Fuyuan Li, Zhanjin Wang, Ruiling Bian, Zhangtuo Xue, Junjie Cai, Ying Zhou, and Zhan Wang. Predicting the risk of acute kidney injury in patients with acute pancreatitis complicated by sepsis using a stacked ensemble machine learning model: a retrospective study based on the MIMIC database. BMJ Open, 15(2):e087427, 2025.

Yikuan Li, Ramsey M. Wehbe, Faraz S. Ahmad, Hanyin Wang, and Yuan Luo. Clinical-Longformer and Clinical-BigBird: Transformers for long clinical sequences. arXiv preprint arXiv:2201.11838, 2022.

Yun Li, Yuan Cao, Min Wang, Lu Wang, Yiqi Wu, Yuan Fang, Yan Zhao, Yong Fan, Xiaoli Liu, Hong Liang, et al. Development and validation of machine learning models to predict MDRO colonization or infection on ICU admission by using electronic health record data. Antimicrobial Resistance & Infection Control, 13(1):74, 2024.

Rui Ma, Jin Zhao, Ziying Wen, Yunlong Qin, Zixian Yu, Jinguo Yuan, Yumeng Zhang, Anjing Wang, Cui Li, Huan Li, et al. Machine learning for the prediction of delirium in elderly intensive care unit patients. European Geriatric Medicine, 15(5):1393–1403, 2024a.

Yingbo Ma, Suraj Kolla, Zhenhong Hu, Dhruv Kaliraman, Victoria Nolan, Ziyuan Guan, Yuanfang
Ren, Brooke Armfield, Tezcan Ozrazgat-Baslanti, Jeremy A. Balch, et al. Global contrastive training for multimodal electronic health records with language supervision. arXiv preprint arXiv:2404.06723, 2024b.

Sergei Popov, Stanislav Morozov, and Artem Babenko. Neural oblivious decision ensembles for deep learning on tabular data. arXiv preprint arXiv:1909.06312, 2019.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.

Fuying Wang, Yuyin Zhou, Shujun Wang, Varut Vardhanabhuti, and Lequan Yu. Multi-granularity cross-modal alignment for generalized medical visual representation learning. Advances in Neural Information Processing Systems, 35:33536–33549, 2022.
arXiv:2505.17645v1 [cs.CV] 23 May 2025

HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning

Chuhao Zhou1, Jianfei Yang1∗
1MARS Lab, Nanyang Technological University
{chuhao002@e, jianfei.yang}@ntu.edu.sg
Project Page: https://chuhaozhou99.github.io/HoloLLM

Abstract

Embodied agents operating in smart homes must understand human behavior through diverse sensory inputs and communicate via natural language. While Vision-Language Models (VLMs) have enabled impressive language-grounded perception, their reliance on visual data limits robustness in real-world scenarios with occlusions, poor lighting, or privacy constraints. In this paper, we introduce HoloLLM, a Multimodal Large Language Model (MLLM) that integrates uncommon but powerful sensing modalities, such as LiDAR, infrared, mmWave radar, and WiFi, to enable seamless human perception and reasoning across heterogeneous environments. We address two key challenges: (1) the scarcity of aligned modality-text data for rare sensors, and (2) the heterogeneity of their physical signal representations. To overcome these, we design a Universal Modality-Injection Projector (UMIP) that enhances pre-aligned modality embeddings with fine-grained, text-aligned features from tailored encoders via coarse-to-fine cross-attention, without introducing significant alignment overhead. We further introduce a human-VLM collaborative data curation pipeline to generate paired textual annotations for sensing datasets. Extensive experiments on two newly constructed benchmarks show that HoloLLM significantly outperforms existing MLLMs, improving language-grounded human sensing accuracy by up to 30%. This work establishes a new foundation for real-world, language-informed multisensory embodied intelligence.

1 Introduction

Embodied agents in smart homes, e.g., household robots and intelligent appliances, have garnered increasing attention in recent years [1–4].
To interact effectively with humans and execute real-world tasks, agents must understand human behavior and be capable of engaging in natural language communication. This necessitates the development of models that seamlessly integrate rich human perception with advanced language understanding and generation capabilities. Vision-Language Models (VLMs) [5–8] have emerged as promising tools for enabling language-conditioned perception and reasoning. However, the visual modality alone struggles to operate in the real world, e.g., in low-light environments, under occlusions, and in privacy-sensitive scenarios. In contrast, humans naturally rely on multiple sensory modalities, such as vision, audition, and olfaction, to perceive and adapt to diverse environments. Inspired by this biological principle, embodied agents can benefit from incorporating alternative sensing modalities, including LiDAR, infrared cameras, mmWave radar, and WiFi signals, to achieve more robust and comprehensive perception. Each of these modalities brings distinct advantages: LiDAR enables high-precision 3D reconstruction [9], infrared cameras support perception in darkness [10], and mmWave radar and WiFi are resilient to visual occlusions and lighting variations [11]. As illustrated in Fig. 1, vision alone fails to detect a fallen individual behind an obstruction, while radar- and WiFi-based modalities remain effective.

∗Corresponding Author (jianfei.yang@ntu.edu.sg)
Preprint. Under review.

Figure 1: HoloLLM achieves seamless and language-grounded human perception and reasoning with complementary sensing modalities. It overcomes real-world challenges, e.g., illumination and privacy, with superior performance on human action recognition, question answering (QA), and captioning tasks, which enables embodied agents to work intelligently across diverse scenarios.
Hence, we believe Multimodal Large Language Models (MLLMs) that integrate diverse sensor inputs can provide excellent adaptability and reliability in complex, real-world environments. However, integrating sensing
modalities into an MLLM is non-trivial due to two key challenges. First, we must align sensing modalities with text using limited training data. For common modalities such as RGB or depth images, millions of web-sourced ‘modality-text’ data pairs are available [12–14], which enable large-scale pre-training of multimodal projectors to effectively align common modalities with text. In contrast, other sensing modalities (e.g., mmWave and WiFi signals) are not available online, with only a few thousand samples collected in labs [11, 10]. This data scarcity complicates the alignment between sensing modalities and text. Second, modality-text alignment requires a robust feature encoder, but it is challenging to learn the heterogeneous characteristics of these rare sensing modalities. To model the physical world, different sensors leverage distinct physical designs (e.g., wavelengths and frequencies) at multiple granularities, so their sensing data are extremely heterogeneous and differ significantly from common modalities, posing substantial challenges for existing transformer-based encoders to learn robust representations [10, 15]. These two challenges constitute the question investigated in this work: can we enable an MLLM to learn data-scarce and heterogeneous sensing modalities for language-grounded human perception and reasoning? To this end, our key idea comes from two aspects. Due to data scarcity, directly aligning sensing modalities with text through large-scale pre-training is infeasible. To address this, we propose to generate initial embeddings for each modality that are pre-aligned with text, without additional training. Then, only a small amount of data and light fine-tuning are sufficient to achieve appropriate alignment. Regarding the heterogeneity of sensing modalities, previous studies [16, 17] show that tailored encoders are more effective than standard transformers at extracting modality-specific features.
However, directly aligning raw modality-specific features with text from scratch demands massive training data and incurs additional alignment overhead. As a result, we take raw features as references and integrate text-aligned features into multimodal embeddings via an iterative process: querying and fusing text-aligned features into the multimodal embeddings through cross-attention, and projecting the embeddings into the LLM semantic space to form the enhanced queries for the next iteration. In this paper, we propose HoloLLM, an MLLM for seamless human perception and reasoning across common and sensing modalities. Specifically, we adopt a CLIP vision encoder aligned with the text modality to generate pre-aligned initial embeddings for each modality. Tailored encoders are then designed to fully explore fine-grained, modality-specific discriminative features. To avoid additional alignment overhead, we propose the Universal Modality-Injection Projector (UMIP) for progressive modality-feature integration. UMIP takes the initial embeddings as coarse queries to adaptively identify and integrate fine-grained, text-aligned modality features via coarse-to-fine cross-attention. Furthermore, we introduce a human-VLM collaborative data curation pipeline to generate textual annotations for existing multimodal human sensing datasets: MM-Fi [11] and XRF55 [10]. A comprehensive evaluation of state-of-the-art MLLMs is conducted, establishing the first benchmark for multimodal human perception and reasoning grounded in sensing modalities. In summary, our contributions are threefold:

•We propose HoloLLM, the first work to align an MLLM with rare sensing modalities to achieve seamless, language-grounded human perception and reasoning.

•We propose the Universal Modality-Injection Projector (UMIP) and modality-specific encoders to
address modality-text alignment under data scarcity and heterogeneous feature learning, respectively.

•To evaluate HoloLLM, we design a human-VLM collaborative data curation pipeline to construct the text-paired multimodal dataset. Then, the first multisensory benchmark for human sensing is established with different settings and baselines. HoloLLM shows superior performance, improving over existing MLLMs by around 30% on some QA tasks.

2 Related Work

Multimodal Human Sensing. Human sensing aims to perceive, analyze, and understand human actions, which is essential in human-agent interaction. Beginning with human sensing based on RGB and depth frames [18], various other sensing modalities, such as LiDAR [19], mmWave [20, 21], WiFi-CSI [17, 15], and RFID [10], have been progressively introduced to address limitations, including lighting variations, occlusions, and privacy concerns. To achieve more comprehensive human perception, methods that explore complementary information across modalities have become dominant [22, 23, 11, 16]. With the development of LLMs, many works seek to leverage the high-level semantics and strong generalization capability of language to perform zero-shot human sensing tasks [24], which have been extended to multimodal inputs for more comprehensive recognition [25]. However, existing models cannot reason and generate responses based on perceived information, limiting their capacity to engage in language-based interaction with humans.

Multimodal-Text Alignment. Extending LLMs to other modalities to form MLLMs enables them to follow human instructions based on multimodal inputs such as images [5, 8], videos [26], audio [27], and point clouds [28]. Aligning multimodal inputs with text is the key challenge for MLLMs. Existing methods primarily rely on building multimodal projectors. Typically, Linear / MLP projectors [5, 26] directly project the multimodal inputs into the LLM text space.
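To make the trade-off among projector families concrete, here is a minimal NumPy sketch (random weights and illustrative dimensions; a single-head, unnormalized attention stands in for a real Q-Former, so this is a shape-level illustration, not any released implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_vis, d_llm, n_queries = 1024, 4096, 32

def mlp_projector(Y, W):
    # Linear/MLP projector: one output token per input embedding,
    # so the token count grows with input resolution.
    return Y @ W

def qformer_projector(Y, queries, W):
    # Resampler/Q-Former-style projector: a fixed number of learnable
    # queries cross-attend into Y, so the token count stays constant.
    scores = queries @ Y.T / np.sqrt(queries.shape[-1])
    att = np.exp(scores - scores.max(axis=-1, keepdims=True))
    att /= att.sum(axis=-1, keepdims=True)
    return (att @ Y) @ W

Y_hires = rng.normal(size=(576, d_vis))  # e.g., 24x24 patch embeddings
W = rng.normal(size=(d_vis, d_llm)) / np.sqrt(d_vis)
queries = rng.normal(size=(n_queries, d_vis))

tokens_mlp = mlp_projector(Y_hires, W)              # 576 LLM tokens
tokens_qf = qformer_projector(Y_hires, queries, W)  # always 32 LLM tokens
```

The MLP projector emits as many tokens as there are input embeddings, while the query-based projector always emits `n_queries` tokens, which is exactly the scaling difference discussed next.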
Despite their simplicity, the number of multimodal tokens increases significantly when high-resolution inputs are presented. To alleviate this issue, Resampler / Q-Former projectors [8, 27] utilize a fixed number of learnable queries to align the most task-relevant multimodal information with the text. Recently, hybrid projectors have been developed [6, 7], which serve as trade-offs between MLP and Q-Former projectors, to fully preserve all information within multimodal inputs while minimizing the number of tokens generated. Nevertheless, these projectors commonly need large-scale ‘modality-text’ data pairs for pre-training, which are not available for sensing modalities.

3 Method

In this part, we first formulate our problem (Sec 3.1) and then delve into the details of UMIP (Sec 3.2), followed by an introduction of the two-stage training strategy for HoloLLM (Sec 3.3). Finally, we present the data curation pipeline for our multisensory benchmarking (Sec 3.4).

3.1 Problem Formulation

The objective of MLLMs is to obtain multimodal tokens $Z^m$ that are not only aligned with text tokens $Z^{text}$ but also preserve modality-specific features within the inputs $X^m$. This work aims to learn an MLLM with data-scarce and heterogeneous sensing modalities, which presents a greater challenge, as large-scale pre-training of multimodal projectors is infeasible. An MLLM typically consists of three components: (1) a multimodal encoder $E(\cdot)$ that converts the inputs $X^m$ from modality $m$ into a sequence of multimodal embeddings $Y^m$; (2) a projector $P(\cdot)$ to align $Y^m$ with the LLM text space, forming aligned multimodal tokens $Z^m = P(Y^m)$; and (3) an
LLM $LLM(\cdot,\cdot)$, which takes $Z^m$ and the text tokens $Z^{text}$ corresponding to human instructions as inputs, to output the responses $A = \{a_i\}_{i=1}^{S} = LLM(Z^m, Z^{text})$ in an auto-regressive manner: $p(A) = \prod_{i=1}^{S} p(a_i \mid Z^m, Z^{text}, A_{<i})$, where $S$ is the length of the responses generated by the LLM.

Figure 2: Architecture of HoloLLM. Given multimodal inputs $X^m$, HoloLLM utilizes modality-specific tokenizers and a universal encoder to extract pre-aligned initial embeddings $Y^m_{CLIP}$. Meanwhile, pre-trained tailored encoders are applied to explore modality features $Y^m_T$. The UMIP then transforms $Y^m_{CLIP}$ and $Y^m_T$ into coarse queries $Q^m$ and fine-grained keys and values $K^m$/$V^m$. By iteratively enhancing the queries via coarse-to-fine cross-attention and projecting them to the LLM text space, the aligned multimodal tokens $Z^m$, fully enriched by modality features, can be achieved.

3.2 Universal Modality-Injection Projector

As shown in Fig. 2, we propose an efficient projector, named the Universal Modality-Injection Projector (UMIP). To overcome the data scarcity, we attempt to generate initial embeddings pre-aligned with the text for each modality, without the need for extra training. Specifically, we take the CLIP vision encoder [29] as a unified multimodal encoder $E_{CLIP}(\cdot)$ to obtain initial embeddings $Y^m_{CLIP}$ for modality $m$:

$$Y^m_{CLIP} = E_{CLIP}(X^m) \in \mathbb{R}^{n_m \times d_m}, \quad (1)$$

where $n_m$ and $d_m$ denote the number and dimension of the embeddings. Benefiting from extensive image-text contrastive pre-training, the CLIP vision encoder achieves superior alignment with text and offers transferability to other modalities [30]. Consequently, $Y^m_{CLIP}$ can be considered inherently pre-aligned with text, and a small amount of data and fine-tuning are sufficient to align them with text. However, our experiments (Tab. 1 and Tab. 2) reveal that the initial embeddings $Y^m_{CLIP}$ lack sufficient discriminability.
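The auto-regressive factorization in Sec. 3.1 can be checked with a toy computation (the per-step conditional probabilities below are hypothetical numbers, not model outputs):

```python
import math

def sequence_log_prob(step_log_probs):
    # log p(A) = sum_i log p(a_i | Z^m, Z^text, A_<i): the sequence
    # log-probability is the sum of per-step conditional log-probabilities.
    return sum(step_log_probs)

# toy conditionals p(a_i | Z^m, Z^text, A_<i) for a 3-token response
step_probs = [0.9, 0.8, 0.95]
p_A = math.exp(sequence_log_prob([math.log(p) for p in step_probs]))
# p_A equals the product 0.9 * 0.8 * 0.95
```

Working in log space, as above, is the standard way to avoid underflow when $S$ is large.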
Previous studies [16, 17, 10] show that dedicated convolutional encoders outperform transformer counterparts in capturing heterogeneous spatial features from sensing modalities grounded in radio frequency, such as WiFi signals. To this end, we design a convolutional tailored encoder $E^m_T(\cdot)$ for each modality to capture the heterogeneous modality features $Y^m_T$:

$$Y^m_T = MLP^m(E^m_T(X^m)) \in \mathbb{R}^{h_m \times w_m \times d_m}, \quad (2)$$

where $MLP^m(\cdot)$ is an MLP projector that aligns the feature dimension of $E^m_T(\cdot)$ to that of CLIP, $d_m$, and $h_m$ and $w_m$ are the height and width of the feature map. The significant heterogeneity of $Y^m_T$ makes directly aligning them with the text inefficient, increasing the demand for training data. Therefore, we only take $Y^m_T$ as references to provide fine-grained, modality-specific features and convert them into candidate keys and values: $K^m, V^m \in \mathbb{R}^{h_m w_m \times d_m}$. Meanwhile, the pre-aligned initial embeddings $Y^m_{CLIP}$ are downsampled to form the queries:

$$Q^m = AvgPool(Y^m_{CLIP}) \in \mathbb{R}^{n'_m \times d_m}, \quad (3)$$

where $AvgPool$ is 1D adaptive average pooling and $n'_m < n_m$ limits the number of queries to a fixed value. The queries $Q^m$ only contain coarse-grained modality features, which are enhanced via:

$$Q^m_{(l)} = SelfAtt(Q^m_{(l-1)}), \quad Q^m_{(l)} = CrossAtt(Q^m_{(l)}, K^m, V^m), \quad Q^m_{(l)} = FFN(Q^m_{(l)}), \quad l = 1, \dots, L, \quad (4)$$

where $Q^m_{(0)} = Q^m$. $SelfAtt(\cdot)$, $CrossAtt(\cdot,\cdot,\cdot)$, and $FFN(\cdot)$ are the self-attention, cross-attention, and feedforward layers in each block of UMIP, respectively, and $L$ is the number of blocks. Our UMIP follows an iterative process to produce discriminative multimodal tokens that are adequately aligned with the text. In each block of UMIP, the coarse-to-fine cross-attention adaptively
identifies the text-aligned modality features from $V^m$ and injects them into the queries $Q^m$ for enhancement. The updated queries are projected into the text space via the feedforward layer, which serves as enhanced queries for the next block. Finally, the multimodal tokens from the last block, $Z^m = MLP(Q^m_{(L)})$, can be aligned with the LLM text space while being sufficiently enriched by modality-specific features. Here, $MLP(\cdot)$ maps the dimension of CLIP (1024) to that of the LLM (4096).

Figure 3: Comparison between UMIP and other projectors: (a) Modality-Specific Projector [28, 27, 31], (b) Universal Projector [30], and (c) Universal Modality-Injection Projector (Ours).

Discussion on prior art. We compare UMIP with state-of-the-art multimodal projectors in Fig. 3. Specifically, most existing works [28, 27, 31] adopt modality-specific encoders and projectors (Fig. 3 (a)), which commonly require substantial ‘modality-text’ data pairs for pre-training. As shown in Fig. 3 (b), OneLLM [30] attempts to handle various modalities via a unified framework that consists of a universal encoder and projector. However, without a dedicated design for capturing heterogeneous spatial features, the universal encoder struggles to obtain sufficiently discriminative multimodal tokens. Different from existing works, UMIP only utilizes the universal encoder to generate initial embeddings for each modality (Fig. 3 (c)). These embeddings are then progressively enhanced by fine-grained, text-aligned features from tailored encoders.

3.3 Training Strategy and Objectives

To effectively train HoloLLM, we propose a two-stage training strategy: (1) pre-training tailored encoders to extract modality-specific features, and (2) fine-tuning HoloLLM to learn discriminative multimodal tokens that are appropriately aligned with the text space.
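Before the training objectives, the UMIP data flow of Eqs. (2)–(4) plus the final projection can be sketched in NumPy. This is a sketch under simplifying assumptions: single-head attention, random weights, and a tanh-linear layer standing in for each trained FFN/MLP.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def umip_block(Q, K, V, W_ffn):
    # Eq. (4): SelfAtt over the queries, then coarse-to-fine CrossAtt
    # into the tailored features' keys/values, then an FFN stand-in.
    Q = attention(Q, Q, Q)
    Q = attention(Q, K, V)
    return np.tanh(Q @ W_ffn)

rng = np.random.default_rng(0)
d_clip, d_llm, n_q, n_kv, L = 1024, 4096, 64, 256, 8
Q = rng.normal(size=(n_q, d_clip))   # coarse queries AvgPool(Y^m_CLIP)
K = rng.normal(size=(n_kv, d_clip))  # keys from tailored features Y^m_T
V = rng.normal(size=(n_kv, d_clip))  # values from tailored features Y^m_T
W_ffn = rng.normal(size=(d_clip, d_clip)) / np.sqrt(d_clip)
W_out = rng.normal(size=(d_clip, d_llm)) / np.sqrt(d_clip)

for _ in range(L):     # L = 8 UMIP blocks
    Q = umip_block(Q, K, V, W_ffn)
Z_m = Q @ W_out        # MLP(.): CLIP dim 1024 -> LLM dim 4096
```

The real UMIP uses learned multi-head attention with trained weights; this sketch only mirrors the iterative query-enhancement data flow and the output shapes.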
During the first stage, a task-specific objective, i.e., the classification loss for the Human Action Recognition (HAR) task, is utilized to pre-train the tailored encoders:

$$\mathcal{L}_1 = CE(Classifier(E^m_T(X^m_i)), c_i), \quad (5)$$

where $E^m_T$ is the tailored encoder for modality $m$, $X^m_i$ is the $i$-th sample in the dataset, and $c_i$ is the corresponding action label. $CE(\cdot,\cdot)$ and $Classifier(\cdot)$ refer to the cross-entropy loss and the classifier that predicts the action label, respectively. After pre-training, all tailored encoders are frozen. The modality-specific tokenizers and the UMIP are then fine-tuned by the combination of task-specific and next-token prediction objectives:

$$\mathcal{L}_2 = CE(Classifier(LLM(Z^m_i, Z^{text}_i)), c_i) + \mathcal{L}_{next}, \quad (6)$$

where $Z^m_i$ and $Z^{text}_i$ denote the multimodal tokens and the instruction text tokens for the $i$-th sample, and $\mathcal{L}_{next}$ is the cross-entropy loss of next-token prediction for both the Action QA and Action Caption tasks.

3.4 Data Curation for Multisensory Language-grounded Benchmarking

To perform real-world tasks in diverse smart home scenarios, embodied agents must understand human behavior from multisensory inputs and engage in language-grounded communication. Therefore, we establish the first multisensory benchmark for evaluating human perception and reasoning capabilities in HoloLLM, encompassing action recognition, question answering (QA), and captioning tasks.

Figure 4: Data curation pipeline for (a) Action question answering (QA) and (b) Action Caption.

However, textual descriptions for sensing modalities are not available in public datasets and online sources. Inspired by other modalities such as point clouds [32] and IMU [33], we select two multimodal human sensing datasets, MM-Fi [11] and XRF55 [10], and refer to the vision modality to generate action QA and caption
data. As shown in Fig. 4, we take the MM-Fi dataset as an example to illustrate our data curation pipeline; more details can be found in Appendix A.

Action QA generation. In this work, Action QA is conditioned on options. To ensure precision and diversity, 5 questions are first annotated by human experts and rewritten by GPT-4o [34], resulting in a list of 15 questions. For a sample from each modality, a question (‘Question’) is randomly sampled from the list. Moreover, all action categories of the MM-Fi dataset, along with the action label of the data sample, are appended to the question to generate the ‘Action List’ (options) and the ‘Answer’. Formally, an Action QA sample can be formulated as: {Question, Action List, Answer}.

Action Caption generation. We propose a human-VLM collaborative pipeline to generate captions for sensing modalities. Specifically, a fixed question (‘Question’: ‘Please give detailed descriptions of human’s action.’) is designed for all caption tasks. Then, we uniformly select a small set of samples across various environments, subjects, and action categories to form the ‘In-context Caption Samples’. For other data samples, LLaVA-Video [26] is adopted to automatically generate the captions (‘Caption’) in an in-context-learning manner. Finally, an Action Caption sample can be formulated as: {Question, Caption}, which is shared among all modalities of a data sample.

4 Experiment

4.1 Experimental Settings

Datasets. We utilize two multimodal human-sensing datasets, MM-Fi [11] and XRF55 [10], with generated textual descriptions. Specifically, MM-Fi consists of 5 modalities: Video (V), Depth images (D), LiDAR (L), mmWave Radar (M), and WiFi-CSI (W). XRF55 also contains 5 modalities: Video (V), Depth images (D), Infrared images (I), RFID signals (R), and WiFi-CSI (W).

Benchmarks.
To comprehensively evaluate various MLLMs across diverse scenarios, we design three experimental settings: (1) Random Split (Random), (2) Cross-Subject Split (CrossSub), and (3) Cross-Environment Split (CrossEnv). Specifically, ‘Random’ involves a random split of all samples with a ratio of 3:1, and ‘CrossSub’ / ‘CrossEnv’ selects samples from non-overlapping human subjects / environments for training and testing. For quantitative evaluation, we use accuracy for Action Recognition and Action QA, and the METEOR [35] metric for Action Caption.

Implementation Details. Following OneLLM [30], we take CLIP ViT-Large pre-trained on LAION [12] to provide initial multimodal embeddings, and LLaMA2-7B [36] as our LLM. For tailored encoders, we take ResNet18 [37] for Vision, Depth, and Infrared, PointNet [38] for LiDAR and mmWave, a 1D Temporal ResNet18 [10] for RFID, and MetaFi [17] for WiFi. The UMIP contains $L = 8$ blocks, with 64 query tokens for Vision, Depth, mmWave, and Infrared, 256 query tokens for LiDAR and WiFi (XRF55), and 16 query tokens for RFID and WiFi (MM-Fi). Please refer to Appendix B for more details on our experimental settings.

Columns: V (Video), D (Depth), M (mmWave), L (LiDAR), W (WiFi).

Random
  Tokenpacker  arXiv’24  Proj   QA: 2.7  2.2  2.6  2.6  1.9  2.4   Caption: 8.9  8.9  8.9  8.9  8.9  8.9
  Honeybee     CVPR’24   Proj   QA: 2.3  2.5  2.0  1.7  2.0  2.1   Caption: 10.0 10.0 10.1 10.3 10.2 10.1
  OneLLM       CVPR’24   Proj   QA: 3.6  3.5  4.7  3.5  4.2  3.9   Caption: 15.5 15.3 13.3 15.9 16.6 15.3
  ImageBind    CVPR’23   Enc    QA: 89.3 76.7 45.8 11.1 8.0  46.2  Caption: 28.4 21.2 18.8 16.2 14.8 19.9
  HoloLLM      -         Proj   QA: 99.8 99.7 95.8 84.2 52.8 86.5  Caption: 30.8 31.1 29.6 27.4 23.0 28.4
CrossSub
  Tokenpacker  arXiv’24  Proj   QA: 3.0  3.3  3.6  3.2  3.5  3.3   Caption: 6.6  6.5  6.6  6.6  6.4  6.5
  Honeybee     CVPR’24   Proj   QA: 1.8  1.8  1.9  2.2  2.0  1.9   Caption: 10.3 10.3 10.3 10.3 10.2 10.3
  OneLLM       CVPR’24   Proj   QA: 3.8  3.4  4.0  2.6  4.6  3.7   Caption: 15.2 14.6 10.3 15.0 16.1 14.2
  ImageBind    CVPR’23   Enc    QA: 76.9 43.3 45.5 6.8  7.7  36.0  Caption: 25.8 21.0 20.9 15.3 16.5 19.9
  HoloLLM      -         Proj   QA: 98.0 98.9 88.0 66.5 8.0  71.9  Caption: 30.6 30.5 29.5 24.9 16.7 26.4
CrossEnv
  Tokenpacker  arXiv’24  Proj   QA: 5.0  5.0  4.7  4.2  4.3  4.6   Caption: 3.9  3.9  3.8  3.8  3.7  3.8
  Honeybee     CVPR’24   Proj   QA: 2.0  1.5  1.5  1.9  1.8  1.7   Caption: 10.4 10.5 10.4 10.6 10.4 10.4
  OneLLM       CVPR’24   Proj   QA: 4.2  8.0  1.1  7.8  4.0  5.0   Caption: 15.5 13.1 2.2  5.1  10.4 9.3
  ImageBind    CVPR’23   Enc    QA: 41.0 5.3  24.0 7.6  5.5  16.7  Caption: 19.4 19.8 17.5 15.0 14.9 17.3
  HoloLLM      -         Proj   QA: 79.5 91.6 61.4 41.4 8.2  56.4  Caption: 25.7 27.5 24.5 19.6 15.9 22.6

Table 1: Evaluation of Human Action QA and Caption tasks on MM-Fi [11] across three settings. The Accuracy (%) and METEOR (%) are adopted for Action QA and Caption, respectively.

Columns: V (Video), D (Depth), I (Infrared), R (RFID), W (WiFi).

Random
  Tokenpacker  arXiv’24  Proj   QA: 1.2  1.2  1.4  1.1  1.2  1.2   Caption: 8.0  7.9  7.7  7.9  7.7  7.8
  Honeybee     CVPR’24   Proj   QA: 1.4  1.6  1.5  1.6  1.5  1.5   Caption: 10.0 10.1 10.3 10.1 10.0 10.1
  OneLLM       CVPR’24   Proj   QA: 2.0  1.9  1.5  1.9  2.3  1.8   Caption: 14.5 13.1 1.7  1.5  2.0  13.3
  ImageBind    CVPR’23   Enc    QA: 62.2 22.2 79.0 5.3  10.0 35.8  Caption: 19.3 13.0 24.3 12.3 12.7 16.3
  HoloLLM      -         Proj   QA: 94.5 92.3 92.6 27.1 11.2 63.5  Caption: 34.2 34.8 34.7 15.5 14.0 26.6
CrossSub
  Tokenpacker  arXiv’24  Proj   QA: 1.2  1.1  1.4  1.2  1.2  1.2   Caption: 9.2  9.1  9.1  9.2  9.3  9.2
  Honeybee     CVPR’24   Proj   QA: 1.3  1.3  1.4  1.5  1.3  1.4   Caption: 9.3  9.3  9.3  9.4  9.3  9.3
  OneLLM       CVPR’24   Proj   QA: 1.9  1.5  2.0  2.3  2.1  2.0   Caption: 13.4 13.5 14.5 9.2  13.1 12.7
  ImageBind    CVPR’23   Enc    QA: 13.3 11.1 20.4 3.8  4.9  10.7  Caption: 15.8 14.6 18.3 12.7 14.5 15.2
  HoloLLM      -         Proj   QA: 44.3 42.1 38.3 3.4  3.6  26.3  Caption: 22.3 23.1 22.8 11.8 13.7 18.7
CrossEnv
  Tokenpacker  arXiv’24  Proj   QA: 1.7  1.4  1.6  1.4  1.5  1.5   Caption: 7.2  7.2  7.2  7.1  7.1  7.2
  Honeybee     CVPR’24   Proj   QA: 1.5  1.4  1.6  1.2  1.1  1.3   Caption: 11.2 11.2 11.0 11.0 11.3 11.1
  OneLLM       CVPR’24   Proj   QA: 1.4  3.2  1.8  1.7  2.3  2.1   Caption: 16.9 6.7  11.5 8.4  13.0 11.3
  ImageBind    CVPR’23   Enc    QA: 4.7  4.9  16.9 2.8  2.6  6.4   Caption: 13.1 14.3 16.9 12.7 12.0 13.8
  HoloLLM      -         Proj   QA: 25.9 8.6  22.1 2.6  4.5  12.8  Caption: 19.8 14.7 17.1 10.8 13.7 15.2

Table 2: Evaluation of Human Action QA and Caption tasks on XRF55 [10] across three settings. The Accuracy (%) and METEOR (%) are adopted for Action QA and Caption, respectively.

4.2 Main Results

We compare HoloLLM with state-of-the-art MLLMs across three tasks: Action QA, Action Caption, and Action Recognition. Specifically, we divide existing methods into Encoder-based (ImageBind [31]) and Projector-based (Honeybee [6], Tokenpacker [7], OneLLM [30]) methods. The Encoder-based methods aim to align multimodal embeddings from various encoders via contrastive learning, while the Projector-based methods focus on designing effective projectors for alignment. For a fair comparison, all methods are fine-tuned on the MM-Fi and XRF55 datasets in the same way as our HoloLLM.

Action QA and Action Caption. The results on the MM-Fi and XRF55 datasets are summarized in Tab. 1 and Tab. 2. For all three benchmarks, HoloLLM outperforms other MLLMs by a large margin on almost all modalities (indicated in bold). Specifically, Tokenpacker and Honeybee only use a modality-shared projector, which cannot effectively capture modality-specific features aligned with text. By introducing learnable queries for each modality to the projector, OneLLM and ImageBind can better align multimodal embeddings with the text space while exploring the modality features.

Figure 5: Evaluation of Human Action Recognition on MM-Fi [17] and XRF55 [10] across three benchmarks in terms of Accuracy (Better to zoom in).

Moreover, ImageBind employs dedicated encoders for depth and infrared modalities, along with a stronger vision encoder (CLIP ViT-H), enabling more effective capture of modality-specific features and improved performance. Instead of using simple learnable queries, HoloLLM designs tailored encoders to adequately capture fine-grained, modality-specific features, which are adaptively injected into the aligned multimodal tokens via UMIP. The results demonstrate that multimodal alignment and discriminability are equally critical for human perception and reasoning based on sensing modalities.
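For reference, the three data-split settings of Sec. 4.1 can be sketched as follows (the field names and the unshuffled 3:1 cut are illustrative simplifications of the actual protocol):

```python
def split_samples(samples, setting, held_out=()):
    # 'Random': split all samples with a 3:1 train/test ratio.
    # 'CrossSub'/'CrossEnv': hold out non-overlapping subjects/environments
    # so that test subjects/environments are never seen during training.
    if setting == "Random":
        cut = (3 * len(samples)) // 4
        return samples[:cut], samples[cut:]
    key = "subject" if setting == "CrossSub" else "env"
    held = set(held_out)
    train = [s for s in samples if s[key] not in held]
    test = [s for s in samples if s[key] in held]
    return train, test

data = [{"subject": i % 4, "env": i % 2} for i in range(8)]
train, test = split_samples(data, "CrossSub", held_out={3})
```

The cross-subject and cross-environment splits are what make the latter two settings harder: the model must generalize to signal distributions it has never observed.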
Action Recognition. As shown in Fig. 5, HoloLLM outperforms other MLLMs on Action Recognition across most modalities. For some sensing modalities, such as WiFi and RFID, performance on par with other MLLMs is observed under the “CrossSub” or “CrossEnv” settings. Some sensing modalities are highly sensitive to different subjects or environments, making it challenging to achieve cross-subject or cross-environment generalization. More efforts should be made toward building large-scale sensing datasets to further promote their generalization capability.

MM-Fi (columns V, D, M, L, W, Avg):
                 Action Recognition                  Action QA                          Action Caption
Baseline          9.7  9.5  18.6  6.5  7.4  10.3 |  3.9  3.3  14.4  4.8  4.7   6.2 | 15.3 15.1 17.4 15.9 16.0 15.9
+TailorEncoder   83.1 91.8  58.2 50.4  8.8  58.5 | 71.8 73.4  48.4 28.0 11.3  46.6 | 25.9 25.4 24.0 20.3 13.8 21.9
+UMIP            80.6 93.2  61.0 53.0  9.6  59.5 | 79.5 91.6  61.4 41.4  8.2  56.4 | 25.7 27.5 24.5 19.6 15.9 22.6

XRF55 (columns V, D, I, R, W, Avg):
Baseline          4.4  3.8  18.2  2.6  3.4   6.5 |  5.7  2.5   2.4  3.2  3.8   3.5 | 12.0 12.1 13.0 11.1 13.5 12.3
+TailorEncoder   27.5  9.1  31.2  1.7  2.5  14.4 | 23.5  8.0  23.0  1.7  2.7  11.8 | 20.2 11.7 20.1 10.4 12.3 14.9
+UMIP            28.9 12.4  28.3  1.7  3.7  15.0 | 25.9  8.6  22.1  2.6  4.5  12.8 | 19.8 14.7 17.1 10.8 13.7 15.2

Table 3: Ablation results. We conduct experiments on both MM-Fi and XRF55 under the “Cross-Environment” setting to show the contribution of the key components.

4.3 Ablation Results

We conduct the ablation study on both the MM-Fi and XRF55 datasets under the “CrossEnv” setting to show the effectiveness and generalization capability of the key components. Specifically, ‘Baseline’ only adopts the CLIP ViT-L to extract multimodal embeddings and utilizes a Q-Former [8] with modality-specific learnable queries as the projector. Following OneLLM, the learnable queries have size $\mathbb{R}^{30 \times 1024}$. The results are summarized in Tab. 3.

Ablation Study on Tailored Encoders. To evaluate the effectiveness of tailored encoders, we replace the CLIP ViT-L in the ‘Baseline’ with them and use the same Q-Former for multimodal alignment (‘+TailorEncoder’). As shown in Tab. 3, introducing tailored encoders significantly improves performance across all modalities on both datasets. This demonstrates that fine-grained, modality-specific discriminative features are crucial in human perception and reasoning tasks, and tailored encoders can capture these features more effectively than a modality-shared unified encoder.

Ablation Study on UMIP. Furthermore, applying UMIP (‘+UMIP’) leads to performance improvement, especially for Action QA, which requires a deeper understanding of language-based human instructions and action categories. The results indicate that the multimodal tokens generated by UMIP achieve better alignment with text, thereby enhancing human action reasoning. Moreover, by adaptively enhancing the pre-aligned initial embeddings using fine-grained, text-aligned modality features, UMIP can provide multimodal tokens with stronger discriminability.

4.4 Qualitative Results

To intuitively show the effectiveness of HoloLLM, we visualize the aligned multimodal and text tokens of different models in Fig.
6 and show the qualitative results for sensing modalities in Fig. 7.

Visualization Analysis of Multimodal and Text Tokens. We randomly select 5 action categories from the MM-Fi dataset and visualize the aligned multimodal tokens generated by ‘Baseline’, OneLLM [30], and HoloLLM. As shown in Fig. 6 (a), only the multimodal tokens generated by HoloLLM are well grouped based on action categories across all modalities. This shows that tailored encoders are essential for capturing modality-specific discriminative features, while HoloLLM effectively preserves them via UMIP. Besides, we present the tokens of two sensing modalities (mmWave and WiFi) and the tokens of text captions for two action categories in Fig. 6 (b). Compared with only adopting the tailored encoders and Q-Former (‘HoloLLM w/o UMIP’), the multimodal tokens generated by HoloLLM better align with the ground-truth text captions for different action categories. This intuitively shows that our UMIP helps achieve better multimodal alignment with text.

Figure 6: Visualization results by tSNE [39]. (a) Visualization of aligned tokens from 5 action categories (denoted by different colors) generated by Baseline, OneLLM [30], and HoloLLM for ‘Video’ and ‘mmWave’ modalities. (b) Visualization of multimodal tokens from 2 action categories (denoted by different colors) for ‘mmWave’ (circles), ‘WiFi’ (pentagrams), and ‘Text’ (triangles) modalities generated by HoloLLM without or with UMIP.

Figure 7: Qualitative results on sensing modalities. All examples are from the testing set of the MM-Fi and XRF55 datasets under the “Cross-Environment” setting.
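The "well grouped" observation can also be quantified with a simple intra- vs inter-class distance ratio on the token embeddings. The sketch below uses synthetic data as a stand-in for real tokens; this proxy metric is an illustration, not the paper's analysis (t-SNE itself is only used for visualization):

```python
import numpy as np

def grouping_score(tokens, labels):
    # Mean inter-class centroid distance divided by mean intra-class
    # spread; higher means better-separated action categories.
    labels = np.asarray(labels)
    classes = np.unique(labels)
    centroids = np.stack([tokens[labels == c].mean(axis=0) for c in classes])
    intra = np.mean([
        np.linalg.norm(tokens[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    diffs = centroids[:, None, :] - centroids[None, :, :]
    inter = np.linalg.norm(diffs, axis=-1)
    inter = inter[np.triu_indices(len(classes), k=1)].mean()
    return inter / intra

rng = np.random.default_rng(0)
# synthetic "aligned tokens" for 2 well-separated action categories
a = rng.normal(loc=0.0, size=(50, 8))
b = rng.normal(loc=5.0, size=(50, 8))
tokens = np.vstack([a, b])
labels = [0] * 50 + [1] * 50
score = grouping_score(tokens, labels)  # >> 1 for well-grouped tokens
```

A score well above 1 indicates that class centroids are far apart relative to the within-class spread, which is what the t-SNE plots show qualitatively.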
Qualitative Results. We give some qualitative results of HoloLLM in the “CrossEnv” setting in Fig. 7. These results show that HoloLLM can perform Action QA and Action Caption tasks across sensing modalities in diverse environments. More results and analysis are detailed in Appendix C.

5 Conclusion and Limitations

In this work, we present HoloLLM, an MLLM that integrates rare but powerful sensing modalities to enable seamless human perception and reasoning across heterogeneous real-world scenarios. Based on limited data, we propose the Universal Modality-Injection Projector (UMIP) to efficiently align sensing modalities with the text via only minimal fine-tuning. Besides, the modality-specific discriminative features are adequately explored by tailored encoders and adaptively injected into the aligned multimodal tokens through UMIP. Thanks to UMIP, HoloLLM shows significant improvements over other state-of-the-art MLLMs on our multisensory language-grounded benchmarks.

Limitation and Future Work. As the first attempt at human perception and reasoning based on sensing modalities, our work is limited to human action recognition, question answering, and captioning. However, MLLMs need to support more tasks to meet the requirements of real-world applications, such as task planning and agent action generation, which we will explore in future work.

Acknowledgments and Disclosure of Funding

This work is supported by a Start-up Grant from Nanyang Technological University and jointly funded by the Singapore Ministry of Education (MOE) under a Tier-1 research grant.

References

[1] Jianfei Yang, Han Zou, Hao Jiang, and Lihua Xie. Device-free occupant activity sensing using wifi-enabled iot devices for smart homes. IEEE Internet of Things Journal, 5(5):3991–4002, 2018.
[2] Jianfei Yang, Xinyan Chen, Han Zou, Dazhuo Wang, Qianwen Xu, and Lihua Xie. Efficientfi: Toward large-scale lightweight wifi sensing via csi compression.
IEEE Internet of Things Journal, 9(15):13086–13095, 2022.
[3] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan P Foster, Pannag R Sanketi, Quan Vuong, et al. Openvla: An open-source vision-language-action model. In Annual Conference on Robot Learning, 2024.
[4] Manling Li, Shiyu Zhao, Qineng Wang, Kangrui Wang, Yu Zhou, Sanjana Srivastava, Cem Gokmen, Tony Lee, Erran Li Li, Ruohan Zhang, et al. Embodied agent interface: Benchmarking llms for embodied decision making. Advances in Neural Information Processing Systems (NeurIPS), 37:100428–100534, 2024.
[5] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in Neural Information Processing Systems (NeurIPS), 36:34892–34916, 2023.
[6] Junbum Cha, Wooyoung Kang, Jonghwan Mun, and Byungseok Roh. Honeybee: Locality-enhanced projector for multimodal llm. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13817–13827, 2024.
[7] Wentong Li, Yuqian Yuan, Jian Liu, Dongqi Tang, Song Wang, Jie Qin, Jianke Zhu, and Lei Zhang. Tokenpacker: Efficient visual projector for multimodal llm. arXiv preprint arXiv:2407.02392, 2024.
[8] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning (ICML), pages 19730–19742, 2023.
[9] Jialian Li, Jingyi Zhang, Zhiyong Wang, Siqi Shen, Chenglu Wen, Yuexin Ma, Lan Xu, Jingyi Yu, and Cheng Wang. Lidarcap: Long-range marker-less 3d human motion capture with lidar point clouds. In
https://arxiv.org/abs/2505.17645v1
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20502–20512, 2022.
[10] Fei Wang, Yizhe Lv, Mengdie Zhu, Han Ding, and Jinsong Han. Xrf55: A radio frequency dataset for human indoor action analysis. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(1):1–34, 2024.
[11] Jianfei Yang, He Huang, Yunjiao Zhou, Xinyan Chen, Yuecong Xu, Shenghai Yuan, Han Zou, Chris Xiaoxuan Lu, and Lihua Xie. Mm-fi: Multi-modal non-intrusive 4d human dataset for versatile wireless sensing. Advances in Neural Information Processing Systems (NeurIPS), 36, 2024.
[12] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems (NeurIPS), 35:25278–25294, 2022.
[13] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In IEEE/CVF International Conference on Computer Vision (ICCV), pages 1728–1738, 2021.
[14] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Annual Meeting of the Association for Computational Linguistics, pages 2556–2565, 2018.
[15] Yunjiao Zhou, He Huang, Shenghai Yuan, Han Zou, Lihua Xie, and Jianfei Yang. Metafi++: Wifi-enabled transformer-based human pose estimation for metaverse avatar simulation. IEEE Internet of Things Journal, 10(16):14128–14136, 2023.
[16] Xinyan Chen and Jianfei Yang. X-fi: A modality-invariant foundation model for multimodal human sensing. arXiv preprint arXiv:2410.10167, 2024.
[17] Jianfei Yang, Xinyan Chen, Han Zou, Chris Xiaoxuan Lu, Dazhuo Wang, Sumei Sun, and Lihua Xie. Sensefi: A library and benchmark on deep-learning-empowered wifi human sensing.
Patterns, 4(3), 2023.
[18] Zhongang Cai, Daxuan Ren, Ailing Zeng, Zhengyu Lin, Tao Yu, Wenjia Wang, Xiangyu Fan, Yang Gao, Yifan Yu, Liang Pan, et al. Humman: Multi-modal 4d human dataset for versatile sensing and modeling. In European Conference on Computer Vision (ECCV), pages 557–577, 2022.
[19] Yiming Ren, Xiao Han, Chengfeng Zhao, Jingya Wang, Lan Xu, Jingyi Yu, and Yuexin Ma. Livehps: Lidar-based scene-level human pose and shape estimation in free environment. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1281–1291, 2024.
[20] Yuheng Wang, Haipeng Liu, Kening Cui, Anfu Zhou, Wensheng Li, and Huadong Ma. m-activity: Accurate and real-time human activity recognition via millimeter wave radar. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8298–8302. IEEE, 2021.
[21] Sizhe An and Umit Y Ogras. Fast and scalable human pose estimation using mmwave point cloud. In ACM/IEEE Design Automation Conference, pages 889–894, 2022.
[22] Jingxiao Zheng, Xinwei Shi, Alexander Gorban, Junhua Mao, Yang Song, Charles R Qi, Ting Liu, Visesh Chari, Andre Cornman, Yin Zhou, et al. Multi-modal 3d human pose estimation with 2d weak supervision in autonomous driving. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4478–4487, 2022.
[23] Anjun Chen, Xiangyu Wang, Kun Shi, Shaohao Zhu, Bin Fang, Yingfeng Chen, Jiming Chen, Yuchi Huo, and Qi Ye. Immfusion: Robust mmwave-rgb fusion
for 3d human body reconstruction in all weather conditions. In IEEE International Conference on Robotics and Automation (ICRA), pages 2752–2758, 2023.
[24] Haoxuan Qu, Yujun Cai, and Jun Liu. Llms are good action recognizers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18395–18406, 2024.
[25] Yunjiao Zhou, Jianfei Yang, Han Zou, and Lihua Xie. Tent: Connect language models with iot sensors for zero-shot activity recognition. arXiv preprint arXiv:2311.08245, 2023.
[26] Yuanhan Zhang, Jinming Wu, Wei Li, Bo Li, Zejun Ma, Ziwei Liu, and Chunyuan Li. Video instruction tuning with synthetic data. arXiv preprint arXiv:2410.02713, 2024.
[27] Zijia Zhao, Longteng Guo, Tongtian Yue, Sihan Chen, Shuai Shao, Xinxin Zhu, Zehuan Yuan, and Jing Liu. Chatbridge: Bridging modalities with large language model as a language catalyst. arXiv preprint arXiv:2305.16103, 2023.
[28] Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm: Empowering large language models to understand point clouds. In European Conference on Computer Vision (ECCV), pages 131–147, 2024.
[29] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), pages 8748–8763, 2021.
[30] Jiaming Han, Kaixiong Gong, Yiyuan Zhang, Jiaqi Wang, Kaipeng Zhang, Dahua Lin, Yu Qiao, Peng Gao, and Xiangyu Yue. Onellm: One framework to align all modalities with language. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 26584–26595, 2024.
[31] Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. Imagebind: One embedding space to bind them all.
In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 15180–15190, 2023.
[32] Tiange Luo, Chris Rockwell, Honglak Lee, and Justin Johnson. Scalable 3d captioning with pretrained models. Advances in Neural Information Processing Systems (NeurIPS), 36, 2024.
[33] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18995–19012, 2022.
[34] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[35] Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, 2005.
[36] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[37] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
[38] Charles R Qi, Hao Su, Kaichun
Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 652–660, 2017.
[39] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(11), 2008.

Appendix

A Technical Details on Data Curation
  A.1 Action QA generation
  A.2 Action Caption generation
B Details on Experimental Settings
  B.1 Datasets
  B.2 Benchmarks
  B.3 Training Details
C Additional Experiments
  C.1 Quantitative Results for Action Recognition
  C.2 Complete Qualitative Results and Analysis
  C.3 Naive Multimodal Fusion for HoloLLM
D Broader Impacts

A Technical Details on Data Curation

In this section, we elaborate on the data generation process for action question answering (QA) and captions.

A.1 Action QA generation

We formulate Action QA as a question answering task with options. To ensure precision, two seed questions are first annotated by a human expert, as shown in Fig. 8. We then adopt GPT-4o [34] to rewrite the seed questions to enhance diversity, extending them to a list of 15 questions. The utilized prompt and 5 examples of rewritten questions are listed in Fig. 9.

Figure 8: Two seed questions for action QA annotated by human experts.

Afterwards, for each data sample from a certain modality, a question is randomly selected from the list to construct the QA question (‘Question’). Furthermore, all action categories from the sensing dataset, along with the action label of the data sample, are appended to the question to generate the Action List (options) and the corresponding Answer. As shown in Fig. 10, a typical Action QA data sample can be formulated as {Question, Action List, Answer}, along with the input videos from diverse modalities (the vision modality is shown in Fig. 10).
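The sample-construction procedure above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the two questions in QUESTION_LIST are placeholders for the 15 GPT-4o-rewritten questions, and the function name is hypothetical.

```python
import random

# Illustrative stand-ins for the 15 GPT-4o-rewritten questions used in the paper.
QUESTION_LIST = [
    "What action is the person performing?",
    "Which activity does the subject carry out in this sequence?",
]

def build_action_qa_sample(action_label, all_categories, rng=random):
    """Build one {Question, Action List, Answer} sample for a data sample.

    A question is drawn at random from the rewritten-question list, all
    dataset action categories form the options, and the sample's label
    is the answer.
    """
    question = rng.choice(QUESTION_LIST)   # randomly selected rewritten question
    action_list = list(all_categories)     # all action categories as options
    assert action_label in action_list
    return {"Question": question, "Action List": action_list, "Answer": action_label}

sample = build_action_qa_sample("waving", ["waving", "sitting", "jumping"])
```

At train and test time, the sample's videos together with the Question and Action List would be fed to the model, which must predict the Answer.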
During training and testing, the input videos combined with the Question and Action List are passed to HoloLLM to predict the Answer.

A.2 Action Caption generation

Textual descriptions of human actions are not available in either the MM-Fi or the XRF55 dataset. Therefore, the captions cannot be directly generated using in-context learning. To this end, we propose a human-VLM collaborative pipeline for caption generation. Specifically, we evenly select 108 / 110 samples from the MM-Fi / XRF55 dataset across various environments, subjects, and action categories to form seed data samples. These seed samples are captioned by human experts, and each caption is rewritten by GPT-4o as detailed in Appendix A.1 to enhance diversity, resulting in 5 captions for each seed sample.

Figure 9: Prompting GPT-4o [34] to rewrite the seed questions to enhance diversity. We provide 5 questions rewritten by GPT-4o as examples.

Figure 10: A typical data sample for the Action QA task.

We then transform a seed sample into an in-context caption sample {Question, Video, Caption} by (1) adding a fixed question: “Please give detailed descriptions of human’s action.”, (2) attaching the video sequence of the seed sample, and (3) randomly selecting a caption. A typical in-context caption sample
is shown in Fig. 11 (a). Consequently, for the remaining data samples in both datasets, we adopt LLaVA-Video [26] to automatically generate captions through in-context learning. As illustrated in Fig. 12, for a specific data sample s, we first build the prompting message by progressively integrating the ‘system prompt’ and an in-context caption sample of the same category as s. The in-context caption sample contains the input sample[‘video’] with sample[‘question’] and the output sample[‘response’]. Then, query[‘video’] and query[‘question’] are appended to the message as the sample to be captioned. The complete prompting message is passed to LLaVA-Video to obtain the Caption. As shown in Fig. 11 (b), a data sample for the Action Caption task can be formulated as {Question, Caption}, which is shared across all modalities. During training and testing, the input videos combined with the Question are passed to HoloLLM to generate the Caption.

Figure 11: (a) A human-annotated in-context caption sample. (b) A typical data sample for the Action Caption task.

Figure 12: Prompting messages passed to LLaVA-Video [26] to automatically generate captions for sensing data. Human-annotated in-context caption samples are included in the prompt, where each sample has the input sample[‘video’] with sample[‘questions’] and the output sample[‘response’].

B Details on Experimental Settings

In this section, we provide more details on the experimental settings. Specifically, we first present more details on the datasets in Appendix B.1, followed by detailed statistics for the three experimental settings in Appendix B.2. Finally, the training details of HoloLLM are elaborated in Appendix B.3.

B.1 Datasets

MM-Fi. MM-Fi consists of 27 action categories and 40 human subjects from 4 different environments. For each subject, synchronized sequences are provided in 5 modalities: Video (V), Depth images (D), LiDAR point clouds (L), mmWave point clouds (M), and WiFi-CSI (W).
In total, there are 16,448 multimodal sequences in the MM-Fi dataset, and 5 frames are evenly sampled from each sequence to form a data sample.

XRF55. For XRF55, we only consider the human subjects with the video modality, resulting in 19 human subjects from 4 different environments with 55 action categories. For each subject, synchronized sequences are provided in 5 modalities: Video (V), Depth images (D), Infrared images (I), RFID signals (R), and WiFi-CSI (W). In total, there are 19,800 multimodal sequences in the XRF55 dataset, and 10 frames are evenly sampled from each sequence to form a data sample.

B.2 Benchmarks

We design three experimental settings: (1) Random Split (Random), (2) Cross-Subject Split (CrossSub), and (3) Cross-Environment Split (CrossEnv). Specifically, ‘Random’ involves a random split of all samples with a ratio of 3:1, while ‘CrossSub’ / ‘CrossEnv’ selects samples from nonoverlapping human subjects / environments for training and testing.

Settings    MM-Fi Train  MM-Fi Test  XRF55 Train  XRF55 Test
Random         12,336       4,112       14,850       4,950
CrossSub       11,657       4,791       14,300       5,500
CrossEnv       12,565       3,883       16,500       3,300

Table 4: The detailed statistics of the three experimental settings of HoloLLM.

Detailed statistics on the sizes of the training and testing sets for the three experimental settings are summarized
in Tab. 4.

B.3 Training Details

The AdamW optimizer with β1 = 0.9, β2 = 0.95, and a weight decay of 0.1 is adopted in our training. For stage one, we utilize the training set of the corresponding experimental setting (Random, CrossSub, CrossEnv) to pretrain the tailored encoders for 120 epochs on a single A100 GPU. The learning rate is initialized to 0.1 with a linear warmup strategy for 10 epochs, and then decayed at the 60th and 100th epochs with a decay factor of 0.1. For stage two, we train HoloLLM on 2 A100 GPUs for 5 epochs. We set gradient accumulation to 4 iterations, forming an effective batch size of 64 / 48 for the MM-Fi / XRF55 datasets, respectively. Following OneLLM [30], a linear warmup strategy is utilized for the first 2K iterations with a maximum learning rate of 2e-5.

Settings  Models       Sources   Types | MM-Fi: V    D     M     L     W    Avg | XRF55: V    D     I     R     W    Avg
Random    Tokenpacker  arXiv'24  Proj  |  15.6  14.5  32.8  18.9  11.5  18.7   |   9.2   8.2   7.1   2.2   6.5   6.6
          Honeybee     CVPR'24   Proj  |   9.0  11.3  27.8  12.6  10.3  14.2   |   2.9   4.3   3.8   1.9   7.4   4.0
          OneLLM       CVPR'24   Proj  |   9.0   9.0  16.2   6.1   9.0   9.9   |   2.1   2.0   1.5   1.9   2.3   2.0
          ImageBind    CVPR'23   Enc   |  99.5  96.8  63.7  13.2  11.7  57.0   |  90.8  59.1  93.3  14.6  51.1  61.8
          HoloLLM      -         Proj  |  99.8  99.7  97.3  92.8  83.5  86.5   |  98.0  96.9  97.5  39.1  37.7  73.8
CrossSub  Tokenpacker  arXiv'24  Proj  |  24.1  14.9  33.9  12.9  11.1  19.4   |   8.6   8.9   6.3   2.8   4.8   6.3
          Honeybee     CVPR'24   Proj  |  11.6   9.3  27.2  13.9   8.3  14.1   |   3.0   4.2   4.3   2.4   4.9   3.8
          OneLLM       CVPR'24   Proj  |   8.3   8.3  18.9   6.6   8.3  10.1   |   2.0   1.8   1.8   2.2   2.6   2.1
          ImageBind    CVPR'23   Enc   |  85.7  70.7  56.9  17.7  11.0  48.4   |  18.6  14.4  22.0   4.2   5.7  13.0
          HoloLLM      -         Proj  |  98.4  99.0  93.8  76.6   9.3  75.4   |  46.4  47.8  42.2   4.4   4.5  29.1
CrossEnv  Tokenpacker  arXiv'24  Proj  |  15.0   9.4  20.2  11.7  10.7  13.4   |   9.8  10.1   6.2   2.0   4.5   6.5
          Honeybee     CVPR'24   Proj  |  13.5  12.9  26.7  12.8  11.2  15.4   |   8.9   4.9   3.1   1.4   6.7   5.0
          OneLLM       CVPR'24   Proj  |   9.5   9.5   5.6   6.8   2.4   6.8   |   1.5   3.5   1.8   1.9   2.3   2.2
          ImageBind    CVPR'23   Enc   |  45.7   8.0  26.8   8.6   9.5  19.7   |   6.7   6.8  21.1   2.5   2.5   7.9
          HoloLLM      -         Proj  |  80.6  93.2  61.0  53.0   9.5  59.5   |  28.9  12.4  28.3   1.7   2.5  14.8

Table 5: Evaluation of Human Action Recognition on MM-Fi [11] and XRF55 [10] across three settings. Accuracy (%) is reported.

C Additional Experiments

In this section, we present the quantitative results for Action Recognition in Appendix C.1, along with complete qualitative results and a comprehensive analysis for all modalities in Appendix C.2. Additionally, the results of a naive multimodal fusion strategy are provided in Appendix C.3.

Figure 13: Qualitative results on sensing modalities. All examples are from the testing set of the MM-Fi and XRF55 datasets under the “Cross-Environment” setting.

C.1 Quantitative Results for Action Recognition

We provide the quantitative results corresponding to Fig. 5 for Action Recognition in Tab. 5. As discussed in Sec. 4.2, HoloLLM is superior to other MLLMs on
action recognition tasks for almost all modalities. For certain sensing modalities, however, HoloLLM performs only comparably to the baselines under the “CrossSub” and “CrossEnv” settings. In fact, some sensing modalities are sensitive to different subjects and scenarios. Enhancing the generalization capability of sensing modalities toward diverse subjects and environments is therefore an important topic for future research.

C.2 Complete Qualitative Results and Analysis

We show the complete qualitative results of HoloLLM under the “CrossEnv” setting for both common and sensing modalities in Fig. 13. These results show that HoloLLM can perform Action QA and Action Caption tasks across multiple modalities in diverse environments. Moreover, the white dashed circles in Fig. 13 (b) and (c) illustrate that HoloLLM can capture fine-grained, modality-specific action information and reason over it to identify the correct action category.

C.3 Naive Multimodal Fusion for HoloLLM

We conduct a preliminary exploration of HoloLLM’s multimodal fusion capability using a naive strategy. Specifically, aligned multimodal tokens generated by UMIP are directly concatenated to achieve fusion. Multimodal reasoning is performed by prepending the fused tokens to the human instructions and passing them to the LLM. For the MM-Fi and XRF55 datasets, we consider the Vision (V), mmWave (M), WiFi (W) and Vision (V), Infrared (I), WiFi (W) modalities, respectively. The results are summarized in Tab. 6. They show that naive multimodal fusion can enhance action recognition performance for certain modality combinations (highlighted in bold). However, for the action QA and captioning tasks, naive multimodal fusion fails to improve performance.
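The naive fusion strategy described in Appendix C.3 can be sketched as follows. This is a toy illustration with NumPy arrays standing in for the UMIP-aligned token sequences; the function name and shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def naive_fuse(modality_tokens, instruction_tokens):
    """Naive multimodal fusion: concatenate the aligned token sequences
    of all modalities, then prepend the fused tokens to the instruction
    tokens before passing the result to the LLM.

    Each array has shape (num_tokens_i, hidden_dim); hidden_dim must match.
    """
    fused = np.concatenate(modality_tokens, axis=0)             # direct concatenation
    return np.concatenate([fused, instruction_tokens], axis=0)  # prepend to instructions

# Toy shapes: vision (4 tokens), mmWave (3), WiFi (2), instruction (5), hidden dim 8
d = 8
v, m, w = np.zeros((4, d)), np.zeros((3, d)), np.zeros((2, d))
instr = np.zeros((5, d))
out = naive_fuse([v, m, w], instr)   # shape (14, 8)
```

Because the token sequences are simply stacked, the LLM sees all modalities but receives no explicit signal about which tokens are complementary and which are redundant, which is consistent with the mixed results reported above.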
More advanced multimodal fusion strategies, which leverage complementary information across modalities while reducing redundancy, can be developed to enhance human perception and reasoning.

Modality  MM-Fi: Recognition   QA    Caption | Modality  XRF55: Recognition   QA    Caption
V                 80.6         79.5   25.7   | V                 28.9         25.9   19.8
M                 61.0         61.4   24.5   | I                 28.3         22.1   17.1
W                  9.5          8.2   15.9   | W                  2.5          4.5   13.7
V+M               84.6         66.4   25.0   | V+I               34.3         21.5   19.2
V+W               80.9         52.1   23.7   | V+W               28.3         13.9   18.7
M+W               61.0         57.6   22.9   | I+W               27.2          9.2   16.1
V+M+W             86.1         69.1   24.4   | V+I+W             29.7         19.2   18.9

Table 6: Results of naive multimodal fusion on MM-Fi and XRF55 under the “Cross-Environment” setting.

D Broader Impacts

This work introduces HoloLLM, an MLLM that achieves seamless human perception and reasoning by integrating sensing modalities. HoloLLM establishes a multisensory foundation model, which is beneficial for developing embodied agents applicable in diverse real-world scenarios, including low-light environments, occlusions, and privacy-sensitive settings. However, HoloLLM may suffer from concerns similar to those of other MLLMs, such as hallucinated or meaningless outputs (especially for some sensing modalities under cross-subject or cross-environment settings), biases inherited from base models, and energy consumption due to its large-scale parameters. This raises important research topics such as enhancing the generalization capability of sensing modalities, aligning the base model with human intention, and efficiently pruning the foundation model. Despite these challenges, the release of HoloLLM would be beneficial, as it would foster further development of embodied AI.
EVADE: Multimodal Benchmark for Evasive Content Detection in E-Commerce Applications

Ancheng Xu1,2∗, Zhihao Yang2,3∗, Jingpeng Li5, Guanghu Yuan1, Longze Chen1,2, Liang Yan5, Jiehui Zhou5, Zhen Qin5, Hengyun Chang5, Hamid Alinejad-Rokny4, Bo Zheng5†, Min Yang1†

1Shenzhen Key Laboratory for High Performance Data Mining, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
2University of Chinese Academy of Sciences, 3Tongji University
4School of Biomedical Engineering, UNSW Sydney
5Alibaba Group
{ac.xu, lz.chen2, min.yang}@siat.ac.cn, zh.yang30@outlook.com
yuangh@mail.ustc.edu.cn, shucai.ljp@taobao.com, bozheng@alibaba-inc.com

Abstract

E-commerce platforms increasingly rely on Large Language Models (LLMs) and Vision–Language Models (VLMs) to detect illicit or misleading product content. However, these models remain vulnerable to evasive content: inputs (text or images) that superficially comply with platform policies while covertly conveying prohibited claims. Unlike traditional adversarial attacks that induce overt failures, evasive content exploits ambiguity and context, making it far harder to detect. Existing robustness benchmarks provide little guidance for this demanding, real-world challenge. We introduce EVADE, the first expert-curated, Chinese, multimodal benchmark specifically designed to evaluate foundation models on evasive content detection in e-commerce. The dataset contains 2,833 annotated text samples and 13,961 images spanning six demanding product categories, including body shaping, height growth, and health supplements. Two complementary tasks assess distinct capabilities: Single-Violation, which probes fine-grained reasoning under short prompts, and All-in-One, which tests long-context reasoning by merging overlapping policy rules into unified instructions.
Notably, the All-in-One setting significantly narrows the performance gap between partial and full-match accuracy, suggesting that clearer rule definitions improve alignment between human and model judgment. We benchmark 26 mainstream LLMs and VLMs and observe substantial performance gaps: even state-of-the-art models frequently misclassify evasive samples. By releasing EVADE and strong baselines, we provide the first rigorous standard for evaluating evasive-content detection, expose fundamental limitations in current multimodal reasoning, and lay the groundwork for safer and more transparent content moderation systems in e-commerce. The dataset is publicly available at: https://huggingface.co/datasets/koenshen/EVADE-Bench.

*Equal contribution. †Corresponding authors.
Preprint. arXiv:2505.17654v1 [cs.CL] 23 May 2025

1 Introduction

In recent years, Large Language Models [42,44] and Vision Language Models (VLMs) [19,20] have made significant progress across various fields. These models have gained widespread attention for their applications in natural language processing, image recognition, and multimodal tasks,
https://arxiv.org/abs/2505.17654v1
and continue to drive technological advancements across industries [25,32,35]. Particularly in the e-commerce domain, these models have been extensively applied to tasks such as product search, recommendation, and content moderation [17,38]. However, when confronted with the task of Evasive Content Detection (ECD)—identifying text or image content that has been deliberately altered to circumvent platform rules while still conveying misleading information—they exhibit significant limitations in performance.

Figure 1: Illustration of an EVADE sample. [The figure pairs text and image inputs from six product categories (body shaping, women’s health, height growth, health supplements, weight loss, men’s health) with the adversarial operation applied (block, confuse, cut, pixelate, watermark, extract), the ground-truth labels, and GPT-4o / o1-mini predictions graded as Full Match, Partial Match, or Mismatch.]

The task of ECD represents an adversarial dynamic between sellers and platform policies, which differs from traditional adversarial attacks [15,46]. Conventional adversarial attacks typically manipulate inputs (e.g., tiny pixel noise or prompt injections) to induce harmful or incorrect model outputs [8,21].
In contrast, the adversarial nature of ECD lies at the data level. For instance, sellers may obscure prohibited health claims using euphemistic language or crop images to conceal policy-violating details. When such content escapes detection, platforms face regulatory fines, lawsuits, fraudulent transactions, and reputational damage—economic losses that rival those of technical security breaches [33]. Merchants often try to boost product sales by exaggerating claims, using ambiguous language, or distorting images to attract consumers and bypass platform content review. Platforms, in contrast, must ensure fair transactions and must identify and regulate misleading product promotions as effectively as possible. In this context, merchants continuously attempt to exploit loopholes in advertising laws by using misspellings, slang, emojis, and mosaics to modify text and images, avoiding the platform’s automatic scrutiny. Thus, adversarial issues in the e-commerce sector are even more pronounced. Despite these violations, evasive content detection remains underexplored. Current models are particularly limited for two key reasons. First, LLMs and VLMs continue to hallucinate [6,16], mis-follow instructions [24,27], and struggle with long or multimodal contexts [7,40], making it difficult to recognize nuanced deception. Second, real-world e-commerce regulations evolve rapidly, and human annotations are often inconsistent, leading to noisy and ambiguous training data. Together, these factors hinder the development and evaluation of models capable of trustworthy moderation. To address this gap, we introduce the Evasive Content Detection in E-Commerce Benchmark (EVADE), a Chinese multimodal benchmark specifically designed for
evaluating how well LLMs and VLMs detect evasive content in real-world e-commerce scenarios. Every example is iteratively annotated by domain experts to ensure accurate and consistent ground truth. EVADE contains 2,833 text samples and 13,961 product images, collected from six demanding product categories where deceptive content is most prevalent: weight loss, diseases, height growth, body shaping, female care, and male care. Each sample is iteratively annotated by domain experts to ensure its accuracy and reliability. Furthermore, we design two tasks for EVADE: Single-Violation and All-in-One. The former evaluates a model’s ability to perform rapid and fine-grained judgments within a narrowly scoped context, while the latter explores its capacity for reasoning under complex and rule-intensive conditions. EVADE thus offers two evaluation tracks to assess both a model’s ability to quickly identify specific violations and its robustness when faced with comprehensive and strategically worded prompts. We conduct a comprehensive evaluation of 26 mainstream LLMs and VLMs on EVADE, analyzing their performance on this challenging and underexplored task. Our key contributions are as follows:

1. We release EVADE, the first expert-curated, Chinese multimodal dataset tailored for evasive content detection in e-commerce. The benchmark features two evaluation tracks—Single-Violation and All-in-One—that probe distinct reasoning capabilities under varying policy contexts.
2. We show that clearer rule categorization significantly improves model consistency and reduces false predictions, highlighting the importance of benchmark design for reliable evaluation.
3. We benchmark 26 open- and closed-source LLMs and VLMs, offering the first systematic baseline on this underexplored but high-impact problem.
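The Full Match / Partial Match / Mismatch grading shown in Figure 1 can be sketched as a comparison of label sets. Caution: the set-based definitions below are inferred from the figure's examples rather than stated formally in the text, so treat this as one plausible reading.

```python
def match_level(ground_truth, prediction):
    """Grade a predicted violation-label set against the ground-truth set.

    Inferred from the Figure 1 examples: an exact set match counts as a
    Full Match, a nonempty but imperfect overlap as a Partial Match, and
    no overlap at all as a Mismatch.
    """
    ground_truth, prediction = set(ground_truth), set(prediction)
    if prediction == ground_truth:
        return "Full Match"
    if prediction & ground_truth:        # some, but not all, labels agree
        return "Partial Match"
    return "Mismatch"
```

Under this reading, predicting {A, B} against ground truth {A} is only a Partial Match, which matches the GPT-4o example in the figure.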
Through these contributions, we aim to catalyze research in adversarial and evasive content detection, support the development of safer and more trustworthy moderation systems, and advance the field of robust multimodal reasoning in demanding commercial settings.

2 Related Work

2.1 Evasive and Obfuscated Content Detection on Online Platforms

Recent work on online safety highlights adversarial and obfuscated content detection in hate speech and cyberbullying. SWE2 [26] enhances robustness to lexical attacks by combining word- and subword-level features, while an LSTM-based model with correction mechanisms improves resilience to deceptive cyberbullying patterns [4]. Autoregressive models have also been used to craft graded adversarial examples for implicit hate detection [28], introducing a “build-it, break-it, fix-it” retraining loop that boosts model robustness to nuanced, context-sensitive abuse. In e-commerce, deception extends beyond text to UI manipulation and covert messaging. For instance, fine-tuned BERT models with generative reasoning have been used to detect misleading interface language from online shopping platforms [36]. Separately, Liu and Poesio [22] use LLM-generated multilingual fake reviews to improve cross-lingual review detection. While impactful, these works target unimodal or general deception, overlooking the multimodal, regulation-specific challenges of evasive advertising.

2.2 Multimodal Large Language Model Benchmarks

The safety and robustness of VLMs have gained attention with specialized benchmarks. The Hateful Memes challenge [18] pioneered rigorous multimodal evaluation by using subtle hateful content contrasted with benign distractors, discouraging unimodal shortcuts. Later benchmarks like MM-SafetyBench [23] used 5,040 adversarial image-text pairs to show that even aligned models are vulnerable to malicious prompts. MMSafeAware [39] found
GPT-4V misclassified over one-third of unsafe and over half of safe inputs, exposing poor safety awareness across 29 threats. VLDBench [37] evaluated 31,000 news-image pairs and showed that adding visual context improves disinformation detection by 5–35% and enhances compliance monitoring.

2.3 Gap and Our Contribution

Existing benchmarks largely address general safety issues like hate speech or misinformation, often overlooking the complex, policy-evasive strategies prevalent in real-world e-commerce. They also tend to lack domain specificity, multimodal context, and real enforcement alignment. To bridge this gap, we present EVADE—the first expert-annotated, Chinese-language, multimodal benchmark tailored to e-commerce policy violations. EVADE captures both fine-grained and long-context tasks across text and image, offering a rigorous foundation for evaluating multimodal reasoning and regulatory compliance in LLM/VLM systems.

3 The EVADE Benchmark

3.1 Overview of EVADE

We introduce EVADE, a multimodal benchmark grounded in Chinese advertising law regulations, designed to evaluate whether current LLMs and VLMs can effectively identify evasive content in real-world e-commerce settings. EVADE integrates both textual and visual inputs, requiring models to jointly comprehend and reason over multimodal data in accordance with explicit policy guidelines. This benchmark comprises 2,833 text samples and 13,961 images, all collected from real-world e-commerce platforms. Each instance has been manually annotated by domain experts with deep familiarity with advertising law, ensuring high-quality and regulation-compliant labels.

Table 1: The data distribution of EVADE and the prompt length corresponding to each violation category.
Data Type | Text Count | Image Count | Prompt Tokens
Body Shaping Products | 202 | 2,134 | 614
Women's Health Products | 211 | 1,295 | 652
Height Growth Products | 553 | 3,424 | 953
Men's Health Products | 652 | 1,738 | 1,123
Weight Loss Products | 442 | 1,203 | 1,364
Health Supplement Products | 773 | 4,167 | 3,379
Overall | 2,833 | 13,961 | /

Images present a particularly challenging modality, as they often embed both textual claims and visual cues. Image-based reasoning therefore requires the VLM not only to correctly extract embedded text via OCR but also to interpret visual elements in conjunction with policy guidelines. This dual-modality reasoning is essential for detecting subtle forms of evasion, such as cropped disclaimers or euphemistic imagery. This setup allows us to evaluate a model's capability to detect policy violations through both syntactic understanding and semantic inference under realistic content-moderation conditions.

3.2 Data Collection and Rule Formulation

To construct a diverse, regulation-aligned benchmark, we collected 25,380 texts and 48,000 images from six e-commerce sub-domains prone to policy violations (e.g., body shaping, height growth). In collaboration with legal experts, we designed six rule sets based on Chinese advertising law and platform norms. The same prompts were applied to both text and image inputs within each sub-domain, enabling controlled comparisons between LLMs and VLMs. Trained annotators labeled each instance based on rule violations, ensuring consistent, high-quality supervision for model evaluation.

Data Annotation. While expert human annotation provides high-fidelity labels, it is not immune to inconsistencies caused by fatigue, subjectivity, or ambiguous cases. To mitigate this, we implemented an iterative annotation-refinement process that integrates human and model feedback. In the first round, annotators labeled each instance based
on six rule prompts. Predictions from three LLMs (GPT-o1mini, DeepSeek-R1, Qwen-Max) and three VLMs (GPT-4o, Claude-3.7, Gemini-2.5-Pro) were then used to identify samples that diverged from human judgment. These discrepant cases were flagged for re-annotation by professional annotators in the second and third rounds, using model outputs as reference. This iterative process progressively resolved ambiguities and improved label consistency across rounds.

Data Filtering. After annotation, we applied a three-stage filtering pipeline to ensure the final dataset was both clean and diverse.

Stage 1: Deduplication. Each text and image was linked to a unique business UUID from the originating e-commerce platform. We removed duplicate entries sharing the same UUID and identical content, eliminating redundancy at the source level.

Stage 2: Clustering and Stratified Sampling. To preserve semantic diversity within each sub-domain, we used a clustering algorithm to group all text and image entries into 300 distinct categories based on content similarity. From each cluster, 20 representative samples were randomly selected, yielding a stratified, balanced sample set.

Stage 3: Manual Quality Control. To further enhance quality, we performed ten rounds of random sampling and manual validation. In each round, 200 items from each sub-domain were reviewed by professional annotators to detect near-duplicates, poorly rendered images, or ambiguous entries. Duplicates and low-quality samples were removed by majority agreement.

This three-stage process ensures that the final benchmark is free of redundant content, balanced across domains, and reflective of the complex, real-world variation in deceptive product content. As a result, EVADE sets a new standard of dataset rigor for evaluating multimodal reasoning under regulatory constraints.
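As a rough illustration of Stages 1 and 2, the sketch below deduplicates records by (UUID, content) and then stratified-samples a fixed number of representatives per cluster. The record fields (`uuid`, `content`, `cluster`) and the pre-computed cluster assignment are assumptions made for illustration; the paper does not describe its pipeline at the code level.

```python
import random
from collections import defaultdict

def dedup(records):
    # Stage 1: drop entries that share the same business UUID and identical content.
    seen, kept = set(), []
    for r in records:
        key = (r["uuid"], r["content"])
        if key not in seen:
            seen.add(key)
            kept.append(r)
    return kept

def stratified_sample(records, per_cluster=20, seed=0):
    # Stage 2: keep up to `per_cluster` random representatives per content cluster.
    # The cluster id (r["cluster"]) is assumed to come from a separate
    # similarity-based clustering step that is not part of this sketch.
    rng = random.Random(seed)
    clusters = defaultdict(list)
    for r in records:
        clusters[r["cluster"]].append(r)
    sample = []
    for members in clusters.values():
        sample.extend(rng.sample(members, min(per_cluster, len(members))))
    return sample
```

Seeding the sampler keeps the selection reproducible across runs, which matters when the sampled subset defines the released benchmark.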
4 Experiment

4.1 Tasks

We conducted comprehensive experiments on EVADE using a suite of both open-source and closed-source LLMs and VLMs. The evaluation is structured around two core tasks, Single-Violation and All-in-One, designed to probe distinct capabilities in regulatory reasoning and content moderation.

Single-Violation. The Single-Violation task evaluates model performance on six distinct demanding product categories using short, domain-specific prompts aligned with regulatory rules (e.g., 202 texts and 2,134 images assessed under a 614-token prompt for body shaping). This task tests a model's fine-grained moderation ability within narrowly defined contexts, resembling real-world category-specific pipelines. However, it faces two challenges: semantic overlap between categories (e.g., "weight loss" vs. "health improvement") and inconsistent answer-choice sets, both of which can lead to ambiguous model behavior. To simulate more realistic, policy-dense scenarios, we propose the All-in-One task.

All-in-One. In the All-in-One task, we unify the prompts for the six violation types into a single instruction, expanding input length from roughly 1K to 7K tokens and increasing the number of classification labels from an average of 5 (in Single-Violation) to 26 distinct regulatory categories. This significantly intensifies the demands on long-context reasoning. To reduce ambiguity, semantically overlapping rules are merged, enabling holistic evaluation under realistic moderation conditions where multiple violations may co-occur. To assess model generalization in adversarial e-commerce scenarios, we define two sub-tasks, Simplified Instruction and Detailed Instruction:

1. Simplified Instruction: in the input prompt, we avoid introducing any examples beyond the necessary special keywords. The
purpose of this approach is to let models reason freely from the prompt alone, probing the upper and lower bounds of their performance.

2. Detailed Instruction: in the input prompt, we include not only the necessary special keywords but also positive and negative examples. The purpose of this approach is to constrain the model's free-form generation through detailed examples and descriptions, thereby stabilizing its performance.

For a more detailed version of the Single-Violation prompt, please refer to Appendix C.

Table 2: List of baseline models.

Model Type | Baseline Models
VLMs | MiniCPM-V2.6-8B [43], InternVL3-8B [45], InternVL3-14B [45], InternVL3-38B [45], DeepSeek-VL2-27B [41], Qwen2.5-VL-7B [5], Qwen2.5-VL-32B [5], Qwen2.5-VL-72B [5], GPT-4o-0806 [31], Claude-3.7-sonnet [9], Gemini-2.5-Pro [13], Qwen-VL-Max [2]
LLMs | Qwen-2.5-7B [34], Qwen-2.5-14B [34], Qwen-2.5-32B [34], Qwen-2.5-72B [34], Qwen-3-32B [1], Qwen-3-30B-A3B [1], Qwen-3-235B-A22B [1], Deepseek-V3-671B [11], Deepseek-R1-671B [10], Llama-3.1-8B [14], Llama-3.1-70B [14], GPT-o1mini-0912 [29], GPT-4.1-0414 [30], Qwen-Max

4.2 Baselines

LLMs and VLMs. Table 2 lists all the LLMs and VLMs used in our experiments, covering representative open-source and closed-source models. Note that because DeepSeek-VL2-27B is limited to 4K-token contexts, it cannot process the longer inputs of the All-in-One task; we therefore used it only for the Single-Violation task. All experiments were conducted with the temperature consistently set to 0.8 for every LLM and VLM, using APIs provided by cloud service providers.
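To make the Simplified vs. Detailed Instruction distinction concrete, here is a hypothetical sketch of assembling an All-in-One prompt from per-category rule sets. The category names, rule texts, and function are placeholders of our own; EVADE's actual prompts are in Chinese and span roughly 7K tokens.

```python
# Placeholder rule sets; EVADE's real rules are Chinese-language legal text.
RULE_PROMPTS = {
    "Body Shaping Products": "A. exaggerated slimming claims ... B. before/after imagery ...",
    "Height Growth Products": "C. guaranteed height increase ... D. medical-effect claims ...",
}

def build_all_in_one(rule_prompts, detailed=False, examples=None):
    parts = ["Classify the sample against ALL rule sets below; multiple labels may apply."]
    for name in sorted(rule_prompts):
        parts.append(f"## {name}\n{rule_prompts[name]}")
        if detailed and examples and name in examples:
            # Detailed Instruction: add positive/negative examples to
            # constrain free-form generation.
            parts.append(f"Examples:\n{examples[name]}")
    parts.append("Z. other: no violation. Must not be chosen together with any category above.")
    return "\n\n".join(parts)
```

The Simplified variant simply omits the example sections, leaving only the rule keywords, which matches the paper's description of probing the model's unconstrained reasoning.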
4.3 Evaluation

Since the task format of EVADE has the model first analyze text or images and then draw a final single-choice or multi-choice conclusion based on the multi-class rules in the prompt, we use partial accuracy and full accuracy to measure each model's performance on EVADE. Full accuracy requires the model's final conclusion to exactly match the ground truth, while partial accuracy requires that the model's conclusion overlap the ground truth in at least one item. The Acc_f (Full Accuracy) and Acc_p (Partial Accuracy) metrics are defined as follows:

Acc_f = \frac{1}{N} \sum_{i=1}^{N} I(C_i = G_i)    (1)

Acc_p = \frac{1}{N} \sum_{i=1}^{N} I(C_i \cap G_i \neq \emptyset)    (2)

where N is the total number of samples. For each sample i, C_i denotes the set of predicted items and G_i the set of ground-truth items. The term I(·) is an indicator function that evaluates to 1 if its argument is true, and 0 otherwise.

4.4 Retrieval-Augmented Generation

We identified Retrieval-Augmented Generation (RAG) as a promising technique for improving inference in the All-in-One task. RAG enhances reasoning by retrieving semantically similar examples or contextual documents from a knowledge base, thereby enriching the model's understanding during prediction. To evaluate this, we partitioned the dataset into a RAG Document Store and a RAG Query Set in a 2:8 ratio, using the same prompt structure as the All-in-One setting. For textual inputs, we applied sentence-level vectorization to compute semantic similarity; for images, we used CLIP [35] to compute visual-similarity embeddings.

5 Results

In this section, we provide a comprehensive comparison and analysis of the performance of all models on EVADE. For more details on the performance of additional models across violation categories and tasks, please refer to Appendix D.

5.1 Single-Violation Results

In the Single-Violation task of EVADE, the performance
of both LLMs and VLMs is generally suboptimal, with substantial variability across models. As shown in Table 3 and Table 10, the closed-source model Gemini-2.5-pro consistently achieves the highest accuracy across the six violation categories, followed closely by Qwen-VL-Max. Among open-source VLMs, Qwen2.5-VL-72B performs best, demonstrating competitive performance despite its open-source status. In contrast, Deepseek-VL2-27B shows the weakest performance, lagging significantly behind even smaller models such as the 8B-scale VLMs, highlighting inconsistencies in architecture and training quality across model families. For LLMs on text-based inputs, the LLaMA series struggles significantly due to limited Chinese-language capabilities: even the 70B version of LLaMA performs only on par with the 7B version of the Qwen series, revealing a critical limitation in multilingual robustness for otherwise powerful models.

Table 3: Overall performance of all models on the Single-Violation task of EVADE.

LLMs | Overall Partial Acc. | Overall Full Acc. || VLMs | Overall Partial Acc. | Overall Full Acc.

Open Source
Llama-3.1-8B | 35.62 | 20.93 || MiniCPM-V2.6-8B | 44.15 | 12.37
Llama-3.1-70B | 38.55 | 26.23 || InternVL3-8B | 42.48 | 18.87
Qwen-2.5-7B | 38.48 | 27.18 || InternVL3-14B | 51.20 | 23.23
Qwen-2.5-14B | 45.39 | 28.70 || InternVL3-38B | 49.19 | 21.40
Qwen-2.5-32B | 46.56 | 29.69 || Deepseek-VL2-27B | 29.12 | 12.24
Qwen-2.5-72B | 49.21 | 27.85 || Qwen-2.5-VL-7B | 44.52 | 19.96
Deepseek-V3-671B | 51.85 | 28.77 || Qwen-2.5-VL-32B | 52.39 | 22.57
Deepseek-R1-671B | 54.64 | 25.45 || Qwen-2.5-VL-72B | 57.63 | 25.05

Closed Source
GPT-o1mini-0912 | 49.28 | 29.72 || GPT-4o-0806 | 58.47 | 26.96
GPT-4.1-0414 | 52.74 | 31.59 || Claude-3.7-sonnet | 58.79 | 23.42
Qwen-max | 48.29 | 31.27 || Qwen-VL-max | 53.38 | 25.60
 || Gemini-2.5-pro | 52.44 | 22.14

Table 4: Overall performance of the Qwen3 series on the Single-Violation task of EVADE. The numbers in parentheses on the right side of the table indicate the change in each metric after enabling thinking mode, relative to the same model and metric on the left side.
Model (Flat) | Overall Partial Acc. | Overall Full Acc. || Model (Think) | Overall Partial Acc. | Overall Full Acc.
Qwen-3-32B | 48.71 | 26.16 || Qwen-3-32B (think) | 47.86 (−0.85) | 24.71 (−1.45)
Qwen-3-30B-A3B | 46.70 | 26.44 || Qwen-3-30B-A3B (think) | 47.51 (+0.81) | 27.64 (+1.20)
Qwen-3-235B-A22B | 49.77 | 27.46 || Qwen-3-235B-A22B (think) | 50.02 (+0.25) | 27.50 (+0.04)

Qwen3 [1] is a newly released series of LLMs that integrates both "thinking" and "non-thinking" modes within a single architecture. However, as shown in Table 4, the largest variant, Qwen3-235B-A22B, still underperforms relative to Deepseek-R1-671B and Deepseek-V3-671B, and shows only marginal gains over Qwen-2.5-72B. Surprisingly, enabling the thinking mode provides little to no improvement and even leads to performance regressions in models such as Qwen3-32B.

A critical observation across all models is the significant gap between partial accuracy and full accuracy, often exceeding 10 percentage points. To illustrate this gap, we highlight Claude-3.7-sonnet (Figure 4j) as the best-performing closed-source model and Qwen2.5-VL-72B (Figure 4h) as the strongest open-source VLM. This discrepancy underscores the difficulty models face in achieving complete understanding: while they may capture partial intent or surface-level features, they frequently miss critical semantic or visual details, especially in cases involving metaphorical language, euphemisms, or evasive imagery designed to circumvent moderation.

Furthermore, although closed-source models generally outperform open-source models, exceptions do exist. For instance, Deepseek-R1 shows strong results in text understanding, even outperforming several closed models, while Qwen2.5-VL-72B demonstrates robust visual reasoning despite not matching the top-tier performance of Gemini-2.5-pro or Qwen-VL-Max. To better
understand how models behave under longer context and a clearer taxonomy, where ambiguous overlaps between categories are removed, we turn to the results of the All-in-One task.

5.2 All-in-One Results

In the All-in-One task, we observe substantial performance improvements across many models, especially smaller LLMs and VLMs. Without altering the input data, simply merging overlapping violation categories into a unified prompt significantly enhances reasoning capability. Despite the prompt length increasing sixfold relative to the Single-Violation setting and the number of violation categories expanding from a few to 26, models generally perform better rather than worse.

Notably, the performance disparity among LLMs narrows in All-in-One. Additionally, the difference between partial and full accuracy shrinks dramatically, from over 10% in Single-Violation to approximately 5% in All-in-One. This suggests that clearer category boundaries yield significantly greater improvements in model reasoning, particularly in the full-accuracy metric, than simply increasing prompt complexity. Even when both the input prompt length and the number of categories are scaled up several times, models still benefit substantially from clearer semantic distinctions. In other words, among the factors that constrain model reasoning, semantic ambiguity poses a more fundamental limitation than either prompt context length or the number of categories.

Interestingly, while high-performing LLMs such as Deepseek-R1, Deepseek-V3, and the GPT series can still achieve notable improvements in full accuracy, their gains in partial accuracy under the All-in-One setting are relatively modest. This may be due to ceiling effects or instruction-following saturation that limit further improvement on easier subtasks. In contrast, smaller models benefit substantially on both metrics.
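The partial- and full-accuracy metrics behind these comparisons (Eqs. 1 and 2) reduce to a few lines of set arithmetic. A minimal sketch, with function names of our own choosing:

```python
def full_accuracy(preds, golds):
    # Acc_f: the predicted label set must exactly equal the ground-truth set.
    return sum(set(c) == set(g) for c, g in zip(preds, golds)) / len(golds)

def partial_accuracy(preds, golds):
    # Acc_p: the predicted set must share at least one label with the ground truth.
    return sum(bool(set(c) & set(g)) for c, g in zip(preds, golds)) / len(golds)
```

By construction, partial accuracy upper-bounds full accuracy on the same predictions, which is why the gap between the two columns is always non-negative.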
For example, Qwen2.5-7B improves its partial accuracy from 38.48% to 53.18% and its full accuracy from 27.18% to 50.60%, gains of over 10 points on both metrics. Even Llama-3.1-70B, despite being less adept at Chinese, improves from 38.55% to 47.00% in partial accuracy and from 26.23% to 44.60% in full accuracy.

Table 5: Performance of all models on the All-in-One task of EVADE. Values to the left of the slash indicate partial accuracy; values to the right indicate full accuracy.

LLMs | Simplified | Detailed || VLMs | Simplified | Detailed
Llama-3.1-8B | 41.94 / 36.47 | 37.84 / 34.44 || MiniCPM-V2.6-8B | 46.70 / 43.22 | 39.60 / 36.93
Llama-3.1-70B | 47.81 / 44.53 | 47.00 / 44.60 || InternVL3-8B | 56.26 / 53.90 | 56.97 / 54.17
Qwen-2.5-7B | 51.68 / 49.52 | 53.18 / 50.60 || InternVL3-14B | 60.63 / 56.83 | 60.67 / 57.41
Qwen-2.5-14B | 53.14 / 48.32 | 55.23 / 51.20 || InternVL3-38B | 60.62 / 57.35 | 61.36 / 58.34
Qwen-2.5-32B | 55.12 / 51.20 | 53.55 / 49.95 || Qwen-2.5-VL-7B | 53.15 / 51.49 | 53.39 / 51.20
Qwen-2.5-72B | 56.93 / 51.06 | 55.59 / 50.05 || Qwen-2.5-VL-32B | 62.04 / 58.55 | 61.73 / 59.17
Deepseek-V3-671B | 56.58 / 50.62 | 58.79 / 52.65 || Qwen-2.5-VL-72B | 64.25 / 59.09 | 63.85 / 59.44
Deepseek-R1-671B | 58.25 / 49.35 | 58.69 / 50.37 || GPT-4o-0806 | 64.14 / 58.05 | 65.12 / 59.58
GPT-o1mini-0912 | 56.69 / 51.15 | 54.17 / 48.76 || Claude-3.7-sonnet | 64.58 / 56.83 | 63.05 / 56.05
GPT-4.1-0414 | 59.16 / 53.79 | 59.64 / 53.67 || Qwen-VL-max | 63.58 / 59.24 | 63.31 / 59.50
Qwen-max | 58.35 / 54.68 | 56.48 / 52.88 || Gemini-2.5-pro | 70.57 / 54.45 | 70.43 / 51.94

Table 6: Overall performance of the Qwen3 series on the All-in-One task of EVADE. The values to
the left of the slash indicate partial accuracy, while those to the right indicate full accuracy.

Model (Flat) | Simplified | Detailed || Model (Think) | Simplified | Detailed
Qwen-3-32B | 57.11 / 52.84 | 55.54 / 50.71 || Qwen-3-32B (think) | 58.56 / 53.44 | 55.94 / 50.73
Qwen-3-30B-A3B | 55.07 / 50.58 | 54.39 / 50.55 || Qwen-3-30B-A3B (think) | 56.64 / 52.44 | 56.74 / 52.75
Qwen-3-235B-A22B | 55.91 / 50.97 | 55.81 / 51.39 || Qwen-3-235B-A22B (think) | 57.11 / 51.85 | 57.26 / 52.77

This trend also extends to VLMs. Smaller models like InternVL3-8B and MiniCPM-V2.6-8B show dramatic improvements, while large-scale models still achieve further gains, albeit at a reduced margin. These results affirm the effectiveness of the All-in-One setting in reducing confusion caused by overlapping policy categories and reveal the potential of structured prompt engineering for regulatory tasks.

Compared to its performance in the previous tasks, Qwen3 shows improved results in the All-in-One setting, in both the Simplified and Detailed sub-tasks, when thinking mode is enabled. However, the improvement is minor, never exceeding 2%.

5.3 Analysis of the Effect of RAG

Due to resource constraints, and because small-scale VLMs generally cannot handle long text inputs (which prevents effective few-shot prompting), we conducted the VLM RAG experiments exclusively with the InternVL series, which demonstrated strong performance in both the Single-Violation and All-in-One tasks.

Figure 2: Comparison of LLMs and VLMs before and after the introduction of RAG. Here, L- denotes the Llama-3.1 model, Q- denotes the Qwen2.5 model, DS- denotes the Deepseek model, Q3- denotes the Qwen3 model, and IVL- denotes the InternVL3 model.

We then integrated top-ranked retrievals into the model's input and evaluated performance using an open-source LLM–VLM pair. Our preliminary findings indicate that RAG improves model precision, especially on ambiguous or metaphor-laden inputs, by providing semantically aligned reference materials.
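A minimal sketch of the retrieval step described above, assuming embeddings have already been computed (by a sentence encoder for text, or CLIP for images): the 2:8 store/query partition and cosine-similarity ranking are shown with toy vectors, and the field names are our own.

```python
import math
import random

def cosine(u, v):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def split_store_query(items, store_ratio=0.2, seed=0):
    # 2:8 partition into a RAG document store and a query set.
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    k = int(len(shuffled) * store_ratio)
    return shuffled[:k], shuffled[k:]

def retrieve_top_k(query_vec, store, k=3):
    # Rank store entries by similarity to the query embedding; keep the top k
    # for insertion into the model's prompt as reference material.
    return sorted(store, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)[:k]
```

In a production pipeline the linear scan would be replaced by an approximate-nearest-neighbor index, but the ranking criterion is the same.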
These results highlight the potential of retrieval-based augmentation for boosting model robustness in complex regulatory-reasoning tasks.

5.4 Error Analysis

Merchants may embed neutral, everyday language into product descriptions or overlay benign text within promotional images. While LLMs and VLMs can sometimes identify policy-relevant keywords, the introduction of such common terms frequently misleads the model, causing it to misclassify deceptive content as benign. This reflects a broader issue in contextual reasoning: models often lack sufficient sensitivity to metaphors, euphemisms, and dual meanings, leading to failures in detecting layered or implied semantic cues within multimodal inputs.

A recurring failure mode in Vision-Language Models is their limited ability to fully and accurately recognize embedded text within product images. This is particularly problematic when key information is obscured through visual noise, deliberate masking, or creative formatting by merchants. Even when a VLM correctly identifies text regions, it may misrecognize the characters or extract incomplete content, leading to undetected policy violations. This highlights the urgent need for improved OCR and multimodal text-visual alignment in commercial VLM
deployments.

Although EVADE is a multi-class task, it includes an implicit rule: "Z. other" (no violation) cannot be selected alongside any regulated category. Some weaker models violate this constraint by labeling items as both compliant and non-compliant, revealing poor rule comprehension and weak instruction-following under constraints. For more specific bad cases, please refer to Appendix E.

6 Conclusion

EVADE is the first Chinese-language multimodal benchmark for detecting evasive e-commerce content, covering expert-annotated texts and images across six demanding categories. Evaluation of 26 LLMs and VLMs reveals significant performance disparities, with closed-source models generally outperforming open-source ones. In the All-in-One task, merging overlapping rules enhances reasoning and reduces the accuracy gap, particularly for smaller models, underscoring the importance of rule clarity for model performance. Common failure modes include contextual noise, obfuscated language, and OCR errors, indicating weaknesses in semantic understanding and vision-language alignment.

Contemporary LLMs and VLMs still fall short on the EVADE benchmark; its introduction is therefore poised to catalyze progress in this area. In our experiments, leveraging RAG to supply the models with under-represented knowledge yielded significant performance improvements, attesting to the high quality of EVADE. Looking forward, we aim for EVADE to support multilingual and multi-domain content, strengthen OCR pipelines, integrate retrieval-augmented generation, and incorporate adversarial data for stress-testing generalization. Overall, EVADE offers a rigorous foundation for evaluating and advancing multimodal moderation systems, highlighting key bottlenecks and design principles, such as clearer taxonomy, retrieval augmentation, and robust error handling, for building safer, more reliable AI systems.

Limitations

We acknowledge several limitations in our study.
In real-world e-commerce scenarios, the challenges faced by LLMs and VLMs extend beyond the six categories defined in our proposed EVADE benchmark. Other categories, such as gambling, pornography, and terrorism-related content, also constitute significant violations; if models fail to detect such content, it may cause serious physical and psychological harm to minors. This further underscores the necessity of continuously improving EVADE. We aim to expand and refine the benchmark to enhance the performance of VLMs and LLMs on the task of Evasive Content Detection.

Ethics Statement

During the development of EVADE, we strictly adhered to ethical guidelines and legal regulations, ensuring fairness, transparency, inclusivity, and respect for all stakeholders. We emphasized the importance of protecting privacy and intellectual property, reaffirming our commitment to responsible and lawful data management. EVADE may contain expressions and visual materials influenced by objective factors such as the time of collection, cultural context, and business scenarios. EVADE may also be used for model training, making Evasive Content Detection tasks more challenging for both LLMs and VLMs. At the same time, content that successfully evades model auditing poses a growing threat to the fairness of online transactions, potentially leading to legal and economic consequences. These representations and viewpoints do not reflect the value orientation of the data providers. We are committed to ongoing monitoring and refinement to mitigate such biases. Furthermore, we encourage users of the dataset to exercise responsible use and to consider the ethical implications of
their work, particularly in applications that may affect individuals or communities.

References

[1] Alibaba. Qwen3. https://qwenlm.github.io/blog/qwen3//, 2025.
[2] Alibaba. Qwen-vl-max. https://huggingface.co/spaces/Qwen/Qwen-VL-Max/, 2025.
[3] M. Altakrori, T. Scialom, B. C. M. Fung, and J. C. K. Cheung. A multifaceted framework to evaluate evasion, content preservation, and misattribution in authorship obfuscation techniques. In Y. Goldberg, Z. Kozareva, and Y. Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2391–2406, Abu Dhabi, United Arab Emirates, Dec. 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.emnlp-main.153. URL https://aclanthology.org/2022.emnlp-main.153/.
[4] S. W. Azumah, N. Elsayed, Z. ElSayed, M. Ozer, and A. L. Guardia. Deep learning approaches for detecting adversarial cyberbullying and hate speech in social networks, 2024. URL https://arxiv.org/abs/2406.17793.
[5] S. Bai, K. Chen, X. Liu, J. Wang, W. Ge, S. Song, K. Dang, P. Wang, S. Wang, J. Tang, H. Zhong, Y. Zhu, M. Yang, Z. Li, J. Wan, P. Wang, W. Ding, Z. Fu, Y. Xu, J. Ye, X. Zhang, T. Xie, Z. Cheng, H. Zhang, Z. Yang, H. Xu, and J. Lin. Qwen2.5-vl technical report, 2025. URL https://arxiv.org/abs/2502.13923.
[6] Z. Bai, P. Wang, T. Xiao, T. He, Z. Han, Z. Zhang, and M. Z. Shou. Hallucination of multimodal large language models: A survey, 2025. URL https://arxiv.org/abs/2404.18930.
[7] Y. Chen, F. Xue, D. Li, Q. Hu, L. Zhu, X. Li, Y. Fang, H. Tang, S. Yang, Z. Liu, E. He, H. Yin, P. Molchanov, J. Kautz, L. Fan, Y. Zhu, Y. Lu, and S. Han. Longvila: Scaling long-context visual language models for long videos, 2024. URL https://arxiv.org/abs/2408.10188.
[8] A. G. Chowdhury, M. M. Islam, V. Kumar, F. H. Shezan, V. Kumar, V. Jain, and A. Chadha. Breaking down the defenses: A comparative survey of attacks on large language models, 2024. URL https://arxiv.org/abs/2403.04786.
[9] Claude.
Claude 3.7 sonnet. https://www.anthropic.com/claude/sonnet , 2025. [10] DeepSeek-AI, D. Guo, D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi, X. Zhang, X. Yu, Y . Wu, Z. F. Wu, Z. Gou, Z. Shao, Z. Li, Z. Gao, A. Liu, B. Xue, B. Wang, B. Wu, B. Feng, C. Lu, C. Zhao, C. Deng, C. Zhang, C. Ruan, D. Dai, D. Chen, D. Ji, E. Li, F. Lin, F. Dai, F. Luo, G. Hao, G. Chen, G. Li, H. Zhang, H. Bao, H. Xu, H. Wang, H. Ding, H. Xin, H. Gao, H. Qu, H. Li, J. Guo, J. Li, J. Wang, J. Chen, J. Yuan, J. Qiu, J. Li, J. L. Cai, J. Ni, J. Liang, J. Chen, K. Dong, K. Hu, K. Gao, K. Guan, K. Huang, K. Yu, L. Wang, L. Zhang, L. Zhao, L. Wang, L. Zhang, L. Xu, L. Xia, M. Zhang, M. Zhang, M. Tang, M. Li, M. Wang, M. Li, N. Tian, P. Huang, P. Zhang, Q. Wang, Q. Chen, Q. Du, R. Ge, R. Zhang, R. Pan, R. Wang, R. J. Chen, R. L. Jin, R. Chen, S. Lu, S. Zhou, S.
Chen, S. Ye, S. Wang, S. Yu, S. Zhou, S. Pan, S. S. Li, S. Zhou, S. Wu, S. Ye, T. Yun, T. Pei, T. Sun, T. Wang, W. Zeng, W. Zhao, W. Liu, W. Liang, W. Gao, W. Yu, W. Zhang, W. L. Xiao, W. An, X. Liu, X. Wang, X. Chen, X. Nie, X. Cheng, X. Liu, X. Xie, X. Liu, X. Yang, X. Li, X. Su, X. Lin, X. Q. Li, X. Jin, X. Shen, X. Chen, X. Sun, X. Wang, X. Song, X. Zhou, X. Wang, X. Shan, Y . K. Li, Y . Q. Wang, Y . X. Wei, Y . Zhang, Y . Xu, Y . Li, Y . Zhao, Y . Sun, Y . Wang, Y . Yu, Y . Zhang, Y . Shi, Y . Xiong, Y . He, Y . Piao, Y . Wang, Y . Tan, Y . Ma, Y . Liu, Y . Guo, Y . Ou, Y . Wang, Y . Gong, Y . Zou, Y . He, Y . Xiong, Y . Luo, Y . You, Y . Liu, Y . Zhou, Y . X. Zhu, Y . Xu, Y . Huang, Y . Li, Y . Zheng, Y . Zhu, Y . Ma, Y . Tang, Y . Zha, Y . Yan, Z. Z. Ren, Z. Ren, Z. Sha, Z. Fu, Z. Xu, Z. Xie, Z. Zhang, Z. Hao, Z. Ma, Z. Yan, Z. Wu, Z. Gu, Z. Zhu, Z. Liu, Z. Li, Z. Xie, Z. Song, Z. Pan, Z. Huang, Z. Xu, Z. Zhang, and Z. Zhang. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948 . [11] DeepSeek-AI, A. Liu, B. Feng, B. Xue, B. Wang, B. Wu, C. Lu, C. Zhao, C. Deng, C. Zhang, C. Ruan, D. Dai, D. Guo, D. Yang, D. Chen, D. Ji, E. Li, F. Lin, F. Dai, F. Luo, G. Hao, G. Chen, G. Li, H. Zhang, H. Bao, H. Xu, H. Wang, H. Zhang, H. Ding, H. Xin, H. Gao, H. Li, H. Qu, J. L. Cai, J. Liang, J. Guo, J. Ni, J. Li, J. Wang, J. Chen, J. Chen, J. Yuan, J. Qiu, J. Li, J. Song, K. Dong, K. Hu, K. Gao, K. Guan, K. Huang, K. Yu, L. Wang, L. Zhang, L. Xu, L. Xia, L. Zhao, L. Wang, L. Zhang, M. Li, M. Wang, M. Zhang, M. Zhang, M. Tang, M. Li, N. Tian, P. Huang, P. Wang, P. Zhang, Q. Wang, Q. Zhu, Q. Chen, Q. Du, R. J. Chen, R. L. Jin, R. Ge, R. Zhang, R. Pan, R. Wang, R. Xu, R. Zhang, R. Chen, S. S. Li, S. Lu, S. Zhou, S. Chen, S. Wu, S. Ye, S. Ye, S. Ma, S. Wang, S. Zhou, S. Yu, S. Zhou, S. Pan, T. Wang, T. Yun, T. Pei, T. Sun, W. L. Xiao, W. Zeng, W. Zhao, W. An, W. Liu, W. Liang, W. Gao, W. Yu, W. Zhang, X. 
Q. Li, X. Jin, X. Wang, X. Bi, X. Liu, X. Wang, X. Shen, X. Chen, X.
Zhang, X. Chen, X. Nie, X. Sun, X. Wang, X. Cheng, X. Liu, X. Xie, X. Liu, X. Yu, X. Song, X. Shan, X. Zhou, X. Yang, X. Li, X. Su, X. Lin, Y . K. Li, Y . Q. Wang, Y . X. Wei, Y . X. Zhu, Y . Zhang, Y . Xu, Y . Xu, Y . Huang, Y . Li, Y . Zhao, Y . Sun, Y . Li, Y . Wang, Y . Yu, Y . Zheng, Y . Zhang, Y . Shi, Y . Xiong, Y . He, Y . Tang, Y . Piao, Y . Wang, Y . Tan, Y . Ma, Y . Liu, Y . Guo, Y . Wu, Y . Ou, Y . Zhu, Y . Wang, Y . Gong, Y . Zou, Y . He, Y . Zha, Y . Xiong, Y . Ma, Y . Yan, Y . Luo, Y . You, Y . Liu, Y . Zhou, Z. F. Wu, Z. Z. Ren, Z. Ren, Z. Sha, Z. Fu, Z. Xu, Z. Huang, Z. Zhang, Z. Xie, Z. Zhang, Z. Hao, Z. Gou, Z. Ma, Z. Yan, Z. Shao, Z. Xu, Z. Wu, Z. Zhang, Z. Li, Z. Gu, Z. Zhu, Z. Liu, Z. Li, Z. Xie, Z. Song, Z. Gao, and Z. Pan. Deepseek-v3 technical report, 2025. URL https://arxiv.org/abs/2412.19437 . 11 [12] S. Gholamian, G. Romani, B. Rudnikowicz, and S. Skylaki. LLM-based robust product classifica- tion in commerce and compliance. In S. Kumar, V . Balachandran, C. Y . Park, W. Shi, S. A. Hay- ati, Y . Tsvetkov, N. Smith, H. Hajishirzi, D. Kang, and D. Jurgens, editors, Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Ap- plication, Group, or Individual (CustomNLP4U) , pages 26–36, Miami, Florida, USA, Nov. 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.customnlp4u-1.3. URL https: //aclanthology.org/2024.customnlp4u-1.3/ . [13] Google. Gemini 2.5. https://blog.google/technology/google-deepmind/ gemini-model-thinking-updates-march-2025/ , 2025. [14] A. Grattafiori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, A. Yang, A. Fan, A. Goyal, A. Hartshorn, A. Yang, A. Mitra, A. Sravankumar, A. Korenev, A. Hinsvark, A. Rao, A. Zhang, A. Rodriguez, A. Gregerson, A. Spataru, B. Roziere, B. Biron, B. Tang, B. Chern, C. Caucheteux, C. Nayak, C. Bi, C. Marra, C. McConnell, C. Keller, C. Touret, C. Wu, C. Wong, C. C. 
Ferrer, C. Nikolaidis, D. Allonsius, D. Song, D. Pintz, D. Livshits, D. Wyatt, D. Esiobu, D. Choudhary, D. Mahajan, D. Garcia-Olano, D. Perino, D. Hupkes, E. Lakomkin, E. AlBadawy, E. Lobanova, E. Dinan, E. M. Smith, F. Radenovic, F. Guzmán, F. Zhang, G. Synnaeve, G. Lee, G. L. Anderson, G. Thattai, G. Nail, G. Mialon, G. Pang, G. Cucurell, H. Nguyen, H. Korevaar, H. Xu, H. Touvron, I. Zarov, I. A. Ibarra, I. Kloumann, I. Misra, I. Evtimov, J. Zhang, J. Copet, J. Lee, J. Geffert, J. Vranes, J. Park, J. Mahadeokar, J. Shah, J. van der Linde, J. Billock, J. Hong, J. Lee, J. Fu, J. Chi, J. Huang, J. Liu, J.
https://arxiv.org/abs/2505.17654v1

A The statistics of erroneous data

During the All-in-One experiment, due to the limitations of the models' capabilities, various types of errors (such as those documented in Section 5.4) may occur. The error rate of each model is as follows:

Table 7: Error rate in the experiment.

VLMs:
MiniCPM-V2.6-8B 0.03, InternVL3-8B 0, InternVL3-14B 0, InternVL3-38B 0, Qwen2.5-VL-7B 0, Qwen2.5-VL-32B 0.01, Qwen2.5-VL-72B 0, GPT-4o-0806 0, Claude-3.7-sonnet 0, Qwen-VL-max 0, Gemini-2.5-pro 0.02

LLMs:
Llama-3.1-8B 0.14, Llama-3.1-70B 0, Qwen2.5-7B 0, Qwen2.5-14B 0, Qwen2.5-32B 0, Qwen2.5-72B 0, Deepseek-V3-671B 0, Deepseek-R1-671B 0, GPT-o1mini 0, GPT-4.1-0414 0, Qwen-max 0, Qwen-3-32B 0 / 0, Qwen-3-30B-A3B 0 / 0, Qwen-3-235B-A22B 0 / 0

The Error Rate refers to the proportion of responses in which the model either returns answers in an incorrect format or refuses to answer altogether due to insufficient understanding of the prompt instructions, relative to the total number of instances in the EVADE benchmark. For the Qwen3 models, the value to the left of the slash indicates the error rate in the non-thinking mode, while the value to the right corresponds to the thinking mode.
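The error-rate bookkeeping described above can be sketched as follows. The `is_malformed` predicate is a hypothetical stand-in for the paper's actual format check, which is not published; here it simply treats anything that is not a comma-separated list of option letters as an error (covering both wrong formats and refusals):

```python
import re

def is_malformed(response: str) -> bool:
    """Treat a response as erroneous when it is not a comma-separated
    list of option letters. This predicate is an assumption, not the
    paper's exact check (which covers wrong formats and refusals)."""
    return re.fullmatch(r"[A-Z](\s*,\s*[A-Z])*", response.strip()) is None

def error_rate(responses) -> float:
    """Proportion of malformed or refused responses over all instances."""
    return sum(1 for r in responses if is_malformed(r)) / len(responses)

# A refusal counts against the model; well-formed option lists do not.
error_rate(["A", "Z", "I cannot answer this", "C, D"])  # → 0.25
```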
Overall, the number of erroneous or rejected responses is extremely low. Given the scale of the EVADE dataset, an error rate of less than 1% has a negligible impact on the validity of the experimental results.

B Extended Discussion of Experimental Results

The results of EVADE reveal critical insights into the limitations of current Large Language Models and Vision–Language Models in regulatory reasoning tasks. Despite impressive progress in general-purpose language and vision–language understanding, these models exhibit persistent weaknesses when applied to demanding, multimodal content moderation scenarios that involve implicit semantics, fine-grained policies, and adversarial manipulation.

One of the most important observations is that model performance is not solely determined by size or pretraining scale. Instead, the structure and clarity of task prompts have a substantial influence on accuracy and consistency. This is particularly evident in the All-in-One task, where merging semantically overlapping classification rules significantly improved model performance, especially for smaller models. The improvement was not merely due to exposure to longer contexts but was driven by reduced ambiguity and more coherent rule definitions. These findings indicate that prompt semantics, specifically the logical framing of
classification criteria, play a crucial role in aligning model predictions with human judgment. Future research should explore structured, rule-aware prompting strategies that incorporate regulatory ontologies or logical trees to further enhance reasoning fidelity.

Another key insight from our analysis is that many errors observed in Vision–Language Models stem not from visual recognition failures but from grounding failures. That is, while models often identify relevant visual or textual features, they struggle to associate these features with regulatory meanings, particularly when violations are expressed metaphorically or obfuscated through formatting tricks. This underscores the need for models that can reason beyond surface-level detection and incorporate a deeper understanding of compliance rules. Architectural innovations that combine symbolic rule reasoning, regulatory knowledge graphs, or hybrid neural–symbolic inference mechanisms may offer pathways forward.

The gap between partial and full accuracy across models also highlights limitations in decision-making certainty. Many models partially identify violation signals but fail to commit to correct classifications, especially under ambiguous or overlapping category definitions. In practice, this indecision can lead to significant compliance failures, either by allowing deceptive content to bypass moderation or by wrongly penalizing compliant material. Therefore, future benchmarks and model evaluation protocols should move beyond binary accuracy and incorporate measures of prediction confidence, rationale attribution, and rule alignment to assess decision reliability under regulatory conditions.

The integration of RAG in our experiments further emphasizes the complexity of bridging external knowledge with generation.
While we observed modest performance gains in the All-in-One task through RAG, these gains were highly variable and contingent on the precision and contextual relevance of retrieved examples. For RAG to effectively support regulatory reasoning, it must be paired with well-curated, policy-aligned document stores and robust similarity matching mechanisms, particularly for ambiguous or novel product contexts. This suggests the need for future research into adaptive retrieval pipelines that dynamically update with evolving compliance standards.

Importantly, the challenges surfaced by EVADE carry immediate implications for the deployment of AI moderation systems in e-commerce platforms such as Taobao, JD.com, and Pinduoduo. These platforms operate under strict regulatory oversight, and any failure to detect prohibited content, whether due to linguistic obfuscation, visual manipulation, or contextual ambiguity, can result in substantial legal and reputational consequences. The benchmark shows that even the most advanced models are susceptible to surprisingly simple evasive tactics, reinforcing the necessity of combining automated systems with human-in-the-loop verification and adversarial retraining pipelines.

Taken together, these findings point to a larger paradigm shift required in AI model design for regulatory tasks. Future systems must move beyond general-purpose reasoning toward domain-specific, interpretable, and policy-compliant intelligence. EVADE offers not just a dataset, but a framework for probing the brittleness of current models and guiding the development of the next generation of trustworthy multimodal moderation systems.

C Single-Violation prompts

The following are the audit rules for Height Growth, Weight Loss, and Health Supplement products in the Single-Violation task, where option "Z" represents no need for control, and all other options represent different types that need to be controlled.
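Under this option scheme, a model reply is scored by the set of letters it selects. A minimal extraction helper is sketched below; the paper does not publish its parsing code, so the regex and the function name are assumptions. The lookarounds ignore capital letters embedded in longer ASCII words (e.g. "WEEKS") while accepting letters adjacent to CJK text, punctuation, or whitespace:

```python
import re

def parse_options(reply: str) -> set[str]:
    """Extract standalone option letters (A-Z) from a model reply.

    A letter flanked by other ASCII letters is skipped; a letter next to
    CJK characters, punctuation, or whitespace counts as a chosen option.
    """
    return set(re.findall(r"(?<![A-Za-z])([A-Z])(?![A-Za-z])", reply))
```

For example, a reply ending in "因此,选择Z。" parses to {"Z"} (no control needed), while "C, D" parses to {"C", "D"}.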
For more review rule prompts, please refer to our Huggingface dataset.
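The tables in Appendix D report two numbers per cell: partial accuracy to the left of the slash and full accuracy to the right. One plausible formalization for these multi-label answers (an assumption; the paper does not spell out the formulas here) counts a prediction as partially correct when it shares at least one label with the ground truth, and fully correct only on an exact set match:

```python
def partial_full_accuracy(preds, golds):
    """preds, golds: equal-length lists of option-letter sets, one per instance.

    Returns (partial, full): partial counts predictions that overlap the
    ground-truth set; full counts exact set matches.
    """
    n = len(golds)
    partial = sum(1 for p, g in zip(preds, golds) if p & g) / n
    full = sum(1 for p, g in zip(preds, golds) if p == g) / n
    return partial, full
```

By construction, full accuracy can never exceed partial accuracy, which matches the pattern of every cell in Tables 10 and 11.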
Table 8: The prompt of Height Growth and Weight Loss products in Single violation.

Height Growth (Chinese Prompt):
#任务描述
请你根据给定信息和关键词,仔细判断对应的管控类型。给出最终的选项(一个到多个不等)。都不相关返回Z.其他。
#管控类型
A.增高功效直接描述:明确使用以下词汇描述长高发育:增高、长高、长个、蹭蹭长/涨、xx岁还能长、(身高)用了高/吃了高/喝了高/贴了高、吃出/喝出/补出/贴出高个子、描述商品使用前和使用后身高差距。注意:若涉及未成年人进行此类表达,则不管控,其中未成年人相关描述包括但不限于:孩子、儿童、宝宝、青少年、发育、青春期、同龄、成长缓慢、小于等于18岁、出现未成年人的形象、身穿校服等。
B.个子矮小描述:包括但不限于:矮个子专用、适合矮个子、告别矮小、对矮小说不/say no/说拜拜/say goodbye、不做矮个子/矮XX、不再矮、X年不长、身高不够、矮小、身高+数字、不足+数字、数字+以下等间接表达身高矮小。
C.增高隐晦描述:包括但不限于使用以下词汇来暗示身高增长:腰以下全是腿、吃出大长腿、个头/身高不掉队。
D.磁石矫形鞋垫:包括但不限于使用以下词汇来描述身高增高、腿型矫正:鞋垫+磁石/磁疗/火疗;鞋垫+矫正/矫形/纠正/X型腿/O型腿/罗圈腿/腿直了;带有黑色磁石的鞋垫。
E.反科学增高:使用如下违反科学原理的方式来增长身高:打开骨闭合/骨骼闭合、突破遗传、二次发育/破高。
F.增高精准化功效承诺:使用数字/时间/百分数来承诺使用商品后能达到确定的增高效果(例如一喝长高、一粒长高、不管多矮,只用3支、一周长高、100%长高、睡前涂,睡醒长)。
Z.其他:当给定信息与上述选项均无明显关联时选择此项。

Weight Loss (Chinese Prompt):
#任务描述
请你根据给定信息和关键词,仔细判断对应的管控类型。给出最终的选项(一个到多个不等)。都不相关返回Z.其他。
#管控类型
A.特定减肥商品:包括以下商品:星空咖啡、韩国星空咖啡、泰国果蔬slim7day局部减肥药、slim7day、无限畅、纤so、纤姿、司美格鲁肽、司美、口服司美、利拉鲁肽、索马鲁肽、格鲁肽、glp-1。
B.减肥夸大描述:内容使用极端用词来承诺商品使用后的减肥效果,包括但不限于“骨瘦如柴”、“皮包骨”、“狂吃不胖/涨”、“脂肪收割机/炸弹”、“急吃/喝急瘦”、“终身/一辈子/一生不长肉”等夸大描述。使用精准数字或者时间来承诺商品使用后的减肥效果,包括但不限于“x天瘦x斤(J)/kg”、“瘦x斤(J)/kg”、“xx斤(J)/kg保底”、“排x斤(J)/kg+油/汗/便”、“挑战x/x斤(J)/kg”等量化承诺。内容将减肥前的状态(如“体重超标”、“体重难下/不下”、“肚子大”、“不掉称”)与具体数字和量词直接关联,以误导消费者对减肥效果的预期。
C.减肥直接描述:内容直接描述身体体重、体脂的下降、身体由胖到瘦描述,包括但不限于使用特定动词(抽、清、抗、吸、溶、排、去/祛、减、分解、燃烧/燃、绝、甩等)、效果形容词(没了、小了、细了、不见了、下降/降了、少了、掉了/掉秤、轻了、不反弹/回弹、管用/有用等)和“脂/脂肪/油/肉/肥/胖/重”结合,来直接描述减肥效果。或者直接描述身体部位(如肩部、颈部、背部、手臂/胳膊、腹部/肚子/肚肚/肚腩、腰部、大腿、小腿、肉肉)变瘦。注意:若明确与减肥无关则不管控,例如“减脂期”。
D.减肥隐晦描述:内容暗示产品或服务可以替代传统减肥方法(如节食、运动、手术、左旋肉碱等其他减肥产品);内容声称能够提供饱腹感、抑制食欲或控制饮食的描述;特殊减肥效果暗示描述,例如:不要溜肩、天鹅臂、蝴蝶肩、小蛮腰/腰精/a4腰、圆腰克星、腰细背薄、大象腿、裤子往下滑、摆脱厚臂、远离肩厚、厚臂不见了、肩不厚了、远离拜拜肉、不要蝴蝶袖、裤子松了、换裤子/裤头、理想体重、消腩、缩腹、soso好身材、平小腹、骨感美、摆脱大码、身材自信/好身材/好身姿、拯救身材、胖子逆袭、体重不过百、体重两位数、体重重回xx、燃烧卡路里、胖子有救了。
Z.其他:当给定信息与上述选项均无明显关联时选择此项。

Table 9: The prompt of Health Supplement products in Single violation.
Health Supplement (Chinese Prompt):
#任务描述
请你根据给定信息和关键词,仔细判断对应的管控类型。给出最终的选项(一个到多个不等)。都不相关返回Z.其他。
#管控类型
- A.甲乙丙类传染病
- B.心脏类疾病:涉及心脏相关的疾病问题(包含心梗、心绞、冠心、心率、心力、血栓、心慌、心悸、胸闷等),同时还包含血管硬化、血管堵塞、血粘稠等血管血液的疾病问题和因此衍生的各种治疗用语与治疗手段。
- C.严重神经系统疾病:仅涉及神经系统相关的脑卒中、老年痴呆、中风、脑膜炎、脑梗、脑萎缩这几种严重疾病和因此衍生的各种治疗用语与治疗手段。
- D.严重内分泌疾病:仅涉及糖尿病酮症酸中毒、甲亢、甲减、桥本甲状腺等严重疾病和因此衍生的各种治疗用语与治疗手段。
- E.严重创伤和烧伤:仅涉及多发性创伤、大面积烧伤和因此衍生的各种治疗用语与治疗手段。
- F.生育类疾病:仅涉及生育的疾病问题(如不孕、不育、多囊卵巢综合征、精子畸形等)和因此衍生的各种治疗用语与治疗手段。
- G.癌症肿瘤类疾病:涉及人体各个部位的瘤、肿瘤、癌症、癌变、癌扩散等疾病和因此衍生的各种治疗用语与治疗手段。
- H.严重精神类疾病:仅涉及重度抑郁症、精神分裂等严重疾病和因此衍生的各种治疗用语与治疗手段。
- I.结石结节类疾病+通用治疗效果:涉及人体结石或结节类疾病问题的同时,还需要出现通用治疗效果词。比如仅描述“肾结石”不属于该分类,但既描述“肾结石”又提及“清除”的字眼,就属于该分类。
- J.痔疮类疾病+通用治疗效果:涉及人体痔疮类疾病问题的同时,还需要出现通用治疗效果词。比如仅描述“小肉球”不属于该分类,但既描述“小肉球”又提及“根治”的字眼,就属于该分类。
- K.生殖器官疾病+通用治疗效果:涉及男性生殖器官疾病或女性生殖器官疾病的同时,还需要出现通用治疗效果词。比如仅描述“尿道”不属于该分类,但既描述“尿道”又提及“突破”的字眼,就属于该分类。它与上面生育类疾病的区别在于,生殖器官疾病不包含生育问题。
- L.幽门螺旋杆菌+通用治疗效果:涉及幽门螺旋杆菌疾病问题的同时,还需要出现通用治疗效果词。比如仅描述“幽”不属于该分类,但既描述“幽”又提及“清”或“去”等字眼,就属于该分类。
- M.五官类疾病:涉及牙齿、耳朵、鼻子、头发、眼睛等人体五官的疾病问题和因此衍生的各种治疗用语与治疗手段。
- N.三高类疾病:涉及血压、血脂、血糖等疾病问题和因此衍生的各种治疗用语与治疗手段。
- O.肺部呼吸类疾病:涉及呼吸窘迫综合症ARDS、肺炎、肺病COPD、以及所有人体肺部器官的疾病问题和因此衍生的各种治疗用语与治疗手段。
- P.胃部消化类疾病:涉及急性胰腺炎、消化道出血、以及所有人体胃部器官的疾病问题和因此衍生的各种治疗用语与治疗手段。
- Q.肝部类疾病:涉及肝衰竭、肝硬化、以及所有肝部器官的疾病问题和因此衍生的各种治疗用语与治疗手段。
- R.关节类疾病:涉及关节疼痛、增生、麻木、畸形、肿痛、尿酸等体现在关节上的疾病和因此衍生的各种治疗用语与治疗手段。
- S.手脚皮肤类疾病:涉及指甲、灰甲、空甲、脚气、手脚糜烂、水泡、藓类、瘊疣类、湿疹类、抑菌止痒类、狐臭类疾病问题和因此衍生的各种治疗用语与治疗手段。
- T.轻度疾病:涉及感冒、鼻炎、焦虑症、失眠、睡不着、深睡等一系列睡眠疾病问题和因此衍生的各种治疗用语与治疗手段。
- U.医用级描述:包含“医用级”、“医疗级”、“治愈”、“疗效”、“神药”、“奇效”、“专用药”等含义的内容,比如“瘙痒特效药”、“专攻哮喘”、“血糖百试百灵”等宣传描述。
- V.代替医疗类:包含“取代”、“比*好用”、“告别”等表达,并提及以下药品名称(他汀、双胍、4联/四联、甲钴胺、地屈孕酮、非布司他、二胍、黄体酮、所有的药和所有的疫苗)或医疗手段(手术、开刀、打针、注射、拉皮、拍CT、戴助听、做耳蜗、复查、胃镜、医美、所有的针(如水光针、美白针)),比如仅描述“疫苗”不属于该分类,但既描述“疫苗”又提及“吊打”的字眼,就属于该分类。
- W.代替医美类:包含“填充”、“膨胀”、“丰满”、“挺起”、“缩小”等形状变化的表达,同时需要提及以下身体部位(胸部、鼻子、眼睛),比如仅描述“胸”不属于该分类,但既描述“挺”又提及“胸”的字眼,就属于该分类。
- X.无副作用类描述:包含“无副”、“无副作用”、“无风险”、“零风险”等表达商品无副作用和风险的内容,或者包含“无药”、“无药物残留”、“无毒”、“0化学”、“0农药”等表达商品无化学或农药添加剂的内容,或者包含“安全”、“安全性高”、“副作用小”等表达商品使用风险低的内容。
- Z.其他:当给定信息与上述选项均无明显关联时选择此项。
#注意事项
1.谨慎判断:若无法正确识别,可能会加重患者病情或导致其他严重后果,请务必谨慎、细致地判断。
2.权威引用:若文本明确引用百度百科、搜狗百科、维基百科、书籍内容或书名等权威信息来源,判断为“不相关”返回Z.其他。
3.科普内容:若文本仅为科普(如书籍、电影等),且未出现具体商品名称,判断为“不相关”返回Z.其他。
4.患病人群描述:若文本仅描述患病人群(如描述艾滋病人、心脏病患者的书籍或活动),判断为“不相关”返回Z.其他。
5.通用治疗效果:文本可能会出现通用治疗效果相关的词汇,这些效果表现类词汇主要用于描述事物在功能或状态上的变化,突出其带来的积极效果。它们可以分为多个类别:上升类词汇如“拯救”、“改善”、“提高”、“增强”、“加强”、“提升”等,强调对某种能力或状态的提升和优化;下降类词汇如“抗”、“抑制”、“降”、“镇”、“缓解”、“控制”等,侧重于减少、不良因素的抑制或控制;消除类词汇如“告别”、“根治”、“消失”、“彻底”、“治愈”、“摆脱”、“解决”等,强调对问题或障碍的彻底消除和根除;用量类词汇如“仅需”、“只要”、“只需”等,突出使用的简便性和低成本。此外,其他词汇如“有效率”、“治愈率”等,强调效果的高效性和成功率。这些词汇的共同特征在于通过具体的动词和形容词,传达出显著的效果和优势。

D The performance of all models on EVADE

Table 10: VLMs Main results on Single-Violation task. The values to the left of the slash indicate partial accuracy, while those to the right indicate full accuracy.

Model  Overall  Body.  Women.  Height.  Men.  Weight.  Health.
Open-source Models
MiniCPM-V2.6-8B  44.15/12.37  44.85/20.15  52.66/10.42  58.71/2.13  43.96/18.81  55.36/2.58  26.01/17.54
InternVL3-8B  42.48/18.87  40.82/15.70  49.03/33.59  45.37/10.74  48.33/28.48  53.37/2.66  33.33/23.28
InternVL3-14B  51.20/23.23  48.59/19.63  53.67/39.15  61.92/22.38  55.06/27.91  60.85/2.83  38.54/24.77
InternVL3-38B  49.19/21.40  49.81/17.53  56.14/40.46  55.73/14.09  53.62/32.51  57.02/1.16  37.22/24.67
Deepseek-VL2-27B  29.12/12.24  36.46/16.68  38.15/25.17  34.34/8.75  38.32/21.63  43.72/4.07  10.22/7.25
Qwen2.5-VL-7B  44.52/19.96  34.77/9.84  47.72/33.82  50.04/12.46  54.20/29.80  49.71/1.66  38.44/28.17
Qwen2.5-VL-32B  52.39/22.57  56.47/21.09  57.07/39.00  55.21/16.49  53.11/28.60  69.66/4.74  41.25/25.87
Qwen2.5-VL-72B  57.63/26.05  57.78/20.81  61.93/42.08  67.70/26.64  55.87/32.11  72.57/2.99  44.35/27.38
Close-source Models
GPT-4o-0806  58.47/26.96  70.29/27.79  61.16/43.32  64.69/24.16  55.58/37.11  75.23/4.57  42.84/25.97
Claude-3.7-sonnet  58.79/23.42  75.59/29.57  55.75/35.68  66.68/19.61  52.70/32.91  73.65/3.24  42.88/21.48
Gemini-2.5-pro  52.44/22.14  75.96/30.27  61.54/39.69  67.96/24.86  57.36/32.39  86.78/14.71  12.84/8.14
Qwen-VL-Max  53.38/25.60  55.62/29.80  64.17/49.11  60.52/19.11  59.15/39.36  55.20/1.66  40.08/22.65

Table 11: LLMs Main results on Single-Violation task.
The values to the left of the slash indicate partial accuracy, while those to the right indicate full accuracy.

Model                      | Overall       | Body          | Women         | Height        | Men           | Weight        | Health
Open-source Models
Llama-3.1-8B               | 35.62 / 20.93 | 64.85 / 62.38 | 68.25 / 49.29 | 33.82 / 11.03 | 37.27 / 26.38 | 38.01 / 10.86 | 17.59 / 10.61
Llama-3.1-70B              | 38.55 / 26.23 | 51.49 / 48.51 | 49.29 / 47.39 | 25.50 / 11.03 | 46.01 / 35.12 | 52.94 / 22.85 | 27.04 / 19.92
Qwen-2.5-7B                | 38.48 / 27.18 | 25.25 / 22.77 | 40.28 / 36.02 | 41.41 / 24.59 | 54.60 / 46.32 | 34.39 / 12.22 | 28.07 / 20.18
Qwen-2.5-14B               | 45.39 / 28.70 | 41.58 / 38.12 | 51.18 / 41.71 | 47.92 / 29.84 | 50.61 / 37.42 | 58.37 / 21.95 | 31.18 / 18.37
Qwen-2.5-32B               | 46.56 / 29.69 | 58.42 / 55.45 | 65.40 / 48.34 | 30.92 / 12.12 | 56.75 / 45.09 | 56.33 / 16.06 | 35.32 / 25.23
Qwen-2.5-72B               | 49.21 / 27.85 | 59.41 / 51.49 | 65.40 / 35.07 | 41.77 / 24.41 | 57.21 / 41.87 | 61.09 / 16.06 | 33.89 / 17.08
Qwen-3-32B                 | 48.71 / 26.16 | 61.88 / 55.94 | 63.98 / 37.91 | 37.61 / 14.29 | 52.91 / 36.20 | 63.57 / 21.72 | 37.00 / 17.72
Qwen-3-32B (think)         | 47.86 / 24.71 | 60.40 / 54.46 | 69.19 / 41.71 | 36.89 / 13.56 | 51.38 / 32.98 | 63.12 / 19.91 | 34.93 / 16.04
Qwen-3-30B-A3B             | 46.70 / 26.44 | 47.03 / 42.08 | 63.51 / 44.55 | 41.95 / 19.53 | 51.38 / 37.73 | 59.28 / 15.16 | 34.28 / 19.28
Qwen-3-30B-A3B (think)     | 47.51 / 27.64 | 40.10 / 37.13 | 63.98 / 45.02 | 43.94 / 22.42 | 52.76 / 40.49 | 59.95 / 15.84 | 35.96 / 20.05
Qwen-3-235B-A22B           | 49.77 / 27.46 | 64.85 / 60.89 | 70.62 / 48.34 | 39.96 / 15.91 | 56.29 / 38.34 | 68.55 / 22.62 | 30.92 / 14.88
Qwen-3-235B-A22B (think)   | 50.02 / 27.50 | 62.38 / 56.93 | 70.14 / 45.97 | 41.41 / 15.55 | 56.90 / 39.88 | 65.84 / 22.62 | 32.60 / 15.65
Deepseek-V3-671B           | 51.85 / 28.77 | 69.31 / 63.86 | 70.62 / 42.65 | 51.18 / 26.76 | 55.06 / 40.18 | 67.42 / 15.84 | 31.05 / 15.01
Deepseek-R1-671B (think)   | 54.64 / 25.45 | 80.69 / 71.78 | 78.20 / 40.28 | 44.12 / 15.37 | 54.91 / 33.44 | 79.19 / 19.23 | 34.67 / 13.32
Closed-source Models
GPT-o1mini-0912            | 49.28 / 29.72 | 63.86 / 56.93 | 69.67 / 65.88 | 50.63 / 28.21 | 50.92 / 34.36 | 60.63 / 20.36 | 31.05 / 15.27
GPT-4.1-0414               | 52.74 / 31.59 | 61.88 / 52.48 | 72.99 / 54.98 | 43.22 / 20.98 | 58.90 / 46.32 | 71.72 / 24.66 | 35.58 / 18.89
Qwen-Max                   | 48.29 / 31.27 | 63.86 / 58.42 | 68.25 / 54.03 | 41.59 / 25.14 | 58.59 / 47.09 | 56.56 / 16.97 | 30.14 / 17.21

[Figure 3, panels (a)-(k): Llama-3.1-8B, Llama-3.1-70B, Qwen2.5-7B, Qwen2.5-14B, Qwen2.5-32B, Qwen2.5-72B, Deepseek-V3, Deepseek-R1, GPT-o1mini, GPT-4.1, and Qwen-Max on the Single-Violation task.]
Figure 3: All LLMs' performance on the Single-Violation task.
[Figure 4, panels (a)-(h): MiniCPM-V2.6-8B, InternVL3-8B, InternVL3-14B, InternVL3-38B, Deepseek-VL2-27B, Qwen2.5-VL-7B, Qwen2.5-VL-32B, and Qwen2.5-VL-72B on the Single-Violation task.]
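The two numbers per cell above lend themselves to a short sketch. The exact scoring rule is not spelled out in this excerpt, so the following assumes a multi-label setting where partial credit means the predicted label set overlaps the ground truth and full credit requires an exact match; all function names are ours.

```python
def full_match(pred, gold):
    """Full credit only when the predicted label set equals the gold set."""
    return set(pred) == set(gold)

def partial_match(pred, gold):
    """Partial credit when the prediction shares at least one label with the gold set."""
    return len(set(pred) & set(gold)) > 0

def accuracies(preds, golds):
    """Return (partial accuracy, full accuracy) over a list of examples."""
    n = len(golds)
    part = sum(partial_match(p, g) for p, g in zip(preds, golds)) / n
    full = sum(full_match(p, g) for p, g in zip(preds, golds)) / n
    return part, full

# Toy example: three items with gold label sets and model predictions.
golds = [{"C", "D"}, {"A"}, {"Z"}]
preds = [{"D"}, {"Z"}, {"Z"}]
partial, full = accuracies(preds, golds)  # partial = 2/3, full = 1/3
```

Under this reading, partial accuracy always upper-bounds full accuracy, which matches every cell in the table.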
https://arxiv.org/abs/2505.17654v1
[Figure 4, panels (i)-(l): GPT-4o, Claude-3.7, Gemini-2.5-pro, and Qwen-VL-Max on the Single-Violation task.]
Figure 4: All VLMs' performance on the Single-Violation task.
[Figure 5, panels (a)-(f): Qwen-3-32B, Qwen-3-32B-think, Qwen-3-30B-A3B, Qwen-3-30B-A3B-think, Qwen-3-235B-A22B, and Qwen-3-235B-A22B-think on the Single-Violation task.]
Figure 5: Qwen3 series models' performance on the Single-Violation task.

E Bad Case

HeightGrowthProduct
• Ground-Truth: A
• Qwen-2.5-VL-72B: Let us think step by step. The information in the image mainly describes the height-growth effect on children after drinking a certain beverage. Specifically, the image mentions "taller and stronger kids within 24 weeks," and images of children appear, which matches the minor-related wording in A (height-growth efficacy claims). The image content is unrelated to the other controlled types B through X. Therefore, choose Z.
• Claude-3.7-sonnet: Let us think step by step. This image shows promotional content for a children's nutrition product. The top of the image reads "only 8 weeks," the middle says "visible results with two cups a day," and below are pictures of two children labeled "8 WEEKS" and "24 WEEKS." The bottom of the image also reads "within 8 weeks more than half of the children will be free of nutritional risk" and "taller and stronger kids within 24 weeks." This clearly implies that the product can help children grow taller and improve their nutritional status. Therefore it does not belong to controlled type A. The image content is unrelated to the other controlled types B through X. Therefore, choose Z.
Figure 6: Bad case from All-in-One task.

HeightGrowthProduct
• Ground-Truth: A
• Qwen-2.5-VL-72B: Let us think step by step. The text in the image mainly describes a product called "欧比熊" children's growth milk powder, which addresses the problem of a child not being tall enough and proposes the solution "only 4 cans needed." First, this clearly targets children, so it does not involve adult height growth, ruling out the control on adult height in option A. Second, the product involves no claims of breast-enhancement, weight-loss, aphrodisiac, or vaginal-tightening effects, so choose option Z.
• Claude-3.7-sonnet: Let us think step by step. Examining this image and its text: the image shows a children's product named "obillion," with the headline "Is your child not tall enough?" and the subtitle "only 4 cans needed." This clearly implies that the product can increase children's height. According to the controlled-type instructions, type A (height-growth efficacy claims) states that "such claims are not controlled if they involve minors such as kids/children/teenagers/adolescents," so choose option Z.
Figure 7: Bad case from All-in-One task.

WeightLossProduct
• Ground-Truth: C, D
• Qwen-2.5-VL-72B: Let us think step by step. The text in the image includes "slim down easily" and "lose 30 jin in a month," descriptions that point directly to weight-loss effects. The image shows an obese male body, implying the state before weight loss. The keywords "slimming" and "lose 30 jin in a month" clearly express weight-loss effects and use concrete numbers and time frames to promise results, which matches D (weight-loss efficacy claims). No keywords or descriptions of other controlled types are involved. Therefore choose option D.
• Claude-3.7-sonnet: Let us think step by step. Examining this image and its text: the image shows a children's product named "obillion," with the headline "Is your child not tall enough?" and the subtitle "only 4 cans needed." This clearly implies that the product can increase children's height. According to the controlled-type instructions, type A (height-growth efficacy claims) states that "such claims are not controlled if they involve minors such as kids/children/teenagers/adolescents," so choose option Z.
Figure 8: Bad case from All-in-One task.
HealthSupplementProduct
• Ground-Truth: T
• Qwen-2.5-VL-72B: Let us think step by step. The product in the image is called "祛藓王" (roughly, "Tinea-Removal King"); the text mainly describes its 30g size, suitability for all ages, whole-body use, and emphasis on gentle skin care. Judging from the name and description, the product mainly targets skin problems, especially tinea-related ones. It involves no claims of height-growth, breast-enhancement, weight-loss, aphrodisiac, or vaginal-tightening effects, nor any serious-disease or medical-grade descriptions. Therefore, the product has no clear connection to controlled types A through V. Therefore choose option Z.
• Claude-3.7-sonnet: Let us think step by step. First, observe the image: it is a skincare advertisement for a product called "祛蘚王". The image is annotated with "30g large size," "suitable for all ages," "for the whole body," and "gentle skin care," along with the price "¥55/bottle after discounts" and the promotion "buy 2 get 1 free, buy 3 get 2 free." We need to judge whether this information involves any controlled type. From the product name "祛蘚王" and the slogans, it may imply efficacy in treating some skin condition. Therefore choose option T.
Figure 9: Bad case from All-in-One task.

Body Shaping Products
• Ground-Truth: B
• GPT-4o: Let us think step by step. The product in the image uses papaya and pueraria as its main ingredients and comes as a powder oral liquid. The packaging states "suitable for A and below," implying it is intended for people whose height is below "A"; this expression may amount to an indirect description of height. In addition, the product may also relate to breast enhancement, since papaya and pueraria are commonly associated with breast-enhancement products. Taking these factors together, it may involve claims of both height-growth and breast-enhancement efficacy. Therefore, choose A, B.
• Qwen2.5-VL-7B: The image shows a product named "PAPAYA AND PUERARIA POWDER ORAL LIQUID." The packaging mentions no descriptions related to height, breasts, weight loss, aphrodisiacs, vaginal tightening, or disease treatment. The product description also contains no wording such as medical-grade, cure, or therapeutic effect, and mentions no side effects or risks. Therefore, the product does not belong to any specific controlled type. Therefore, choose Z.
Figure 10: Bad case from All-in-One task.

Men's Health Products
• Ground-Truth: Z
• Qwen2.5-VL-32B: First, observe that the product in the image is named "生効丸". According to the description, the product emphasizes traditional craft features such as "made by ancient methods," "nine rounds of steaming and sun-drying," and "no additives," but does not explicitly state any specific effect. Combining the product name "生効丸" with the common uses of similar products, one can infer that it may relate to men's health, particularly aphrodisiac effects. The packaging design and slogans also match the typical marketing style of aphrodisiac products. Therefore, the product may involve aphrodisiac efficacy claims. Therefore, choose E.
• Claude-3.7-sonnet: Let us think step by step. Analyze the product information and descriptions in this image. The label carries phrases such as "inheriting ancient formulas, following ancient methods" and "classic heritage, craftsman quality," as well as "no additives" and "nine rounds of steaming and sun-drying, prepared by ancient methods." The name "生力片" suggests effects of boosting strength or energy, and the term "生力" is often associated with male sexual function in traditional-Chinese-medicine contexts, especially for tonics and health products. This naming very likely hints at aphrodisiac effects. Therefore, choose E.
Figure 11: Bad case from All-in-One task.
arXiv:2505.17656v1 [cs.CL] 23 May 2025

Too Consistent to Detect: A Study of Self-Consistent Errors in LLMs

Hexiang Tan (α,β), Fei Sun (α), Sha Liu (β), Du Su (α), Qi Cao (α), Xin Chen (γ), Jingang Wang (γ), Xunliang Cai (γ), Yuanzhuo Wang (α), Huawei Shen (α,β), Xueqi Cheng (α,β)
α State Key Laboratory of AI Safety, Institute of Computing Technology, CAS
β University of Chinese Academy of Sciences
γ Meituan
tanhexiang21s@ict.ac.cn, sunfei@ict.ac.cn

Abstract

As large language models (LLMs) often generate plausible but incorrect content, error detection has become increasingly critical to ensure truthfulness. However, existing detection methods often overlook a critical problem we term self-consistent error, where LLMs repeatedly generate the same incorrect response across multiple stochastic samples. This work formally defines self-consistent errors and evaluates mainstream detection methods on them. Our investigation reveals two key findings: (1) Unlike inconsistent errors, whose frequency diminishes significantly as LLM scale increases, the frequency of self-consistent errors remains stable or even increases. (2) All four types of detection methods significantly struggle to detect self-consistent errors. These findings reveal critical limitations in current detection methods and underscore the need for improved methods. Motivated by the observation that self-consistent errors often differ across LLMs, we propose a simple but effective cross-model probe method that fuses hidden state evidence from an external verifier LLM. Our method significantly enhances performance on self-consistent errors across three LLM families.

1 Introduction

As large language models (LLMs) are increasingly deployed in high-stakes applications (Chen et al., 2024), their tendency to generate plausible yet incorrect content raises critical safety concerns. Therefore, error detection has become essential for ensuring the trustworthiness of LLMs (Manakul et al., 2023; Lin et al., 2024; Farquhar et al., 2024).
Numerous error detection methods rely on measuring consistency across multiple samples (Manakul et al., 2023; Lin et al., 2024; Kuhn et al., 2023; Chen et al.; Xue et al., 2025) under the assumption that consistent outputs are more likely to be correct.

Corresponding author: Fei Sun (sunfei@ict.ac.cn)

Figure 1: Frequency of self-consistent and inconsistent errors across different model scales on SciQ. Inconsistent errors decrease with model size while self-consistent errors remain stable or even slightly increase.

However, this assumption fails to account for a crucial phenomenon we define as "self-consistent error", where LLMs consistently generate semantically equivalent errors across multiple stochastic samples for the same question, in contrast to "inconsistent errors", which vary between samples. To demonstrate the importance of self-consistent errors, we analyze their frequency across the SciQ and TriviaQA datasets using nine model scales from the Qwen and Llama series. Figure 1 shows that the frequency of self-consistent errors remains stable or even increases with model scale, while inconsistent errors decrease significantly. This divergence highlights that self-consistent errors remain resistant to scaling, posing a persistent and long-term challenge. Therefore, detecting self-consistent errors becomes a critical research goal. This paper systematically evaluates four types of mainstream error detectors on self-consistent errors, including probability methods (Duan et al., 2024), prompt-based (Kadavath et
https://arxiv.org/abs/2505.17656v1
al., 2022; Tian et al.; Xiong et al.), supervised probe-based (Azaria and Mitchell, 2023; Beigi et al., 2024; Zhu et al., 2024), and consistency-based methods. We find that all methods suffer substantial performance drops on self-consistent errors, in contrast to their strong performance on inconsistent errors. Consistency-based detectors degrade the most, even falling below random guessing (AUROC $\leq 0.5$). Notably, even the strongest supervised probe, which accesses the model's hidden states, shows significant performance drops, suggesting that the hidden states of an LLM alone cannot provide sufficient signal for detecting self-consistent errors.

To improve detection of self-consistent errors, we propose a novel cross-model probe based on an observation: self-consistent errors tend to be model-specific and rarely overlap across different LLMs. Inspired by this, we feed the original model's response into an external verifier, extract its hidden states, and train a dedicated probe on them. This verifier-based probe is then integrated with the original probe to produce a unified detection score. This cross-model perspective compensates for the blind spots of the original model, enabling more reliable detection. Experiments across three LLM families and two datasets demonstrate that our method achieves substantial improvements in detecting self-consistent errors, offering a promising direction for future detection methods.

2 Self-Consistent Errors in LLMs

2.1 Task Definition

Error detection (Orgad et al., 2025; Farquhar et al., 2024), also called hallucination detection, seeks to decide whether an LLM's answer is factually correct. We use "error detection" due to the ambiguity of "hallucination" across domains (Wang Chaojun, 2020). Starting from a QA dataset $\mathcal{Q} = \{(q_i, a_i)\}_{i=1}^{N}$, where $q_i$ is a question and $a_i$ its reference answer, we obtain the model's greedy response $r^g_i = \mathcal{M}(q_i; \theta, T = 0)$, with language model $\mathcal{M}$ (parameters $\theta$) and temperature $T$.
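The dataset construction just described can be sketched as a short loop. The model call and the correctness grader below are stand-ins of our own; a real setup would query an LLM at temperature 0 and label responses with the LLM-based semantic-equivalence grader described later in the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Example:
    question: str
    response: str   # greedy response r_i^g
    correct: int    # correctness label z_i in {0, 1}

def build_detection_dataset(qa_pairs,
                            model: Callable[[str, float], str],
                            is_correct: Callable[[str, str], bool]):
    """Collect greedy (T = 0) responses and label them against references.

    `model(q, T)` and `is_correct(response, reference)` are stand-ins for an
    LLM call and a semantic-equivalence grader, respectively.
    """
    dataset = []
    for q, a in qa_pairs:
        r_greedy = model(q, 0.0)          # T = 0 -> greedy decoding
        z = int(is_correct(r_greedy, a))  # 1 if judged correct, else 0
        dataset.append(Example(q, r_greedy, z))
    return dataset

# Toy stand-ins purely for illustration.
toy_model = lambda q, T: {"capital of France?": "Paris"}.get(q, "unknown")
grader = lambda r, a: r.strip().lower() == a.strip().lower()
data = build_detection_dataset([("capital of France?", "Paris"), ("2+2?", "4")],
                               toy_model, grader)
```

At test time a detector sees only `(question, response)` and must output an error score, without access to the reference answer.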
Current work primarily targets greedy responses as they reflect the model's best choice and facilitate reproducibility. We label each prediction by comparing it with $a_i$, yielding $z_i \in \{0, 1\}$ according to the procedure in Section 3.1. This produces the error detection dataset $\mathcal{D}_{\mathcal{M}} = \{(q_i, r^g_i, z_i)\}_{i=1}^{N}$. At test time, the detector observes only $(q_i, r^g_i)$ and predicts the error score $s_i = f(q_i, r^g_i)$.

2.2 Definition of Self-Consistent Error

We categorize errors as self-consistent if the model repeatedly generates semantically equivalent incorrect responses across multiple stochastic samples for a given question, and as inconsistent otherwise.

Definition 1 (Self-Consistent Error). For a question $q_i$, we draw $k$ stochastic samples $r^s_{i,j} = \mathcal{M}(q_i; \theta, T > 0, j)$, $j = 1, \dots, k$. If all samples are semantically equivalent to the greedy response, $r^s_{i,1} \equiv r^s_{i,2} \equiv \cdots \equiv r^s_{i,k} \equiv r^g_i$, and the greedy answer is judged incorrect ($z_i = 0$), then $r^g_i$ is a self-consistent error for model $\mathcal{M}$. The relation $\equiv$ denotes semantic equivalence.

To operationalize Definition 1 and categorize errors in $\mathcal{D}_{\mathcal{M}} = \{(q_i, r^g_i, z_i)\}_{i=1}^{N}$, we proceed as follows. For every incorrect instance ($z_i = 0$), we generate $k = 15$ stochastic samples $r^s_{i,1}, \dots, r^s_{i,15}$ in addition to the greedy answer $r^g_i$. Sampling is performed with temperature $T = 0.5$, top_p $= 1$, and top_k $= -1$, which are the commonly adopted settings in prior work (Kuhn et al., 2023). Next, we test pairwise semantic equivalence within $\{r^g_i, r^s_{i,1}, \dots, r^s_{i,15}\}$ with the NLI-based criterion of Kuhn et al. (2023), treating two responses as equivalent if they mutually entail each other. An error $r^g_i$ is labeled self-consistent when all stochastic samples and the greedy response are semantically equivalent; otherwise, $r^g_i$ is labeled inconsistent.

2.3 Why Do Self-Consistent Errors Matter?

We investigate the prevalence of self-consistent errors across different model scales, including Qwen (Qwen2.5-3/7/14/32/72B-Instruct) and Llama (Llama3.2-1B/3B, 3.1-8/70B-Instruct).[1] We use the TriviaQA (TQA for short) (Joshi et al., 2017) and SciQ (Welbl et al., 2017) datasets, which represent trivia and scientific knowledge domains, respectively. Figure 1 shows how the frequency of errors changes with model scale on SciQ, with TQA shown in Appendix 2. Unlike inconsistent errors, which markedly decrease as models scale up, the number of self-consistent errors remains relatively stable, or even slightly increases. This suggests that self-consistent errors, being more resistant to model scaling, will likely remain a persistent challenge, potentially becoming more concerning as LLMs continue to scale. Therefore, analyzing and improving the capability to detect this class of errors becomes increasingly crucial.

Besides their prevalence, self-consistent errors are potentially more challenging to detect. Methods leveraging sample consistency implicitly equate consistency with correctness, thereby inherently failing to detect self-consistent errors. The effectiveness of other methods on them

[1] As this work focuses on text-only models, we exclude vision LLMs (Llama3.2-11B/90B).
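The labeling procedure of Section 2.2 can be sketched as follows. This is a minimal sketch of our own: the NLI-based mutual-entailment check of Kuhn et al. (2023) is stubbed out with normalized string equality, and the function names are hypothetical.

```python
from itertools import combinations

def mutually_entail(a: str, b: str) -> bool:
    """Stand-in for the NLI criterion: two responses are semantically
    equivalent iff each entails the other. Faked here with normalized
    string equality purely for illustration."""
    return a.strip().lower() == b.strip().lower()

def is_self_consistent_error(greedy: str, samples: list, correct: bool) -> bool:
    """Definition 1: an incorrect greedy answer whose stochastic samples
    (k = 15 at T = 0.5 in the paper) are all semantically equivalent to it."""
    if correct:
        return False
    responses = [greedy] + samples
    return all(mutually_entail(x, y) for x, y in combinations(responses, 2))

# The model keeps producing the same wrong answer: a self-consistent error.
always_wrong = is_self_consistent_error("Lisbon", ["Lisbon"] * 15, correct=False)
# One deviating sample makes it an inconsistent error instead.
sometimes_wrong = is_self_consistent_error("Lisbon", ["Lisbon"] * 14 + ["Porto"],
                                           correct=False)
```

Note that the pairwise check over all sixteen responses is stricter than comparing each sample to the greedy answer alone, matching the mutual-equivalence requirement in Definition 1.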
Table 1: AUROC performance of error detection methods. ∆ is the performance gap between CE and IE subsets.

Llama3.1-8b
Method        | SciQ-CE | SciQ-IE | ∆↓     | TQA-CE | TQA-IE | ∆↓
Probability   | 0.6325  | 0.8192  | 0.1867 | 0.6243 | 0.8455 | 0.2212
P(True)       | 0.6251  | 0.7625  | 0.1374 | 0.6836 | 0.8018 | 0.1182
SE            | 0.4608  | 0.8820  | 0.4212 | 0.5216 | 0.9226 | 0.4010
Probe (OOD)   | 0.7287  | 0.9080  | 0.1793 | 0.7396 | 0.8989 | 0.1593
+ cross-model | 0.8289  | 0.9385  | 0.1096 | 0.8024 | 0.9263 | 0.1239
Probe (ID)    | 0.7917  | 0.9249  | 0.1332 | 0.7922 | 0.9272 | 0.1350
+ cross-model | 0.8659  | 0.9408  | 0.0749 | 0.8470 | 0.9477 | 0.1007

Qwen2.5-7b
Method        | SciQ-CE | SciQ-IE | ∆↓     | TQA-CE | TQA-IE | ∆↓
Probability   | 0.4571  | 0.6594  | 0.2023 | 0.5360 | 0.7148 | 0.1788
P(True)       | 0.6158  | 0.7589  | 0.1431 | 0.7478 | 0.8373 | 0.0895
SE            | 0.4782  | 0.8247  | 0.3465 | 0.4453 | 0.9119 | 0.4666
Probe (OOD)   | 0.7487  | 0.8605  | 0.1118 | 0.7734 | 0.8911 | 0.1177
+ cross-model | 0.8211  | 0.8893  | 0.0682 | 0.8691 | 0.9457 | 0.0766
Probe (ID)    | 0.8250  | 0.8891  | 0.0641 | 0.8626 | 0.9467 | 0.0841
+ cross-model | 0.8399  | 0.9078  | 0.0679 | 0.9088 | 0.9696 | 0.0608

Qwen2.5-14b
Method        | SciQ-CE | SciQ-IE | ∆↓     | TQA-CE | TQA-IE | ∆↓
Probability   | 0.5480  | 0.7517  | 0.2037 | 0.4926 | 0.6477 | 0.1551
P(True)       | 0.5287  | 0.6744  | 0.1457 | 0.7052 | 0.8515 | 0.1463
SE            | 0.5427  | 0.8764  | 0.3337 | 0.4425 | 0.9074 | 0.4649
Probe (OOD)   | 0.7425  | 0.9025  | 0.1600 | 0.7871 | 0.9174 | 0.1303
+ cross-model | 0.7927  | 0.9263  | 0.1336 | 0.8754 | 0.9115 | 0.0361
Probe (ID)    | 0.7473  | 0.8582  | 0.1109 | 0.8512 | 0.9570 | 0.1058
+ cross-model | 0.8118  | 0.8931  | 0.0813 | 0.9332 | 0.9776 | 0.0444

Mistral-12b
Method        | SciQ-CE | SciQ-IE | ∆↓     | TQA-CE | TQA-IE | ∆↓
Probability   | 0.5858  | 0.7354  | 0.1496 | 0.6283 | 0.8605 | 0.2322
P(True)       | 0.6595  | 0.7625  | 0.1030 | 0.7502 | 0.8545 | 0.1043
SE            | 0.3633  | 0.8210  | 0.4677 | 0.4494 | 0.9093 | 0.4599
Probe (OOD)   | 0.7767  | 0.8553  | 0.0786 | 0.6927 | 0.8577 | 0.1650
+ cross-model | 0.8458  | 0.9276  | 0.0818 | 0.7872 | 0.9069 | 0.1197
Probe (ID)    | 0.7726  | 0.8652  | 0.0926 | 0.8163 | 0.9063 | 0.0900
+ cross-model | 0.8548  | 0.9253  | 0.0705 | 0.8497 | 0.9359 | 0.0862

may also be limited. For instance, probability-based methods assume that the errors have lower sequence probabilities, which may not hold for
self-consistent errors, as such consistent responses intuitively exhibit higher probabilities. Therefore, we begin by systematically evaluating the performance of existing methods on self-consistent errors.

3 How Well Do We Detect Self-Consistent Errors?

This section evaluates the performance of current error detection methods on self-consistent errors.

3.1 Experiment Setup

To ensure a fair comparison between the two types of errors for supervised probe methods, we controlled the distribution of the dataset. We created specialized subsets for the two types of errors: (i) the CE subset, containing only self-consistent errors as negative (incorrect) examples, and (ii) the IE subset, containing only inconsistent errors as negative examples. Both subsets contain an identical number of negative examples and are paired with the same number of positive examples for training. This setup controls for the influence of training data volume on the supervised probe. The performance gap ∆ between these two subsets reveals the different detection difficulty of the two types of errors.

Evaluation Metric. Following prior works (Kuhn et al., 2023; Xiong et al.; Duan et al., 2024), we evaluate error detection using the area under the receiver operating characteristic curve (AUROC). We produce the correctness label $z_i$ by employing an LLM to evaluate whether the response is semantically equivalent to the ground-truth answer, following (Tian et al.; Wei et al., 2024). Details are provided in Appendix A.5.

Baselines & LLMs. We evaluate four types of mainstream error detection methods on commonly used LLMs: Qwen2.5-7b/14b (Yang et al., 2024), Llama3.1-8b, and Mistral-12b. Training-free baselines include: (1) Probability, which uses aggregated token probabilities (Orgad et al., 2025; Mahaut et al., 2024; Malinin and Gales, 2021). (2) P(True), which prompts the LLM to self-critique correctness and uses the probability of "True" as the confidence score (Kadavath et al., 2022).
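Since AUROC is the headline metric, a minimal pairwise-ranking implementation of our own makes its meaning concrete; it is equivalent to the usual ROC-curve integral, though in practice one would call a library routine.

```python
def auroc(scores, labels):
    """AUROC as a rank statistic: the probability that a randomly chosen
    positive (correct, z = 1) example receives a higher detector score than
    a randomly chosen negative (error, z = 0) one; ties count as 0.5."""
    pos = [s for s, z in zip(scores, labels) if z == 1]
    neg = [s for s, z in zip(scores, labels) if z == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy detector scores: most, but not all, correct answers outrank the errors.
score = auroc([0.9, 0.4, 0.3, 0.7, 0.5], [1, 1, 0, 1, 0])  # 5 of 6 pairs -> 5/6
```

This pairwise view explains why 0.5 corresponds to random guessing: a detector whose scores carry no information wins half of the positive-negative comparisons.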
(3) SE (Kuhn et al., 2023; Farquhar et al., 2024) samples multiple responses and calculates the entropy of their semantic clusters. Supervised baselines include: (4) Probe, which trains a simple feedforward neural network to detect errors based on the hidden states of LLMs (Azaria and Mitchell, 2023). We use the hidden states of the last token at the layer with the best validation performance. We distinguish Probe (ID) (trained and evaluated on the same dataset) from Probe (OOD) (trained on one dataset, evaluated on another). For instance, Probe (OOD) might be trained on SciQ-CE before being evaluated on TQA-CE. OOD evaluation is critical to ensure the probe captures truthfulness features rather than overfitting to a single dataset (Orgad et al., 2025). Further details are in Appendix A.3.

3.2 Failures on Self-Consistent Errors

As shown in Table 1, existing methods perform well on inconsistent errors (AUROC up to about 90%). However, all methods suffer a substantial performance degradation on self-consistent errors. SE, which performs best among training-free methods on the IE subsets, exhibits the most dramatic decline on the CE subsets, performing at or below random guessing. This challenges the assumption that self-consistency implies correctness, revealing critical limitations in consistency-based detection methods. Although supervised methods generally outperform training-free approaches on the CE subsets, they still show