  - split: text_test
    path: data/text_test-*
---
# EVADE-Bench: Multimodal Benchmark for Evaluating and Enhancing Evasive Content Detection
[**🤗 Dataset**](https://huggingface.co/datasets/koenshen/EVADE-Bench) | [**Paper**](https://www.arxiv.org/abs/2505.17654) | [**GitHub**](https://github.com/koenshen/EVADE-Bench)
<br/>
E-commerce platforms increasingly rely on Large Language Models and Vision-Language Models to detect illicit or misleading product content. However, these models remain vulnerable to evasive content: inputs that superficially comply with platform policies while covertly conveying prohibited claims. Unlike traditional adversarial attacks that aim to compromise models or elicit harmful outputs, evasive content exploits ambiguity and context to circumvent moderation rather than directly attack systems, making it significantly harder to detect. Existing robustness benchmarks provide little guidance for this high-stakes, real-world challenge. We introduce EVADE-Bench, the first expert-curated, Chinese, multimodal benchmark specifically designed to evaluate foundation models on evasive content detection. The dataset contains 2,833 annotated text samples and 13,961 annotated images across six categories. Our benchmarking of 26 mainstream models reveals that even state-of-the-art models frequently misclassify evasive samples. To explore paths for performance enhancement, we demonstrate that decomposing multimodal reasoning, decoupling visual description from logical inference, yields significant accuracy gains. By releasing EVADE-Bench, we provide the first rigorous standard for evaluating evasive content and expose limitations in current multimodal reasoning about metaphorical information and complex rules.
<img src="framework.jpg"/>
Regarding the dataset's splits: the `image` split is the concatenation of the `image_train` and `image_test` splits, and likewise the `text` split is the concatenation of the `text_train` and `text_test` splits.