koenshen committed
Commit d272e7f · verified · 1 Parent(s): de8ee76

Update README.md

Files changed (1): README.md +2 -3
README.md CHANGED
@@ -73,11 +73,10 @@ configs:
   - split: text_test
     path: data/text_test-*
 ---
-# EVADE: Multimodal Benchmark for Evasive Content Detection in E-Commerce Applications
+# EVADE-Bench: Multimodal Benchmark for Evaluating and Enhancing Evasive Content Detection
 [**🤗 Dataset**](https://huggingface.co/datasets/koenshen/EVADE-Bench) | [**Paper**](https://www.arxiv.org/abs/2505.17654) | [**GitHub**](https://github.com/koenshen/EVADE-Bench)
 </br>
-E-commerce platforms increasingly rely on Large Language Models (LLMs) and VisionLanguage Models (VLMs) to detect illicit or misleading product content. However, these models remain vulnerable to \emph{evasive content}: inputs (text or images) that superficially comply with platform policies while covertly conveying prohibited claims. Unlike traditional adversarial attacks that induce overt failures, evasive content exploits ambiguity and context, making it far harder to detect. Existing robustness benchmarks provide little guidance for this high-stakes, real-world challenge. We introduce \textbf{EVADE}, the first expert-curated, Chinese, multimodal benchmark specifically designed to evaluate foundation models on evasive content detection in e-commerce. The dataset contains 2,833 annotated text samples and 13,961 images spanning six high-risk product categories, including body shaping, height growth, and health supplements. Two complementary tasks assess distinct capabilities: \emph{Single-Risk}, which probes fine-grained reasoning under short prompts, and \emph{All-in-One}, which tests long-context reasoning by merging overlapping policy rules into unified instructions. Notably, the All-in-One setting significantly narrows the performance gap between partial and exact-match accuracy, suggesting that clearer rule definitions improve alignment between human and model judgment. We benchmark 26 mainstream LLMs and VLMs and observe substantial performance gaps: even state-of-the-art models frequently misclassify evasive samples. Through detailed error analysis, we identify critical challenges including metaphorical phrasing, misspelled or homophonic terms, and optical character recognition (OCR) limitations in VLMs. Retriever-Augmented Generation (RAG) further improves model performance in long-context scenarios, indicating promise for context-aware augmentation strategies. By releasing EVADE and strong baselines, we provide the first rigorous standard for evaluating evasive-content detection, expose fundamental limitations in current multimodal reasoning, and lay the groundwork for safer and more transparent content moderation systems in e-commerce.
-
+E-commerce platforms increasingly rely on Large Language Models and Vision-Language Models to detect illicit or misleading product content. However, these models remain vulnerable to evasive content, which refers to inputs that superficially comply with platform policies while covertly conveying prohibited claims. Unlike traditional adversarial attacks that aim to compromise models or elicit harmful outputs, evasive content exploits ambiguity and context to circumvent moderation rather than directly attack systems, making it significantly harder to detect. Existing robustness benchmarks provide little guidance for this high-stakes, real-world challenge. We introduce EVADE-Bench, the first expert-curated, Chinese, multimodal benchmark specifically designed to evaluate foundation models on evasive content detection. The dataset contains 2,833 annotated text samples and 13,961 annotated images across six categories. Our benchmarking of 26 mainstream models reveals that even state-of-the-art models frequently misclassify evasive samples. To explore paths for performance enhancement, we demonstrate that decomposing multimodal reasoning through the decoupling of visual description from logical inference yields significant accuracy gains. By releasing EVADE-Bench, we provide the first rigorous standard for evaluating evasive content and expose limitations in current multimodal reasoning about metaphorical information and complex rules.
 <img src="framework.jpg"/>
 
 Across the dataset's splits, the image split equals the sum of the image_train and image_test splits; likewise, the text split equals the sum of the text_train and text_test splits.
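The split relationship stated above can be sketched as a simple consistency check. This is a minimal sketch with invented train/test counts; only the dataset-wide totals (13,961 images, 2,833 text samples) come from the README, and the breakdown below is purely illustrative.

```python
def split_sizes_consistent(sizes: dict) -> bool:
    """Check that each aggregate split equals the sum of its train/test parts,
    as the README states for the image and text splits."""
    return all(
        sizes[name] == sizes[f"{name}_train"] + sizes[f"{name}_test"]
        for name in ("image", "text")
    )

# Hypothetical per-split counts for illustration; only the totals
# (13,961 images, 2,833 text samples) appear in the README.
example = {
    "image": 13961, "image_train": 11961, "image_test": 2000,
    "text": 2833, "text_train": 2266, "text_test": 567,
}
print(split_sizes_consistent(example))  # True for this made-up breakdown
```

The same check could be run against the real dataset by substituting the actual split lengths after loading it from the Hub.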