nielsr HF Staff committed on
Commit ef8a68a · verified · 1 Parent(s): 914fd81

Improve dataset card: Add metadata, links, description, and sample usage


This PR significantly improves the dataset card by:
- Adding `task_categories: ['image-text-to-text']` and `license: cc-by-nc-4.0` to the metadata.
- Including links to the paper and project page.
- Adding the paper abstract for better context.
- Providing a "Sample Usage" snippet derived directly from the GitHub README, demonstrating how to generate CLIP embeddings for the data.
- Structuring the card with clear headings for readability and moving the existing BibTeX citations to a dedicated "Citation" section.

Files changed (1)
  1. README.md +48 -0
README.md CHANGED
@@ -1,3 +1,51 @@
+ ---
+ license: cc-by-nc-4.0
+ task_categories:
+ - image-text-to-text
+ tags:
+ - hateful-memes
+ - multimodal
+ - retrieval-augmented-generation
+ - lmm
+ ---
+
+ # Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection Datasets
+
+ This repository contains the datasets used in the paper [Robust Adaptation of Large Multimodal Models for Retrieval Augmented Hateful Meme Detection](https://huggingface.co/papers/2502.13061).
+
+ [Project Page](https://rgclmm.github.io/) | [Code](https://github.com/JingbiaoMei/RGCL)
+
+ ## Abstract
+ Recent advances in Large Multimodal Models (LMMs) have shown promise in hateful meme detection, but face challenges like sub-optimal performance and limited out-of-domain generalization. This work proposes a robust adaptation framework for hateful meme detection that enhances in-domain accuracy and cross-domain generalization while preserving the general vision-language capabilities of LMMs. Our approach achieves improved robustness under adversarial attacks compared to supervised fine-tuning (SFT) models and state-of-the-art performance on six meme classification datasets, outperforming larger agentic systems. Additionally, our method generates higher-quality rationales for explaining hateful content, enhancing model interpretability.
+
+ ## Dataset Preparation
+ The datasets consist of image data and corresponding annotation data.
+
+ ### Image data
+ Copy images into the `./data/image/dataset_name/All` folder.
+ For example: `./data/image/FB/All/12345.png`, `./data/image/HarMeme/All`, `./data/image/Propaganda/All`, etc.
+
+ ### Annotation data
+ Copy the `jsonl` annotation file into the `./data/gt/dataset_name` folder.
+
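As an illustration, the layout described above can be created with a few shell commands. This is only a sketch: `FB` and `HarMeme` are the example dataset names used in this card, so substitute your own.

```shell
# Sketch: create the folder layout this card expects.
# "FB" and "HarMeme" are example dataset names from this card.
for ds in FB HarMeme; do
  mkdir -p "./data/image/$ds/All"   # images, e.g. ./data/image/FB/All/12345.png
  mkdir -p "./data/gt/$ds"          # jsonl annotation files go here
done
```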
+ ## Sample Usage
+ To generate CLIP embeddings for the datasets prior to training, you can use the provided script as follows:
+
+ ```shell
+ python3 src/utils/generate_CLIP_embedding_HF.py --dataset "FB"
+ python3 src/utils/generate_CLIP_embedding_HF.py --dataset "HarMeme"
+ ```
+
+ Similarly, to generate ALIGN embeddings:
+
+ ```shell
+ python3 src/utils/generate_ALIGN_embedding_HF.py --dataset "FB"
+ python3 src/utils/generate_ALIGN_embedding_HF.py --dataset "HarMeme"
+ ```
+
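The embedding commands above can also be driven by a small loop. This is a hypothetical convenience sketch, not a script shipped with the repository; it prints the commands (drop the `echo` to actually run them).

```shell
# Dry-run sketch (not part of the repo): print one embedding-generation
# command per dataset and per embedding model; remove "echo" to execute.
for ds in FB HarMeme; do
  for model in CLIP ALIGN; do
    echo "python3 src/utils/generate_${model}_embedding_HF.py --dataset $ds"
  done
done
```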
+ ## Citation
+ If our work helped your research, please cite our papers:
+
  ```
  @inproceedings{RGCL2024Mei,
  title = "Improving Hateful Meme Detection through Retrieval-Guided Contrastive Learning",