Add dataset card for PowerCLIP pre-processed CC12M

#2
by nielsr HF Staff - opened
Files changed (1)
README.md +38 -0
README.md ADDED
@@ -0,0 +1,38 @@
---
task_categories:
- zero-shot-image-classification
---

# PowerCLIP: Powerset Alignment for Contrastive Pre-Training

This repository contains the pre-processed CC12M dataset used in the paper [PowerCLIP: Powerset Alignment for Contrastive Pre-Training](https://huggingface.co/papers/2511.23170).

PowerCLIP is a contrastive pre-training framework that extends standard CLIP by optimizing region-to-phrase alignments. This dataset augments the standard CC12M image-text pairs with two additional semantic components:
1. **SAM region extraction**: semantic regions per image, generated with the Segment Anything Model (SAM).
2. **Parse-tree phrase extraction**: phrases extracted from each caption via spaCy-based constituency parsing.

## Resources
- **Paper:** [PowerCLIP: Powerset Alignment for Contrastive Pre-Training](https://huggingface.co/papers/2511.23170)
- **GitHub Repository:** [KMasaki0210/PowerCLIP](https://github.com/KMasaki0210/PowerCLIP)

## Dataset Structure
The dataset is provided in WebDataset format. Each sample in the tar archives contains the following files:

```
{key}.jpg          # Image
{key}.txt          # Caption
{key}.json         # Metadata
{key}.njson        # Parse-tree phrase indices (CSR format)
{key}.samlens.npy  # SAM region lengths (CSR format)
{key}.samcat.npy   # SAM region token indices (CSR format)
```

## Citation
```bibtex
@article{masaki2025powerclip,
  title={PowerCLIP: Powerset Alignment for Contrastive Pre-Training},
  author={Masaki, Kobayashi and others},
  journal={arXiv preprint arXiv:2511.23170},
  year={2025}
}
```