timtsapras23 committed ab3684c (verified; parent: 7616ec9)

Update README.md

language:
- en
tags:
- privacy
- cv
pretty_name: CPRT Dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for CPRT-Bench

<!-- Provide a quick summary of the dataset. -->

CPRT-Bench is a benchmark dataset for assessing privacy risk in images, designed to model privacy as a graded and composition-dependent phenomenon.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

The dataset contains approximately 6.7K images annotated with:
- Ordinal severity levels (four levels of privacy risk)
- Continuous risk scores (fine-grained privacy assessment)

All images are sourced from VISPR (the Visual Privacy dataset). CPRT-Bench augments these images with structured annotations for privacy risk evaluation.

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Paper:** [https://arxiv.org/pdf/2603.21573](https://arxiv.org/pdf/2603.21573)

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

CPRT-Bench is intended for:
- Evaluating privacy risk prediction in computer vision systems
- Benchmarking multimodal models on privacy perception tasks
- Studying calibration and ranking in risk prediction
- Research on context-aware and compositional reasoning in vision models

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This dataset is not suitable for:
- Real-world privacy decision-making systems without additional safeguards
- Legal or regulatory enforcement
- Applications requiring culturally universal definitions of privacy
## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each example includes:

- **`id`**: Filename ID corresponding to a VISPR image
- **`binary_labels`**: A nested dictionary of binary attributes grouped by privacy level
- **`level`**: An integer severity label from 1 to 4
- **`score`**: A floating-point privacy-risk score

The `binary_labels` field is organized hierarchically:

- `level1`: attributes that uniquely and directly identify a specific individual on their own
- `level2`: attributes that can reference a person or reveal sensitive personal information
- `level3`: attributes that are non-sensitive and non-identifying in isolation, but can contribute to identity linkage or profiling when combined with other non-uniquely identifying information
- `level4`: attributes that are generally benign and non-identifying, but may be regarded as private information depending on the context

Example structure (each attribute takes the value `0` or `1`):

```json
{
  "level1": {
    "biometrics": 0/1,
    "gov_ids": 0/1,
    "unique_body_markings": 0/1
  },
  "level2": {
    "contact_details": 0/1,
    "full_legal_name": 0/1,
    "non_unique_id": 0/1,
    "medical_data": 0/1,
    "financial_data": 0/1,
    "beliefs": 0/1,
    "nudity": 0/1,
    "disability": 0/1,
    "emotion_mental_health": 0/1,
    "race_ethnicity": 0/1
  },
  "level3": {
    "age": 0/1,
    "gender": 0/1,
    "location": 0/1,
    "activities": 0/1,
    "lifestyle": 0/1
  },
  "level4": {
    "property_assets": 0/1,
    "documents": 0/1,
    "metadata": 0/1,
    "background_people": 0/1
  }
}
```
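For downstream analysis, the nested `binary_labels` dictionary is often easier to work with flattened. A minimal sketch (the helper names and the example annotation below are hypothetical; the field layout follows the structure shown above):

```python
def triggered_attributes(binary_labels):
    """Flatten the nested binary_labels dict into a sorted list of
    (level, attribute) pairs whose value is 1."""
    return [
        (level, attr)
        for level, attrs in sorted(binary_labels.items())
        for attr, value in attrs.items()
        if value == 1
    ]


def highest_severity(binary_labels):
    """Return the most severe triggered level, or None if no attribute
    is active. Lexicographic min works because 'level1' (most severe)
    sorts before 'level4' (least severe)."""
    triggered = triggered_attributes(binary_labels)
    if not triggered:
        return None
    return min(level for level, _ in triggered)


# Hypothetical example annotation:
labels = {
    "level1": {"biometrics": 0, "gov_ids": 0, "unique_body_markings": 0},
    "level2": {"medical_data": 1, "nudity": 0},
    "level3": {"age": 1, "gender": 1},
    "level4": {"metadata": 0},
}
print(triggered_attributes(labels))
# [('level2', 'medical_data'), ('level3', 'age'), ('level3', 'gender')]
print(highest_severity(labels))  # level2
```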
### Loading Instructions

CPRT-Bench contains annotation data only and does not distribute the underlying VISPR images. Users must download the VISPR dataset separately and resolve each `id` field to the corresponding image file.

The dataset adopts the VISPR split protocol:
- The training split is derived from the VISPR validation split
- The test split is derived from the VISPR test split

1. Download the VISPR dataset:
   - VISPR-test [link](https://datasets.d2.mpi-inf.mpg.de/orekondy17iccv/test2017.tar.gz)
   - VISPR-val [link](https://datasets.d2.mpi-inf.mpg.de/orekondy17iccv/val2017.tar.gz)
2. Load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("timtsapras23/CPRT-Bench")
```

A simple way to load the image for each example is to search for the file that matches the VISPR `id`:

```python
import os
from glob import glob

from PIL import Image

VISPR_ROOT = "/path/to/vispr/images"


def load_vispr_image(example):
    image_id = example["id"]

    # Try the most common filename patterns first.
    candidates = [
        os.path.join(VISPR_ROOT, f"{image_id}.jpg"),
        os.path.join(VISPR_ROOT, f"{image_id}.png"),
        os.path.join(VISPR_ROOT, image_id),
    ]

    image_path = next((p for p in candidates if os.path.exists(p)), None)
    if image_path is None:
        # Fall back to matching any file extension.
        matches = glob(os.path.join(VISPR_ROOT, f"{image_id}.*"))
        if matches:
            image_path = matches[0]
        else:
            raise FileNotFoundError(f"Could not find an image for id={image_id}")

    example["image"] = Image.open(image_path).convert("RGB")
    return example


# Example: load the train split with images attached
# dataset["train"] = dataset["train"].map(load_vispr_image)
```
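Before mapping over a full split, it can be worth checking that every `id` actually resolves to a file on disk. A small sketch using the same matching rules as the loader above (the helper name `find_missing_ids` is hypothetical):

```python
import os
from glob import glob


def find_missing_ids(image_ids, root):
    """Return the ids (from the dataset's `id` field) for which no image
    file can be found under `root`."""
    missing = []
    for image_id in image_ids:
        # Same candidate patterns as the image loader.
        candidates = [
            os.path.join(root, f"{image_id}.jpg"),
            os.path.join(root, f"{image_id}.png"),
            os.path.join(root, image_id),
        ]
        if not any(os.path.exists(p) for p in candidates):
            if not glob(os.path.join(root, f"{image_id}.*")):
                missing.append(image_id)
    return missing


# Usage sketch (paths are placeholders):
# missing = find_missing_ids(dataset["train"]["id"], VISPR_ROOT)
# print(f"{len(missing)} ids without a matching image file")
```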
## Leaderboard

| Model | Spearman ρ ↑ | Pearson r ↑ | MAE ↓ |
|------|--------------|-------------|-------|
| **Gemini 3 Flash** | **0.872** | **0.884** | **0.140** |
| GPT-5.2 | 0.844 | 0.850 | 0.158 |
| Qwen3-VL (8B) + SFT (80 steps) | 0.762 | 0.799 | **0.140** |
| Qwen3-VL (4B) + SFT (80 steps) | 0.753 | 0.790 | 0.142 |
| Llama 4 Maverick | 0.763 | 0.728 | 0.233 |
| Qwen3-VL (32B) | 0.753 | 0.726 | 0.224 |
| Qwen3-VL (8B) | 0.751 | 0.636 | 0.291 |
| Pixtral (12B) | 0.720 | 0.616 | 0.311 |
| MiniCPM-V (8B) | 0.610 | 0.616 | 0.237 |
| Llama 3.2 VL (11B) | 0.571 | 0.460 | 0.344 |
## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@article{tsaprazlis2026cprt,
  title={Rethinking Visual Privacy: A Compositional Privacy Risk Framework for Severity Assessment with VLMs},
  author={Tsaprazlis, Efthymios and others},
  journal={arXiv preprint arXiv:2603.21573},
  year={2026}
}
```