---
license: mit
task_categories:
- image-classification
language:
- en
tags:
- privacy
pretty_name: CPRT Dataset
size_categories:
- 1K<n<10K
---
# Dataset Card for CPRT-Bench
<!-- Provide a quick summary of the dataset. -->
CPRT-Bench is a benchmark dataset for assessing privacy risk in images, designed to model privacy as a graded and composition-dependent phenomenon.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
The dataset contains approximately 6.7K images annotated with:
- Ordinal severity levels (4 levels of privacy risk)
- Continuous risk scores (fine-grained privacy assessment)
All images are sourced from VISPR ([Visual Privacy Advisor](https://tribhuvanesh.github.io/vpa/)). CPRT-Bench augments these images with structured annotations for privacy-risk evaluation.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Paper:** [arXiv:2603.21573](https://arxiv.org/pdf/2603.21573)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
CPRT-Bench is intended for:
- Evaluating privacy risk prediction in computer vision systems
- Benchmarking multimodal models on privacy perception tasks
- Studying calibration and ranking in risk prediction
- Research on context-aware and compositional reasoning in vision models
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This dataset is not suitable for:
- Real-world privacy decision-making systems without additional safeguards
- Legal or regulatory enforcement
- Applications requiring culturally universal definitions of privacy
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Each example includes:
- **`id`**: Filename ID corresponding to a VISPR image
- **`binary_labels`**: A nested dictionary of binary attributes grouped by privacy level
- **`level`**: An integer severity label from 1 to 4
- **`score`**: A floating-point privacy-risk score
The `binary_labels` field is organized hierarchically:
- `level1`: attributes that uniquely and directly identify a specific individual on their own
- `level2`: attributes that can reference a person or reveal sensitive personal information
- `level3`: attributes that are non-sensitive and non-identifying in isolation, but can contribute to identity linkage or profiling when combined with other non-uniquely identifying information
- `level4`: attributes that are generally benign and non-identifying, but may be regarded as private information depending on the context
Example structure (each attribute is a binary indicator, written `0/1` below):
```json
{
  "level1": {
    "biometrics": 0/1,
    "gov_ids": 0/1,
    "unique_body_markings": 0/1
  },
  "level2": {
    "contact_details": 0/1,
    "full_legal_name": 0/1,
    "non_unique_id": 0/1,
    "medical_data": 0/1,
    "financial_data": 0/1,
    "beliefs": 0/1,
    "nudity": 0/1,
    "disability": 0/1,
    "emotion_mental_health": 0/1,
    "race_ethnicity": 0/1
  },
  "level3": {
    "age": 0/1,
    "gender": 0/1,
    "location": 0/1,
    "activities": 0/1,
    "lifestyle": 0/1
  },
  "level4": {
    "property_assets": 0/1,
    "documents": 0/1,
    "metadata": 0/1,
    "background_people": 0/1
  }
}
```
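As a small sketch of working with this structure, the active attributes per level can be tallied directly from the nested dictionary. The helper name `count_active_attributes` is hypothetical; only the field layout above is assumed:

```python
# Hypothetical helper: count how many binary attributes are set (== 1)
# at each privacy level of one example's `binary_labels` dictionary.
def count_active_attributes(binary_labels):
    return {level: sum(flags.values()) for level, flags in binary_labels.items()}

# Truncated illustrative input using the field names from the card:
example_labels = {
    "level1": {"biometrics": 1, "gov_ids": 0, "unique_body_markings": 0},
    "level3": {"age": 1, "gender": 1, "location": 0, "activities": 0, "lifestyle": 0},
}
counts = count_active_attributes(example_labels)
# counts == {"level1": 1, "level3": 2}
```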
### Loading Instructions
CPRT-Bench contains annotation data only and does not distribute the underlying VISPR images. Users must download the VISPR dataset separately and resolve each `id` field to the corresponding image file.
The dataset adopts the VISPR split protocol:
- The training split is derived from the VISPR validation split
- The test split is derived from the VISPR test split
1. Download VISPR dataset:
- VISPR-test [link](https://datasets.d2.mpi-inf.mpg.de/orekondy17iccv/test2017.tar.gz)
- VISPR-val [link](https://datasets.d2.mpi-inf.mpg.de/orekondy17iccv/val2017.tar.gz)
2. Load dataset:
```python
from datasets import load_dataset
dataset = load_dataset("timtsapras23/CPRT-Bench")
```
A simple way to load the image for each example is to search for the file that matches the VISPR `id`:
```python
import os
from glob import glob

from PIL import Image

VISPR_ROOT = "/path/to/vispr/images"

def load_vispr_image(example):
    image_id = example["id"]
    candidates = [
        os.path.join(VISPR_ROOT, f"{image_id}.jpg"),
        os.path.join(VISPR_ROOT, f"{image_id}.png"),
        os.path.join(VISPR_ROOT, image_id),
    ]
    image_path = next((p for p in candidates if os.path.exists(p)), None)
    if image_path is None:
        matches = glob(os.path.join(VISPR_ROOT, f"{image_id}.*"))
        if matches:
            image_path = matches[0]
        else:
            raise FileNotFoundError(f"Could not find an image for id={image_id}")
    example["image"] = Image.open(image_path).convert("RGB")
    return example

# Example: load the first split with images attached
# dataset["train"] = dataset["train"].map(load_vispr_image)
```
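Globbing the directory once per example can be slow on large splits. A sketch of an alternative, assuming VISPR filenames are simply `<id>` plus an extension (the helper names here are hypothetical): build an id-to-path index once, then resolve each example with a dictionary lookup.

```python
import os

from PIL import Image

VISPR_ROOT = "/path/to/vispr/images"

# Hypothetical alternative to the per-example glob: scan the image
# directory once and map each filename stem (the VISPR id) to its path.
def build_image_index(root):
    index = {}
    for name in os.listdir(root):
        stem, _ = os.path.splitext(name)
        index[stem] = os.path.join(root, name)
    return index

def attach_image(example, index):
    path = index.get(example["id"])
    if path is None:
        raise FileNotFoundError(f"Could not find an image for id={example['id']}")
    example["image"] = Image.open(path).convert("RGB")
    return example

# index = build_image_index(VISPR_ROOT)
# dataset["train"] = dataset["train"].map(lambda ex: attach_image(ex, index))
```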
## Leaderboard
| Model | Spearman ρ ↑ | Pearson r ↑ | MAE ↓ |
|------|--------------|-------------|-------|
| **Gemini 3 Flash** | **0.872** | **0.884** | **0.140** |
| GPT-5.2 | 0.844 | 0.850 | 0.158 |
| Qwen3-VL (8B) + SFT (80 steps) | 0.762 | 0.799 | **0.140** |
| Qwen3-VL (4B) + SFT (80 steps) | 0.753 | 0.790 | 0.142 |
| Llama 4 Maverick | 0.763 | 0.728 | 0.233 |
| Qwen3-VL (32B) | 0.753 | 0.726 | 0.224 |
| Qwen3-VL (8B) | 0.751 | 0.636 | 0.291 |
| Pixtral (12B) | 0.720 | 0.616 | 0.311 |
| MiniCPM-V (8B) | 0.610 | 0.616 | 0.237 |
| Llama 3.2 VL (11B) | 0.571 | 0.460 | 0.344 |
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@article{tsaprazlis2026cprt,
title={Rethinking Visual Privacy: A Compositional Privacy Risk Framework for Severity Assessment with VLMs},
author={Tsaprazlis, Efthymios and others},
journal={arXiv preprint arXiv:2603.21573},
year={2026}
}
``` |