---
license: cc-by-4.0
language:
- en
tags:
- privacy
- llm-evaluation
- llm-simulation
- alignment
size_categories:
- 1K<n<10K
configs:
- config_name: full_dataset
data_files:
- split: all
path: full_dataset.csv
- config_name: sampled
data_files:
- split: all
path: dataset.csv
---
# PrivacySIM: An Evaluation Suite for LLM Simulation of User Privacy Behavior
PrivacySIM aggregates user privacy preferences from five published user
studies into a unified evaluation suite for LLM simulation of user
privacy behavior. Each row represents one participant's responses to a
study-specific questionnaire about whether data-sharing involving an LLM
is appropriate, plus persona facets (demographics, previous experience, privacy attitudes).
The five contributing studies cover distinct data-sharing contexts:
LLM chatbots, LLM healthcare consultations, AI agent permissions, group chat
with bots, and conversational agents. Identifiers are derived
deterministically from `(user_id, domain)` via UUID5.
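A minimal sketch of such a UUID5 derivation is below. The namespace constant and the `"user_id|domain"` name format are illustrative assumptions, not the pipeline's actual scheme; the code repository is authoritative.

```python
import uuid

def derive_id(user_id: str, domain: str) -> str:
    """Deterministic participant ID from (user_id, domain).

    NOTE: uuid.NAMESPACE_DNS and the "user_id|domain" name format are
    illustrative assumptions; see the PrivacySIM pipeline for the real scheme.
    """
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{user_id}|{domain}"))
```

Because UUID5 is a deterministic hash, the same `(user_id, domain)` pair always yields the same identifier, while the same user appearing in two domains gets two distinct identifiers.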
## Files
| File | Rows | Purpose |
|---|---|---|
| `full_dataset.csv` | 2,000 | All users from all five studies, no filtering or sampling. |
| `dataset.csv` | 1,000 | 200 users sampled per domain with `random_state=42`; the healthcare study is first filtered to records with informative previous-experience values. |
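Given those parameters, the per-domain sampling can be sketched with pandas `groupby(...).sample(...)`. The frame below is a tiny synthetic stand-in, and the healthcare pre-filter is omitted; the real filtering and sampling logic lives in the code repository.

```python
import pandas as pd

# Tiny synthetic stand-in for full_dataset.csv (the real file has 2,000 rows).
full = pd.DataFrame({
    "domain": ["LLM chatbot"] * 5 + ["AI agent permissions"] * 5,
    "user_id": range(10),
})

# 2 users per domain here for illustration; the card uses n=200, random_state=42.
sampled = full.groupby("domain", group_keys=False).sample(n=2, random_state=42)
```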
## Schema notes
Several columns (`demographics`, `previous_experiences`, `privacy_attitudes`,
`responses`, `preferences`) hold dict-shaped values. Four of the five
contributing studies stored these as Python dict reprs (single-quoted);
the fifth uses JSON. Both forms parse with `ast.literal_eval`, since a
JSON object with double-quoted strings is also a valid Python literal:
```python
import ast
import pandas as pd
df = pd.read_csv("dataset.csv")
prefs = ast.literal_eval(df.iloc[0]["preferences"])
# -> {question_text: user_response, ...}
```
A column-by-column schema for every file is documented at
[docs/SCHEMA.md](https://github.com/<your-username>/<repo>/blob/main/docs/SCHEMA.md)
in the accompanying code repository.
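For defensive parsing, a small helper with a JSON fallback covers any cell that `ast.literal_eval` rejects (for example, JSON `true`/`false`/`null` literals, should they appear). This is a sketch, not part of the dataset's tooling:

```python
import ast
import json

def parse_cell(raw: str) -> dict:
    """Parse one dict-shaped CSV cell.

    Tries Python-literal syntax first (four of the five studies),
    then falls back to JSON for anything literal_eval rejects.
    """
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return json.loads(raw)
```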
## Loading
```python
from datasets import load_dataset
# Full unfiltered dataset (2,000 users)
ds = load_dataset("jamesflemings/PrivacySIM", "full_dataset", split="all")
# 1,000-user sample used in the paper (200 per domain)
ds = load_dataset("jamesflemings/PrivacySIM", "sampled", split="all")
```
## Source studies
The data is derived from five public user studies. Each row's `domain`
column identifies the source study. **Please cite the original studies in
addition to this dataset** if you use the corresponding rows.
| `domain` | Source | Users in `full_dataset.csv` | Users in `dataset.csv` |
|---|---|---|---|
| `LLM healthcare consultation` | [LLM health consultation user study](https://github.com/Cristliu/LLMHealthPrivacy_UserStudy) | 846 | 200 |
| `Chatbot group chat` | [Bot-among-us chatbot group chat study](https://github.com/csienslab/bot-among-us) | 374 | 200 |
| `LLM chatbot` | [LLM chatbot privacy norms factorial vignette study](https://github.com/WeberLab-UW/AIES_2025_ChatbotPrivacy) | 300 | 200 |
| `LLM conversational agents` | [LLM conversational-agent privacy preferences study](https://osf.io/2vqws/) | 277 | 200 |
| `AI agent permissions` | [AI agent permission user study](https://github.com/llm-platform-security/ai-agent-permissions) | 203 | 200 |
| **Total** | | **2,000** | **1,000** |
## Code
The full reproducibility pipeline (data download, processing, simulation,
evaluation, plotting) is at
[github.com/james-flemings/PrivacySIM](https://github.com/james-flemings/PrivacySIM).
See `docs/SCHEMA.md` for the column-by-column reference.
## License
The combined dataset is released under
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/legalcode).
Per-row content remains under each source study's original license:
| `domain` | Source license |
|---|---|
| `LLM healthcare consultation` | MIT |
| `Chatbot group chat` | MIT |
| `LLM chatbot` | CC0 1.0 (public-domain dedication) |
| `AI agent permissions` | CC BY 4.0 |
| `LLM conversational agents` | CC BY 4.0 |
Full attribution, license texts, copyright notices, and indication of
modifications for every source study are provided in the
[`THIRD_PARTY_LICENSES.md`](./THIRD_PARTY_LICENSES.md) file shipped alongside
the dataset. Anyone redistributing or building on this dataset must preserve
that attribution file (or an equivalent).