---
license: cc-by-4.0
language:
- en
tags:
- privacy
- llm-evaluation
- llm-simulation
- alignment
size_categories:
- 1K<n<10K
configs:
- config_name: full_dataset
  data_files:
  - split: all
    path: full_dataset.csv
- config_name: sampled
  data_files:
  - split: all
    path: dataset.csv
---
# PrivacySIM: An Evaluation Suite for LLM Simulation of User Privacy Behavior
PrivacySIM aggregates user privacy preferences from five published user studies into a unified evaluation suite for LLM simulation of user privacy behavior. Each row represents one participant's responses to a study-specific questionnaire about whether data-sharing involving an LLM is appropriate, plus persona facets (demographics, previous experience, privacy attitudes).
The five contributing studies cover distinct data-sharing contexts: LLM chatbots, LLM healthcare consultations, AI agent permissions, group chat with bots, and conversational agents. Identifiers are derived deterministically from `(user_id, domain)` via UUID5.
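A minimal sketch of how such a deterministic mapping can look; the UUID5 namespace and the way the two fields are joined are assumptions here, and the code repository holds the actual implementation:

```python
import uuid

# Hypothetical sketch: map (user_id, domain) to a stable identifier.
# The namespace and join format are assumptions, not the pipeline's exact
# choices; see the code repository for the real implementation.
def derive_row_id(user_id: str, domain: str) -> str:
    return str(uuid.uuid5(uuid.NAMESPACE_URL, f"{user_id}/{domain}"))

# The same (user_id, domain) pair always yields the same UUID.
assert derive_row_id("P017", "LLM chatbot") == derive_row_id("P017", "LLM chatbot")
```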
## Files

| File | Rows | Purpose |
|---|---|---|
| `full_dataset.csv` | 2,000 | All users from all five studies; no filtering or sampling. |
| `dataset.csv` | 1,000 | Sampled to 200 users per domain (healthcare filtered to records with informative previous-experience values), `random_state=42`; see the sketch below. |
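A minimal sketch of that per-domain sampling, assuming a plain `groupby` draw with `random_state=42` and omitting the healthcare previous-experience filter; the shipped `dataset.csv` is authoritative:

```python
import pandas as pd

full = pd.read_csv("full_dataset.csv")

# Draw 200 users per domain with a fixed seed. The healthcare-specific
# previous-experience filter applied before sampling is omitted here.
sampled = (
    full.groupby("domain", group_keys=False)
        .apply(lambda g: g.sample(n=200, random_state=42))
        .reset_index(drop=True)
)
```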
## Schema notes

Several columns (`demographics`, `previous_experiences`, `privacy_attitudes`, `responses`, `preferences`) hold dict-shaped values. Four of the five contributing studies stored these as Python dict reprs (single-quoted); the fifth uses JSON. Both forms parse with `ast.literal_eval`:
```python
import ast
import pandas as pd

df = pd.read_csv("dataset.csv")
prefs = ast.literal_eval(df.iloc[0]["preferences"])
# -> {question_text: user_response, ...}
```
A column-by-column schema for every file is documented at docs/SCHEMA.md in the accompanying code repository.
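To materialize every dict-shaped column at once, one convenient pattern is the loop below, assuming every cell in these columns is populated:

```python
import ast
import pandas as pd

DICT_COLUMNS = [
    "demographics", "previous_experiences", "privacy_attitudes",
    "responses", "preferences",
]

df = pd.read_csv("dataset.csv")
for col in DICT_COLUMNS:
    # Handles both the single-quoted dict reprs and the JSON-style values.
    df[col] = df[col].apply(ast.literal_eval)
```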
## Loading

```python
from datasets import load_dataset

# Full unfiltered dataset (2,000 users)
ds = load_dataset("jamesflemings/PrivacySIM", "full_dataset", split="all")

# 1,000-user sample used in the paper
ds = load_dataset("jamesflemings/PrivacySIM", "sampled", split="all")
```
## Source studies

The data is derived from five public user studies. Each row's `domain` column identifies the source study; a short selection example follows the table below. Please cite the original studies in addition to this dataset if you use the corresponding rows.
| `domain` | Source | Users in `full_dataset.csv` | Users in `dataset.csv` |
|---|---|---|---|
| LLM healthcare consultation | LLM health consultation user study | 846 | 200 |
| Chatbot group chat | Bot-among-us chatbot group chat study | 374 | 200 |
| LLM chatbot | LLM chatbot privacy norms factorial vignette study | 300 | 200 |
| LLM conversational agents | LLM conversational-agent privacy preferences study | 277 | 200 |
| AI agent permissions | AI agent permission user study | 203 | 200 |
| **Total** | | 2,000 | 1,000 |
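A minimal sketch for isolating the rows attributable to a single source study; the domain string used here is illustrative, so check the actual values stored in the data first:

```python
import pandas as pd

df = pd.read_csv("dataset.csv")

# The stored domain strings may differ slightly from the display names in
# the table above, so inspect them before filtering.
print(df["domain"].unique())

# Subset the rows that came from one source study (adjust to the actual label).
chatbot_rows = df[df["domain"] == "LLM chatbot"]
```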
## Code

The full reproducibility pipeline (data download, processing, simulation, evaluation, plotting) is at [github.com/james-flemings/PrivacySIM](https://github.com/james-flemings/PrivacySIM). See `docs/SCHEMA.md` there for the column-by-column reference.
## License
The combined dataset is released under Creative Commons Attribution 4.0 International (CC BY 4.0). Per-row content remains under each source study's original license:
| `domain` | Source license |
|---|---|
| LLM healthcare consultation | MIT |
| Chatbot group chat | MIT |
| LLM chatbot | CC0 1.0 (public-domain dedication) |
| AI agent permissions | CC BY 4.0 |
| LLM conversational agents | CC BY 4.0 |
Full attribution, license texts, copyright notices, and indication of
modifications for every source study are provided in the
THIRD_PARTY_LICENSES.md file shipped alongside
the dataset. Anyone redistributing or building on this dataset must preserve
that attribution file (or an equivalent).