---
dataset_info:
- config_name: paw_edits
  features:
  - name: text
    dtype: string
  - name: page
    dtype: string
  - name: section_name
    dtype: string
  - name: is_paw_edit
    dtype: bool
- config_name: queries
  features:
  - name: query
    dtype: string
  - name: type
    dtype: string
license: cc-by-4.0
task_categories:
- text-classification
tags:
- animal-welfare
- wikipedia
- data-attribution
- llm-training
---
# PAW Wikipedia Attribution Dataset
Data for the paper "Small edits, large models: How Wikipedia advocacy shapes LLM values" (Navas, Brazilek & Gnauck, 2026).
## Files
### `paw_edits.csv` (236 rows)
Wikipedia sections used in training data attribution experiments. Each row is a text chunk from Wikipedia.
| Column | Description |
|---|---|
| `text` | The Wikipedia section text |
| `page` | Wikipedia article title |
| `section_name` | Section heading |
| `is_paw_edit` | Whether this section was added/edited by Pro-Animal Wikipedians |
118 PAW-edited sections (animal welfare content added by advocacy editors) paired with 118 control sections from the same Wikipedia articles.
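A minimal sketch of how the paired structure could be inspected with pandas. The rows below are invented for illustration (only the column names and the `is_paw_edit` pairing convention come from this card); with the real file you would read `paw_edits.csv` instead of constructing the frame inline.

```python
import pandas as pd

# Toy frame matching the paw_edits schema; the text/page values are placeholders.
paw_edits = pd.DataFrame(
    {
        "text": ["Section on hen welfare...", "Section on company history..."],
        "page": ["Example Corp", "Example Corp"],
        "section_name": ["Animal welfare", "History"],
        "is_paw_edit": [True, False],
    }
)

# Split PAW-edited sections from their same-article controls and pair them by page.
paw = paw_edits[paw_edits["is_paw_edit"]]
control = paw_edits[~paw_edits["is_paw_edit"]]
pairs = paw.merge(control, on="page", suffixes=("_paw", "_control"))
print(len(pairs))  # one PAW/control pair in this toy example
```

On the full file the same merge should recover the 118 PAW/control pairs described above, one (or more) per article.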
### `queries.csv` (170 rows)
Queries used to measure training data influence.
| Column | Description |
|---|---|
| `query` | The query text |
| `type` | `animal_welfare` (80 queries) or `general` (90 queries) |
Animal welfare queries ask about animal welfare at specific companies or by specific politicians; general queries ask about the same entities without mentioning animal welfare.
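A small sketch of how the two query types could be tallied with pandas. The example queries are invented placeholders; only the column names and `type` values come from this card.

```python
import pandas as pd

# Toy frame matching the queries schema; the query strings are placeholders.
queries = pd.DataFrame(
    {
        "query": [
            "How does Example Corp treat farmed animals?",
            "What products does Example Corp sell?",
        ],
        "type": ["animal_welfare", "general"],
    }
)

# Count queries per type; on the real file this should give 80 and 90.
counts = queries["type"].value_counts()
print(counts.to_dict())
```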
## Citation
```bibtex
@article{navas2026small,
  title={Small edits, large models: How Wikipedia advocacy shapes LLM values},
  author={Navas, M and Brazilek, J and Gnauck, A},
  year={2026}
}
```