---
language:
- en
license: odc-by
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: provider
    dtype: string
  - name: name
    dtype: string
  - name: size
    dtype: string
  - name: variant
    dtype: string
  - name: version
    dtype: string
  - name: sector
    dtype: string
  - name: openness
    dtype: string
  - name: region
    dtype: string
  - name: country
    dtype: string
  - name: source_id
    dtype: string
  - name: is_first_party
    dtype: bool
  - name: category
    dtype: int64
  - name: year
    dtype: int64
  - name: metadata
    dtype: string
  - name: score
    dtype: float64
  - name: is_model_release
    dtype: bool
  splits:
  - name: train
    num_bytes: 1180481
    num_examples: 4241
  download_size: 59292
  dataset_size: 1180481
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for social_impact_eval_annotations
The `social_impact_eval_annotations` dataset contains annotations for first-party and third-party social impact evaluation reporting practices for 186 models along seven dimensions.
## Dataset Details
### Dataset Description
The `social_impact_eval_annotations` dataset comprises analyzed social impact evaluation reporting for 186 foundation models released between 2018 and 2025. Each model's reporting is evaluated across seven social impact dimensions: bias and representational harms, sensitive content, disparate performance, environmental costs and emissions, privacy and data protection, financial costs, and data/content moderation labor. The reporting is scored on a 0-3 scale to indicate the depth and clarity of reported evaluations.

- **Curated by:** EvalEval Coalition
- **Shared by:** EvalEval Coalition
- **Language(s) (NLP):** English
- **License:** Open Data Commons Attribution License (ODC-By)

### Dataset Sources
- **Repository:** https://github.com/evaleval/social_impact_eval_annotations_code
- **Paper:** https://arxiv.org/pdf/2511.05613

## Uses 
### Direct Use
This dataset is intended for:
- Analyzing social impact evaluation reporting
- Informing the development of evaluation standards and reporting frameworks 
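As one illustration of the first use, reporting-detail scores can be aggregated by social impact category. The sketch below uses a few invented rows mirroring the dataset schema; a real analysis would iterate over the `train` split instead:

```python
from collections import defaultdict

# Invented evaluation instances mirroring the dataset schema (values are illustrative)
rows = [
    {"category": 1, "score": 2.0, "is_first_party": True},
    {"category": 1, "score": 3.0, "is_first_party": False},
    {"category": 4, "score": 1.0, "is_first_party": True},
]

# Average reporting-detail score (0-3 scale) per social impact category
scores_by_category = defaultdict(list)
for row in rows:
    scores_by_category[row["category"]].append(row["score"])
mean_scores = {cat: sum(s) / len(s) for cat, s in scores_by_category.items()}
print(mean_scores)  # {1: 2.5, 4: 1.0}
```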

### Out-of-Scope Use
This dataset should not be used for:
- Assessing actual model societal impact or deployment suitability: scores reflect the presence and detail of reporting, not the quality or adequacy of the evaluations themselves

## Dataset Structure

Each row represents one evaluation instance, capturing the level of reporting detail given for a specific model evaluated on one social impact category in one source (e.g., a paper, leaderboard, or blog post). A single model can have multiple rows (one per evaluation category per source).

### Data Fields

* `provider`: Organization that developed the model (str)
* `name`: Base model name (str)
* `size`: Model parameter count when available (str)
* `variant`: Model variant specification (str)
* `version`: Specific model version or release identifier (str)
* `sector`: Organization sector (str)
* `openness`: Model weight accessibility (str)
* `region`: Provider headquarters region (str)
* `country`: Provider headquarters country (str)
* `source_id`: Unique identifier for the source of the evaluation report (str)
* `is_first_party`: Whether reported evaluation was conducted by the model provider (bool)
* `category`: Social impact category identifier (int, 1-7) corresponding to the seven dimensions
* `year`: Year of report (int)
* `metadata`: Source metadata including URLs, full release dates, and other provenance details, stored as a JSON string (str)
* `score`: The level of reporting detail of the evaluation, scored on 0-3 scale (float)
* `is_model_release`: Whether instance is from model release-time reporting (bool)
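For reference, a single row following the schema above looks like the sketch below (all field values are invented for demonstration). Since `metadata` is stored as a JSON string, it can be decoded with the standard library:

```python
import json

# Illustrative row following the schema above (values are invented)
row = {
    "provider": "ExampleAI",
    "name": "example-model",
    "size": "7B",
    "variant": "chat",
    "version": "1.0",
    "sector": "industry",
    "openness": "open weights",
    "region": "North America",
    "country": "USA",
    "source_id": "src-0001",
    "is_first_party": True,
    "category": 1,  # 1 = Bias, Stereotypes, and Representational Harms
    "year": 2024,
    "metadata": '{"url": "https://example.com/report", "release_date": "2024-03-01"}',
    "score": 2.0,
    "is_model_release": True,
}

# Decode the JSON-encoded metadata field into a dict
meta = json.loads(row["metadata"])
print(meta["url"])  # https://example.com/report
```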


## Dataset Creation
### Curation Rationale
As foundation models become central to high-stakes AI systems, governance frameworks increasingly rely on evaluations to assess risks and capabilities. While general capability evaluations are common, social impact assessments remain fragmented, inconsistent, or absent. 

This dataset was created to move beyond anecdotal evidence and provide systematic documentation of how model developers and the research community evaluate and report on societal impacts of AI systems.


### Source Data
#### Data Collection and Processing
For details, please see Section 3 in our paper.

We first compiled a list of models by triangulating across public sources (e.g., FMTI, LMArena). Next, we expanded this list with providers referenced in leaderboards and technical reports. We selected all official model releases, including those fine-tuned by the original developer but excluding community fine-tuned versions. For multimodal models, we include architecturally distinct systems that are recognized as foundation models in the literature or have widespread adoption in the research community. We disambiguate consumer-facing applications (e.g., ChatGPT) to the underlying model where possible and skip them otherwise.

For these models, we identified sources for first-party and third-party reports through complementary searches:
- **First-party**: Manual search of provider websites for papers, technical reports, model cards, system cards, blogs, and press releases
- **Third-party**: Automatic search using Paperfinder for peer-reviewed academic papers
- **Leaderboards**: Targeted queries on Google Search and Hugging Face Spaces

#### Who are the source data producers?
1. First-party developers: Foundation model developers from industry, academia, government, and non-profit organizations.
2. Third-party evaluators: Independent researchers, academic institutions, and evaluation organizations reporting conducted social impact evaluations on released models.

#### Annotation process

In total, we compiled data from 186 first-party release-time sources and 248 post-release sources. Of the post-release sources, 211 are fully third-party, 17 are fully first-party, and 20 are mixed: sources by model providers that report results both for their own models (labeled as first-party) and for other providers' models (labeled as third-party).

This yields 4,241 evaluation instances. Each instance was annotated against the seven social impact dimensions using a standardized guide. Annotations were performed by individual researchers, with manual spot checks for consistency.

The social impact categories are:
1. Bias, Stereotypes, and Representational Harms
2. Cultural Values and Sensitive Content
3. Disparate Performance
4. Environmental Costs and Carbon Emissions
5. Privacy and Data Protection
6. Financial Costs
7. Data and Content Moderation Labor
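For analysis, the integer `category` field can be mapped back to these dimension names; a minimal sketch (the dictionary keys follow the numbering above):

```python
# Mapping from the integer `category` field to the seven dimension names
CATEGORY_NAMES = {
    1: "Bias, Stereotypes, and Representational Harms",
    2: "Cultural Values and Sensitive Content",
    3: "Disparate Performance",
    4: "Environmental Costs and Carbon Emissions",
    5: "Privacy and Data Protection",
    6: "Financial Costs",
    7: "Data and Content Moderation Labor",
}

def category_name(category_id: int) -> str:
    """Return the social impact dimension name for a category id (1-7)."""
    try:
        return CATEGORY_NAMES[category_id]
    except KeyError:
        raise ValueError(f"category must be in 1-7, got {category_id}")

print(category_name(3))  # Disparate Performance
```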

The scoring criteria are:
- **0**: No mention of the category, or only generic references without evaluation details.
- **1**: Vague mention of evaluation (e.g., “We check for X” or “Our model can exhibit X”).
- **2**: Evaluation described with concrete information about methods or results (e.g., “Our model scores X% on the Y benchmark”) but lacking methodological detail.
- **3**: Evaluation methods described in sufficient detail to enable meaningful understanding and/or reproduction. Where applicable, the study design is documented (dataset, metric, experiment design, annotators), and results are contextualized with assumptions, limitations, and practical implications.

For cost-related categories (environmental and financial), we applied slightly modified criteria to account for reporting based on hardware specifications or resource usage rather than benchmark-style evaluations:
- **0**: No reporting.
- **1**: Vague mention (as above), or reported technical details (e.g., FLOPs, GPU type, runtime) that could be used to estimate costs indirectly.
- **2**: Concrete values reported for a non-trivial part of model development or hosting, but derivation method unclear.
- **3**: Concrete values reported together with contextual details and the derivation method.

For financial costs, we excluded first-party customer-facing pricing from consideration, as it reflects product strategy rather than system costs. Third-party cost estimates for completing specific tasks were included.

#### Who are the annotators?
Researchers from the EvalEval Coalition created the annotations.

#### Personal and Sensitive Information
The dataset contains no personal information about individuals. All data sources are publicly available documents (technical reports, academic papers, model cards, etc.).

## Bias, Risks, and Limitations
This dataset may overrepresent models from prominent providers and English sources.

Our scoring captures the presence and specificity of reporting, but does not reflect the methodological soundness, depth, or coverage of the evaluations. Missing instances in this dataset may stem from limitations in our search approach or may reflect gaps in reporting rather than gaps in evaluation practice.

### Recommendations
Analyses should consider potential overrepresentation of prominent providers and English sources. 
Scores should be interpreted as perceived quality of reporting practices rather than actual model societal impact or capabilities.

## Citation
**BibTeX:**
```bibtex
@misc{reuel2025social,
    title={Who Evaluates AI's Social Impacts? Mapping Coverage and Gaps in First and Third Party Evaluations},
    author={Anka Reuel and Avijit Ghosh and Jenny Chim and Andrew Tran and Yanan Long and Jennifer Mickel and Usman Gohar and Srishti Yadav and Pawan Sasanka Ammanamanchi and Mowafak Allaham and Hossein A. Rahmani and Mubashara Akhtar and Felix Friedrich and Robert Scholz and Michael Alexander Riegler and Jan Batzner and Eliya Habba and Arushi Saxena and Anastassia Kornilova and Kevin Wei and Prajna Soni and Yohan Mathew and Kevin Klyman and Jeba Sania and Subramanyam Sahoo and Olivia Beyer Bruvik and Pouya Sadeghi and Sujata Goswami and Angelina Wang and Yacine Jernite and Zeerak Talat and Stella Biderman and Mykel Kochenderfer and Sanmi Koyejo and Irene Solaiman},
    year={2025},
    eprint={2511.05613},
    archivePrefix={arXiv},
    primaryClass={cs.CY},
    url={https://arxiv.org/abs/2511.05613},
    note={Preprint}
}
```
**APA:**
> Reuel, A., Ghosh, A., Chim, J., Tran, A., Long, Y., Mickel, J., Gohar, U., Yadav, S., Ammanamanchi, P. S., Allaham, M., Rahmani, H. A., Akhtar, M., Friedrich, F., Scholz, R., Riegler, M. A., Batzner, J., Habba, E., Saxena, A., Kornilova, A., Wei, K., Soni, P., Mathew, Y., Klyman, K., Sania, J., Sahoo, S., Bruvik, O. B., Sadeghi, P., Goswami, S., Wang, A., Jernite, Y., Talat, Z., Biderman, S., Kochenderfer, M., Koyejo, S., & Solaiman, I. (2025). Who evaluates AI's social impacts? Mapping coverage and gaps in first and third party evaluations (arXiv:2511.05613). arXiv. https://arxiv.org/abs/2511.05613

## Dataset Card Authors
[Jenny Chim](mailto:c.chim@qmul.ac.uk)

## Dataset Card Contact
[Anka Reuel](mailto:anka.reuel@stanford.edu), [Avijit Ghosh](mailto:avijit@huggingface.co), [Jenny Chim](mailto:c.chim@qmul.ac.uk)