Update README.md
README.md
CHANGED
---
license: odc-by
language:
- en
size_categories:
- 1K<n<10K
---

# Dataset Card for wmwm

The `wmwm` dataset contains annotations of first-party and third-party social impact evaluation reporting practices for 171 models along seven dimensions.

## Dataset Details

### Dataset Description

The `wmwm` dataset comprises analyzed social impact evaluation reporting for 171 foundation models released between 2018 and 2025. Each model's reporting is evaluated across seven social impact dimensions: bias and representational harms, sensitive content, disparate performance, environmental costs and emissions, privacy and data protection, financial costs, and data/content moderation labor. The reporting is scored on a 0-3 scale to indicate the depth and clarity of reported evaluations. The data covers first-party reports at model release time (2018-2025) and third-party evaluations from the past two years (2024-2025).

- **Curated by:** EvalEval Coalition
- **Shared by:** EvalEval Coalition
- **Language(s) (NLP):** English
- **License:** Open Data Commons Attribution License (ODC-By)

### Dataset Sources

- **Repository:** https://github.com/evaleval/wmwm_code
- **Paper:** _(Forthcoming)_

## Uses

### Direct Use

This dataset is intended for:

- Analyzing social impact evaluation reporting
- Informing the development of evaluation standards and reporting frameworks

### Out-of-Scope Use

This dataset should not be used for:

- Assessing actual model societal impact or deployment suitability – scores reflect reporting presence and detail, not the quality or adequacy of evaluations themselves

## Dataset Structure

Each row represents one evaluation instance, capturing how a specific model was evaluated on one social impact category in one source (e.g., paper, leaderboard, blog). A single model can have multiple rows, one per evaluation category per source; a minimal loading sketch follows the field list below.

### Data Fields

* `provider`: Organization that developed the model (str)
* `name`: Base model name (str)
* `size`: Model parameter count, when available (str)
* `variant`: Model variant specification (str)
* `version`: Specific model version or release identifier (str)
* `sector`: Organization sector (str)
* `openness`: Model weight accessibility (str)
* `region`: Provider headquarters region (str)
* `country`: Provider headquarters country (str)
* `source_id`: Unique identifier for the source of the evaluation report (str)
* `is_first_party`: Whether the reported evaluation was conducted by the model provider (bool)
* `category`: Social impact category identifier (int, 1-7) corresponding to the seven dimensions
* `year`: Year of the report (int)
* `metadata`: Metadata including URLs, full release dates, and other source information (dict)
* `score`: Evaluation score on a 0-3 scale (float)
* `is_model_release`: Whether the instance is from model release-time reporting (bool)
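
The snippet below is a minimal sketch of loading the dataset and inspecting these fields with the `datasets` library. The repository id (`evaleval/wmwm`) and the split name are assumptions for illustration only; substitute the actual dataset path once published.

```python
from datasets import load_dataset

# Assumed repo id and split, for illustration; adjust to the published dataset path.
ds = load_dataset("evaleval/wmwm", split="train")

# One row = one model x one social impact category x one source.
row = ds[0]
print(row["provider"], row["name"], row["category"], row["is_first_party"], row["score"])

# A single model can span multiple rows (one per category per source).
df = ds.to_pandas()
print(df.groupby(["provider", "name"]).size().sort_values(ascending=False).head())
```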

## Dataset Creation

### Curation Rationale

As foundation models become central to high-stakes AI systems, governance frameworks increasingly rely on evaluations to assess risks and capabilities. While general capability evaluations are common, social impact assessments remain fragmented, inconsistent, or absent.

This dataset was created to move beyond anecdotal evidence and provide systematic documentation of how model developers and the research community evaluate and report on societal impacts of AI systems.

### Source Data

#### Data Collection and Processing

For details, please see Section 3 in our paper.

We first compiled a list of models by triangulating across public sources (e.g., FMTI, LMArena). Next, we expanded this list with providers referenced in leaderboards and technical reports. We selected all official model releases, including those fine-tuned by the original developer but excluding community fine-tuned versions. For multimodal models, we include architecturally distinct systems that are recognized as foundation models in the literature or have widespread adoption by the research community. We disambiguate consumer-facing applications (e.g., ChatGPT) to the underlying model where possible and skip them otherwise.

For these models, we identified sources for first-party and third-party reports through complementary searches:

- **First-party**: Manual search of provider websites for papers, technical reports, model cards, system cards, blogs, and press releases
- **Third-party**: Systematic search using Paperfinder for peer-reviewed academic papers from 2024 onward
- **Leaderboards**: Targeted queries on Google Search and Hugging Face Spaces

#### Who are the source data producers?

1. First-party developers: Foundation model developers from industry, academia, government, and non-profit organizations.
2. Third-party evaluators: Independent researchers, academic institutions, and evaluation organizations that conduct and report social impact evaluations on released models.

#### Annotation process

In total, we compiled data from 204 first-party and 171 third-party sources, which form 3,669 evaluation instances. Each instance was annotated against the seven social impact dimensions using a standardized guide. Annotations were performed by individual researchers, with manual spot checks for consistency.

The social impact categories are (a sketch mapping the integer `category` ids to these names follows the list):

1. Bias, Stereotypes, and Representational Harms
2. Cultural Values and Sensitive Content
3. Disparate Performance
4. Environmental Costs and Carbon Emissions
5. Privacy and Data Protection
6. Financial Costs
7. Data and Content Moderation Labor
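
The sketch below maps the integer `category` field to the names above and illustrates a simple aggregation by reporting party. The mapping follows the numbering in this list and is an assumption about the stored encoding; `df` is the frame from the loading sketch in the Dataset Structure section.

```python
# Assumed mapping from the integer `category` field to the seven dimensions,
# following the numbering above; verify against the released data.
CATEGORY_NAMES = {
    1: "Bias, Stereotypes, and Representational Harms",
    2: "Cultural Values and Sensitive Content",
    3: "Disparate Performance",
    4: "Environmental Costs and Carbon Emissions",
    5: "Privacy and Data Protection",
    6: "Financial Costs",
    7: "Data and Content Moderation Labor",
}

# Example analysis: mean reporting-detail score per category, split into
# first-party vs. third-party sources (reuses `df` from the loading sketch above).
summary = (
    df.assign(category_name=df["category"].map(CATEGORY_NAMES))
      .groupby(["category_name", "is_first_party"])["score"]
      .mean()
      .unstack("is_first_party")
)
print(summary)
```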

The scoring criteria are:

- **0**: No mention of the category, or only generic references without evaluation details.
- **1**: Vague mention of evaluation (e.g., “We check for X” or “Our model can exhibit X”).
- **2**: Evaluation described with concrete information about methods or results (e.g., “Our model scores X% on the Y benchmark”) but lacking methodological detail.
- **3**: Evaluation methods described in sufficient detail to enable meaningful understanding and/or reproduction. Where applicable, the study design is documented (dataset, metric, experiment design, annotators), and results are contextualized with assumptions, limitations, and practical implications.

For cost-related categories (environmental and financial), we applied slightly modified criteria to account for reporting based on hardware specifications or resource usage rather than benchmark-style evaluations:

- **0**: No reporting.
- **1**: Same as above, or reported technical details (e.g., FLOPs, GPU type, runtime) that could indirectly be used to estimate costs.
- **2**: Concrete values reported for a non-trivial part of model development or hosting, but with an unclear derivation method.
- **3**: Concrete values reported together with contextual details and the derivation method.

For financial costs, we excluded first-party customer-facing pricing from consideration, as it reflects product strategy rather than system costs. Third-party cost estimates for completing specific tasks were included.

#### Who are the annotators?

Researchers from the EvalEval Coalition created the annotations.

#### Personal and Sensitive Information

The dataset contains no personal information about individuals. All data sources are publicly available documents (technical reports, academic papers, model cards, etc.).

## Bias, Risks, and Limitations

This dataset may overrepresent models from prominent providers and English-language sources. Due to resource constraints, third-party sources are limited to those published from 2024 onwards, which precludes a complete view of societal impact evaluations over time.

Our scoring captures reporting presence and specificity, but does not reflect the methodological soundness, depth, or coverage of evaluations. Missing instances in this dataset may stem from limitations in our search approach or reflect reporting gaps, rather than evaluation gaps in practice.

### Recommendations

Analyses should account for the potential overrepresentation of prominent providers and English-language sources. For longitudinal analyses, users should consider the asymmetric coverage of first-party versus third-party sources before drawing conclusions about reporting over time.

Scores should be interpreted as the perceived quality of reporting practices, not as a measure of actual model societal impact or capabilities.

## Citation

**BibTeX:**

```bibtex
@article{reuel2025social,
  title={Who Measures What Matters? An Analysis of Social Impact Evaluations in Foundation Model Reporting},
  author={Reuel, Anka and Ghosh, Avijit and Chim, Jenny and Tran, Andrew and Long, Yanan and Mickel, Jennifer and Gohar, Usman and Yadav, Srishti and Ammanamanchi, Pawan Sasanka and Allaham, Mowafak and Rahmani, Hossein A. and Akhtar, Mubashara and Friedrich, Felix and Scholz, Robert and Riegler, Michael Alexander and Batzner, Jan and Habba, Eliya and Saxena, Arushi and Kornilova, Anastassia and Wei, Kevin and Soni, Prajna and Mathew, Yohan and Klyman, Kevin and Sania, Jeba and Sahoo, Subramanyam and Bruvik, Olivia Beyer and Wang, Angelina and Goswami, Sujata and Jernite, Yacine and Talat, Zeerak and Biderman, Stella and Kochenderfer, Mykel and Koyejo, Sanmi and Solaiman, Irene},
  year={2025},
  note={Under review}
}
```

**APA:**

> Reuel, A., Ghosh, A., Chim, J., Tran, A., Long, Y., Mickel, J., Gohar, U., Yadav, S., Ammanamanchi, P. S., Allaham, M., Rahmani, H. A., Akhtar, M., Friedrich, F., Scholz, R., Riegler, M. A., Batzner, J., Habba, E., Saxena, A., Kornilova, A., Wei, K., Soni, P., Mathew, Y., Klyman, K., Sania, J., Sahoo, S., Bruvik, O. B., Wang, A., Goswami, S., Jernite, Y., Talat, Z., Biderman, S., Kochenderfer, M., Koyejo, S., & Solaiman, I. (2025). Who Measures What Matters? An Analysis of Social Impact Evaluations in Foundation Model Reporting. _Under review_.

## Dataset Card Authors

[Jenny Chim](mailto:c.chim@qmul.ac.uk)

## Dataset Card Contact

[Anka Reuel](mailto:anka.reuel@stanford.edu), [Avijit Ghosh](mailto:avijit@huggingface.co), [Jenny Chim](mailto:c.chim@qmul.ac.uk)