Datasets:

- Modalities: Tabular, Text
- Formats: parquet
- Languages: English
j-chim committed · Commit 938eaf3 · verified · 1 Parent(s): cc05308

Update README.md

Files changed (1): README.md (+3 −4)
```diff
@@ -54,7 +54,7 @@ configs:
 The `social_impact_eval_annotations` dataset contains annotations for first-party and third-party social impact evaluation reporting practices for 186 models along seven dimensions.
 ## Dataset Details
 ### Dataset Description
-The `social_impact_eval_annotations` dataset comprises analyzed social impact evaluation reporting for 186 foundation models released between 2018-2025. Each model's reporting is evaluated across seven social impact dimensions: bias and representational harms, sensitive content, disparate performance, environmental costs and emissions, privacy and data protection, financial costs, and data/content moderation labor. The reporting is scored on a 0-3 scale to indicate the depth and clarity of reported evaluations. The data covers first-party reports at model release time (2018-2025) and third-party evaluations from the past two years (2024-2025).
+The `social_impact_eval_annotations` dataset comprises analyzed social impact evaluation reporting for 186 foundation models released between 2018-2025. Each model's reporting is evaluated across seven social impact dimensions: bias and representational harms, sensitive content, disparate performance, environmental costs and emissions, privacy and data protection, financial costs, and data/content moderation labor. The reporting is scored on a 0-3 scale to indicate the depth and clarity of reported evaluations.
 
 - **Curated by:** EvalEval Coalition
 - **Shared by:** EvalEval Coalition
@@ -155,13 +155,12 @@ Researchers from the EvalEval Coalition created the annotations.
 The dataset contains no personal information about individuals. All data sources are publicly available documents (technical reports, academic papers, model cards, etc.).
 
 ## Bias, Risks, and Limitations
-This dataset may overrepresent models from prominent providers and English sources. Due to resource constraints, third-party sources are limited to those published 2024 onwards, which precludes a complete view of societal impact evaluations over time.
+This dataset may overrepresent models from prominent providers and English sources.
 
 Our scoring captures reporting presence and specificity, but does not reflect methodological soundness, depth, or coverage of evaluations. Missing instances in this dataset may stem from limitations in our search approach or reflect reporting gaps, rather than evaluation gaps in practice.
 
 ### Recommendations
-Analyses should consider potential overrepresentation of prominent providers and English sources. For longitudinal analyses, users should consider the asymmetric coverage of first-party versus third-party sources before drawing conclusions about reporting over time.
-
+Analyses should consider potential overrepresentation of prominent providers and English sources.
 Scores should be interpreted as perceived quality of reporting practices rather than actual model societal impact or capabilities.
 
 ## Citation
```
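The README describes annotations as 0–3 scores across seven social impact dimensions for each model. A minimal sketch of how such rows could be aggregated, assuming a hypothetical schema with one `(model, dimension, score)` record per annotation (the real parquet column names may differ):

```python
from collections import defaultdict

# Hypothetical rows mimicking the annotation schema described in the README:
# one 0-3 reporting score per (model, dimension) pair. Column names are
# illustrative assumptions, not the dataset's actual schema.
rows = [
    {"model": "model-a", "dimension": "bias and representational harms", "score": 3},
    {"model": "model-a", "dimension": "environmental costs and emissions", "score": 1},
    {"model": "model-b", "dimension": "bias and representational harms", "score": 2},
    {"model": "model-b", "dimension": "environmental costs and emissions", "score": 0},
]

# Collect scores per social impact dimension.
scores_by_dimension = defaultdict(list)
for row in rows:
    scores_by_dimension[row["dimension"]].append(row["score"])

# Mean reporting score per dimension.
mean_scores = {dim: sum(s) / len(s) for dim, s in scores_by_dimension.items()}
print(mean_scores)
```

Per the dataset's own recommendations, such averages describe perceived reporting quality, not the models' actual societal impact.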