Update README.md
README.md
CHANGED
@@ -54,7 +54,7 @@ configs:
The `social_impact_eval_annotations` dataset contains annotations for first-party and third-party social impact evaluation reporting practices for 186 models along seven dimensions.

## Dataset Details

### Dataset Description

The `social_impact_eval_annotations` dataset comprises analyses of social impact evaluation reporting for 186 foundation models released between 2018 and 2025. Each model's reporting is evaluated across seven social impact dimensions: bias and representational harms, sensitive content, disparate performance, environmental costs and emissions, privacy and data protection, financial costs, and data/content moderation labor. Reporting is scored on a 0-3 scale indicating the depth and clarity of the reported evaluations.

- **Curated by:** EvalEval Coalition
- **Shared by:** EvalEval Coalition
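The seven-dimension, 0-3 scoring scheme described above can be sketched as a small record structure. This is a hypothetical illustration only: the field names (`model`, `reporting_party`, `scores`) and dimension keys are assumptions for the sake of the example, not the dataset's actual schema.

```python
# Illustrative sketch of one annotation record -- field names and values
# are assumptions, not the dataset's actual schema.
from statistics import mean

# The seven social impact dimensions named in the dataset description,
# each scored 0-3 for depth and clarity of reported evaluations.
DIMENSIONS = [
    "bias_and_representational_harms",
    "sensitive_content",
    "disparate_performance",
    "environmental_costs_and_emissions",
    "privacy_and_data_protection",
    "financial_costs",
    "data_content_moderation_labor",
]

# A toy record: one model's reporting, annotated along all seven dimensions.
example_record = {
    "model": "example-model",          # placeholder name
    "reporting_party": "first-party",  # or "third-party"
    "scores": {dim: 0 for dim in DIMENSIONS},
}

def mean_score(records, dimension):
    """Average 0-3 score for one dimension across a list of records."""
    return mean(r["scores"][dimension] for r in records)

# Two toy records to show a simple per-dimension aggregation.
records = [
    {**example_record, "scores": {**example_record["scores"], "sensitive_content": 3}},
    {**example_record, "scores": {**example_record["scores"], "sensitive_content": 1}},
]
print(mean_score(records, "sensitive_content"))  # -> 2
```

Any real analysis would read the records from the published dataset rather than constructing them by hand; the aggregation pattern stays the same.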
@@ -155,13 +155,12 @@ Researchers from the EvalEval Coalition created the annotations.
The dataset contains no personal information about individuals. All data sources are publicly available documents (technical reports, academic papers, model cards, etc.).

## Bias, Risks, and Limitations

This dataset may overrepresent models from prominent providers and English-language sources.

Our scoring captures the presence and specificity of reporting, but does not reflect the methodological soundness, depth, or coverage of the evaluations themselves. Missing instances in this dataset may stem from limitations in our search approach or reflect reporting gaps, rather than gaps in evaluation practice.

### Recommendations

Analyses should account for the potential overrepresentation of prominent providers and English-language sources.

Scores should be interpreted as the perceived quality of reporting practices, not as a measure of a model's actual societal impact or capabilities.

## Citation