---
license: mit
configs:
- config_name: default
data_files:
- split: ratio1_v1
path: ratio1_v1.csv
- split: ratio1_v2
path: ratio1_v2.csv
- split: ratio2_v1
    path: ratio2_v1.csv
- split: ratio3_v1
    path: ratio3_v1.csv
- split: ratio6_v2
path: ratio6_v2.csv
- split: ratio10_v1
path: ratio10_v1.csv
- split: ratio30_v1
path: ratio30_v1.csv
- split: ratio50_v1
path: ratio50_v1.csv
- split: test
path: test.csv
---
This dataset is composed of Claude-labelled [fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb/) documents.
For each document, Claude is asked whether it is 'forecastable' (i.e. whether it would be a reasonable seed for a pastcasting question) and to estimate its publication date.
The v1 splits were generated by having Claude label ~50K random fineweb documents. The v2 splits add labels on ~30K additional documents that a DeBERTaV3 classifier finetuned on ratio10_v1 predicted to be forecastable (Claude judged roughly a third of these additional documents forecastable).
Splits are named `ratio{negative_to_positive_ratio}_v{1 or 2}`. For example, `ratio6_v2` contains ~6 negative examples for each positive example. Splits overlap with one another.
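A ratio split can be constructed along these lines (a minimal sketch, not the actual generation code; the `forecastable` column name is an assumption):

```python
import pandas as pd


def make_ratio_split(df: pd.DataFrame, ratio: int, seed: int = 0) -> pd.DataFrame:
    """Subsample negatives so there are ~`ratio` negatives per positive.

    Assumes a boolean `forecastable` label column; illustrative only.
    """
    pos = df[df["forecastable"]]
    neg = df[~df["forecastable"]]
    n_neg = min(len(neg), ratio * len(pos))  # cap at available negatives
    neg_sample = neg.sample(n=n_neg, random_state=seed)
    # Shuffle positives and sampled negatives together
    return (
        pd.concat([pos, neg_sample])
        .sample(frac=1, random_state=seed)
        .reset_index(drop=True)
    )


# Toy example: 2 positives and 20 negatives; ratio=6 keeps 2 positives + 12 negatives
df = pd.DataFrame(
    {
        "forecastable": [True] * 2 + [False] * 20,
        "text": [f"doc{i}" for i in range(22)],
    }
)
split = make_ratio_split(df, ratio=6)
```

Because negatives are sampled from the same labeled pool for every ratio, splits built this way naturally overlap.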
`ratio6_v2` was used to train [noanabeshima/forecastability-classifier-v1](https://huggingface.co/noanabeshima/forecastability-classifier-v1).
The prompt can be found in `prompt.txt`. It was iterated on slightly using a small set of ground-truth human labels. GPT-4.1 performed slightly better, but Claude had a more favorable TOS for open-sourcing data and models.
Claude considers ~2% of fineweb documents to be forecastable.
Made with the help of Collin Gray.