---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- n<1K
---

# SSH Cloze Benchmark

A Cloze-style benchmark for evaluating language models on Social Sciences and Humanities (SSH) text understanding. The benchmark measures whether a model can choose between two equivalent candidate tokens (e.g. *higher* vs. *lower*, *positive* vs. *negative*) in the context of an academic abstract, where the correct choice requires domain knowledge rather than general English fluency.

This dataset was introduced in the technical report *SHARE: Social-Humanities AI for Research and Education* (Gonçalves, de Jager, Knoth, Pride, & Jelicic, 2026) as the evaluation benchmark for the SHARE family of SSH-specialised language models.

## Dataset summary

- **Task:** Cloze-style binary token prediction in academic abstracts.
- **Size:** 275 examples.
- **Fields (disciplines):** 11 SSH fields, 25 examples each: Art, Business, Communication, Economics, Education, Geography, History, Law, Philosophy, Psychology, Sociology.
- **Source:** Out-of-distribution SSH abstracts published in Q1 2026, retrieved from Web of Science and ranked per discipline by citation count. Recency was required to minimise the risk of training-data contamination for models with earlier training cutoffs.
- **Language:** English.
- **Domain:** Social Sciences and Humanities scholarly writing.

## Motivation

Standard LLM benchmarks such as MMLU rely on content (often high-school-level STEM) and formats (multiple choice) that are not representative of SSH scholarship, and general perplexity comparisons conflate SSH-specific competence with general English fluency. The SSH Cloze Benchmark isolates SSH-relevant prediction by focusing on tokens where the choice between two equivalent alternatives hinges on domain knowledge. For example, in *"The correlation between social media use and well-being was negative,"* predicting *was* requires only basic English, but predicting *negative* over *positive* requires familiarity with the findings and conventions of SSH literature.

## Data fields

Each row contains:

| Field | Description |
| --- | --- |
| `Record` | Web of Science URL for the source abstract. |
| `Original abstract` | Full unmodified abstract as retrieved from Web of Science. |
| `Cloze abstract` | Abstract rewritten or truncated so that the target token is the final (or otherwise decisive) word, making it suitable for next-token or masked-token evaluation. |
| `Correct token` | The token the model should prefer, grounded in the original abstract's finding. |
| `Incorrect token` | The equivalent distractor token (same syntactic role, opposite or alternative meaning). |
| `Sign` | `Positive`, `Negative`, or `Neutral`: the direction of the correct token's claim. Distribution: 140 Positive, 97 Negative, 38 Neutral. |
| `Field` | The SSH discipline the abstract belongs to (one of the 11 fields above). |

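For concreteness, a single row can be sketched as a Python dict. The item below is hypothetical, not real benchmark data: the `Record` URL is a placeholder and the abstract text is adapted from the example sentence in the Motivation section. Only the field names follow the table above.

```python
# Hypothetical example row: illustrates the schema only, not a real benchmark item.
example_row = {
    "Record": "https://www.webofscience.com/…",  # placeholder, not a real record
    "Original abstract": "… The correlation between social media use and well-being was negative.",
    "Cloze abstract": "… The correlation between social media use and well-being was",
    "Correct token": "negative",
    "Incorrect token": "positive",
    "Sign": "Negative",
    "Field": "Psychology",
}

# The column names documented in the table above.
EXPECTED_FIELDS = {
    "Record", "Original abstract", "Cloze abstract",
    "Correct token", "Incorrect token", "Sign", "Field",
}
assert set(example_row) == EXPECTED_FIELDS
```
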
## Construction

Candidate abstracts were retrieved with a keyword search targeting terms that lend themselves to equivalent-token framing: *positive / negative*, *higher / lower*, *greater / smaller*. Results were ranked by citation count within each discipline, and 25 abstracts were kept per field. Each abstract was then rewritten into a Cloze prompt ending in (or hinging on) the target token, paired with a plausible distractor from the same equivalence class. The most frequent correct tokens are *higher* (31), *lower* (26), *positive* (23), and *negative* (18), followed by a long tail of other comparative and evaluative terms.

## Evaluation

Models are scored on their ability to assign higher probability to the correct token than to the distractor. The report uses **prior-corrected accuracy** to control for the fact that one token in a pair (e.g. *positive* effects) is often more frequent in English than its counterpart, so that models cannot achieve high scores by defaulting to the more common word.

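This card does not spell out the exact correction formula, so the sketch below is one plausible per-pair formulation (function and variable names are our own, not the report's): a prediction counts as correct only if the abstract shifts the model's preference toward the correct token beyond its context-free prior preference.

```python
def pair_correct(logp_correct_ctx: float, logp_incorrect_ctx: float,
                 logp_correct_prior: float, logp_incorrect_prior: float) -> bool:
    """One plausible prior correction for a Cloze pair (an assumption, not
    the report's published formula): the model is credited only if the
    context shifts its preference toward the correct token beyond whatever
    preference it already has without the abstract (e.g. from the token's
    raw frequency in English)."""
    contextual_margin = logp_correct_ctx - logp_incorrect_ctx
    prior_margin = logp_correct_prior - logp_incorrect_prior
    return contextual_margin > prior_margin

# Toy numbers: the model prefers the distractor a priori (prior margin -1.0),
# but the abstract flips its preference to the correct token (margin +0.5).
print(pair_correct(-2.0, -2.5, -3.0, -2.0))  # True: context overcomes the prior
```

Uncorrected (raw) accuracy corresponds to dropping the prior term, i.e. requiring only `contextual_margin > 0`.
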
Reported results from the technical report:

| Model | Size | Training tokens | Raw accuracy | Prior-corrected accuracy |
| --- | --- | --- | --- | --- |
| Phi-4 | 14B | 9.8T | 81.8% | 81.8% |
| SHARE | 14B | 96B | 77.1% | 79.6% |
| OLMO-2 | 7B | 4T | 78.2% | 76.4% |
| OLMO-2-Step-20k | 13B | 168B | 74.9% | 73.8% |
| Phi-4 | 4B | 5T | 73.8% | 69.8% |
| SHARE | 4B | 28B | 69.8% | 66.2% |
| SSCI-SciBERT-e2 | 110M | ~1B | 66.9% | 67.6% |
| Pythia | 3B | 300B | 65.8% | 63.6% |
| SciBERT | 110M | 3B | 67.9% | 62.9% |
| Pythia | 12B | 300B | 67.3% | 61.5% |
| BERT | 110M | ~5B | 58.2% | 57.5% |

The benchmark is compatible with both causal LMs (scored on next-token logits at the Cloze position) and masked LMs (scored on the masked-token distribution).

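A minimal harness sketch along these lines (names and structure are ours, not the report's): a `logprob(context, token)` callable abstracts over both model types, so the same loop scores causal and masked LMs.

```python
from typing import Callable

def raw_accuracy(items: list, logprob: Callable[[str, str], float]) -> float:
    """Fraction of items where the model assigns a higher log-probability
    to the correct token than to the distractor, given the Cloze abstract.
    `logprob(context, token)` hides the model type: for a causal LM it
    would score the candidate at the Cloze position from next-token
    logits; for a masked LM, from the masked-token distribution."""
    hits = sum(
        logprob(item["Cloze abstract"], item["Correct token"])
        > logprob(item["Cloze abstract"], item["Incorrect token"])
        for item in items
    )
    return hits / len(items)

# Usage with a stub scorer standing in for a real model:
items = [
    {"Cloze abstract": "… well-being was", "Correct token": "negative",
     "Incorrect token": "positive"},
]
stub = lambda ctx, tok: {"negative": -2.0, "positive": -2.5}[tok]
print(raw_accuracy(items, stub))  # 1.0
```

With a real model, `logprob` would also be queried on a neutral context to obtain the prior terms needed for prior-corrected accuracy.
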
## Intended uses

- Comparing SSH-domain competence of causal and masked language models independently of general English proficiency.
- Evaluating domain-specialised pretraining recipes, particularly for social-science and humanities corpora.
- Probing for data contamination: because the abstracts are drawn from Q1 2026 publications, pre-2026 models are unlikely to have seen them verbatim.

## Limitations

- **Initial release.** The report describes this as an initial version; the authors plan to expand the number of examples and disciplines.
- **English only.** All abstracts are in English, mirroring the English-centric bias of the SHARE training corpus.
- **Keyword-driven selection.** The requirement that abstracts contain comparative or evaluative keywords (*higher/lower*, *positive/negative*, *greater/smaller*) biases the benchmark toward quantitative or empirically framed SSH research and away from purely interpretive humanities writing.
- **Possible LLM contamination in source abstracts.** Because the abstracts are recent, some may themselves have been drafted with LLM assistance.
- **Distractor design.** Distractors are single equivalent tokens; the benchmark does not test open-ended generation, long-range reasoning, or argumentation.
- **Prior correction is necessary.** Because *positive*, *higher*, and *greater* dominate the correct-token distribution, raw accuracy overstates performance; the prior-corrected metric should be the headline number.

## Citation

If you use this dataset, please cite the accompanying technical report:

```
@techreport{goncalves2026share,
  title  = {SHARE: Social-Humanities AI for Research and Education},
  author = {Gon{\c{c}}alves, Jo{\~a}o and de Jager, Sonia and Knoth, Petr and Pride, David and Jelicic, Nick},
  year   = {2026},
  note   = {arXiv:2604.11152}
}
```

And the original Cloze procedure:

```
Taylor, W. L. (1953). "Cloze procedure": A new tool for measuring readability. Journalism Quarterly, 30(4), 415–433.
```

## License and ethics

The dataset is released under CC BY-NC 4.0. Abstracts are drawn from Web of Science-indexed publications, and redistribution should respect publisher terms. The dataset is intended for non-commercial research and evaluation, consistent with the Responsible AI License (RAIL) terms used by the SHARE models.