---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- en
size_categories:
- n<1K
---
# SSH Cloze Benchmark
A Cloze-style benchmark for evaluating language models on Social Sciences and Humanities (SSH) text understanding. The benchmark measures whether a model can choose between two equivalent candidate tokens (e.g. *higher* vs. *lower*, *positive* vs. *negative*) in the context of an academic abstract, where the correct choice requires domain knowledge rather than general English fluency.
This dataset was introduced in the technical report SHARE: Social-Humanities AI for Research and Education (Gonçalves, de Jager, Knoth, Pride, & Jelicic, 2026) as the evaluation benchmark for the SHARE family of SSH-specialised language models.
## Dataset summary
- Task: Cloze-style binary token prediction in academic abstracts.
- Size: 275 examples.
- Fields (disciplines): 11 SSH fields, 25 examples each — Art, Business, Communication, Economics, Education, Geography, History, Law, Philosophy, Psychology, Sociology.
- Source: Out-of-distribution SSH abstracts published in Q1 2026, retrieved from Web of Science and ranked per discipline by citation count. Recency was required to minimise the risk of training-data contamination for models with earlier knowledge cutoffs.
- Language: English.
- Domain: Social Sciences and Humanities scholarly writing.
## Motivation
Standard LLM benchmarks such as MMLU rely on content (often high-school-level STEM) and formats (multiple choice) that are not representative of SSH scholarship, while general perplexity comparisons conflate SSH-specific competence with general English fluency. The SSH Cloze Benchmark isolates SSH-relevant prediction by focusing on tokens where the choice between two equivalent alternatives hinges on domain knowledge. For example, in "The correlation between social media use and well-being was negative," predicting *was* requires only basic English, but predicting *negative* over *positive* requires familiarity with the findings and conventions of the SSH literature.
## Data fields
Each row contains:
| Field | Description |
|---|---|
| Record | Web of Science URL for the source abstract. |
| Original abstract | Full unmodified abstract as retrieved from Web of Science. |
| Cloze abstract | Abstract rewritten/truncated so that the target token is the final (or otherwise decisive) word, making it suitable for a next-token prediction or masked-token evaluation. |
| Correct token | The token the model should prefer, grounded in the original abstract's finding. |
| Incorrect token | The equivalent distractor token (same syntactic role, opposite or alternative meaning). |
| Sign | Positive, Negative, or Neutral: the direction of the correct token's claim. Distribution: 140 Positive, 97 Negative, 38 Neutral. |
| Field | The SSH discipline the abstract belongs to (one of the 11 fields above). |
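The fields above can be combined into the two candidate completions a model is asked to compare. The sketch below is not the authors' code: it assumes the target token is the final word of the Cloze abstract (per the field description), strips it, and appends each candidate in turn. The example row is built from the sentence used in the Motivation section.

```python
# Hedged sketch (not the authors' pipeline): build the two candidate
# completions for one benchmark row. Field names follow the table above.

def candidate_prompts(row):
    """Return (correct_text, distractor_text) for one Cloze row,
    assuming the target token is the last word of the Cloze abstract."""
    stem = row["Cloze abstract"].rsplit(" ", 1)[0]  # drop the target word
    return (f"{stem} {row['Correct token']}",
            f"{stem} {row['Incorrect token']}")

example_row = {
    "Cloze abstract": "The correlation between social media use and "
                      "well-being was negative",
    "Correct token": "negative",
    "Incorrect token": "positive",
}
correct_text, distractor_text = candidate_prompts(example_row)
```

Both completions share an identical stem, so any difference in model score is attributable to the candidate token alone.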
## Construction
Candidate abstracts were retrieved with a keyword search targeting terms that lend themselves to equivalent-token framing: *positive*/*negative*, *higher*/*lower*, *greater*/*smaller*. Results were ranked by citation count within each discipline, and 25 abstracts were kept per field. Each abstract was then rewritten into a Cloze prompt ending in (or hinging on) the target token and paired with a plausible distractor from the same equivalence class. The most frequent correct tokens are *higher* (31), *lower* (26), *positive* (23), and *negative* (18), followed by a long tail of other comparative and evaluative terms.
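The keyword-filtering step can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' retrieval code; the equivalence pairs are the three named above, and the sample abstracts are invented.

```python
import re

# Hedged sketch: keep only abstracts containing a whole-word match
# for a member of one of the benchmark's equivalence pairs.
EQUIVALENCE_PAIRS = [
    ("positive", "negative"),
    ("higher", "lower"),
    ("greater", "smaller"),
]

def matching_pairs(abstract):
    """Return the equivalence pairs with at least one member in the text."""
    words = set(re.findall(r"[a-z]+", abstract.lower()))
    return [pair for pair in EQUIVALENCE_PAIRS if words & set(pair)]

candidates = [
    "Screen time shows a negative association with sleep quality.",  # kept
    "We interpret archival sources on monastic daily life.",         # dropped
]
kept = [a for a in candidates if matching_pairs(a)]
```

A real pipeline would apply this filter before the per-discipline citation ranking, since only matching abstracts can be rewritten into equivalent-token Cloze prompts.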
## Evaluation
Models are scored on their ability to assign higher probability to the correct token than to the distractor. The report uses prior-corrected accuracy to control for the fact that one token in a pair (e.g. positive effects) is often more frequent in English than its counterpart, so that models cannot achieve high scores by defaulting to the more common word.
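The report's exact correction formula is not reproduced here, but one standard way to remove a token-frequency prior (domain-conditional PMI-style scoring) subtracts each candidate's context-free log-probability before comparing, so a model gains nothing from always preferring the more common word. The sketch below uses toy numbers to show the effect; with a real model, the log-probabilities would come from scoring each candidate with and without the Cloze context.

```python
import math

def prior_corrected_choice(logp_ctx, logp_prior):
    """Pick the candidate maximising log p(token | context) - log p(token).
    Both arguments are dicts keyed by candidate token (assumed interface)."""
    return max(logp_ctx, key=lambda t: logp_ctx[t] - logp_prior[t])

# Toy numbers: "positive" is likelier both in and out of context,
# but its advantage vanishes once its higher prior is subtracted.
logp_ctx = {"positive": math.log(0.30), "negative": math.log(0.20)}
logp_prior = {"positive": math.log(0.15), "negative": math.log(0.05)}

raw_choice = max(logp_ctx, key=logp_ctx.get)            # "positive"
corrected_choice = prior_corrected_choice(logp_ctx, logp_prior)  # "negative"
```

Here the raw comparison defaults to the more frequent word, while the corrected comparison credits the token whose probability the context actually raised.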
Reported results from the technical report:
| Model | Size | Training tokens | Raw accuracy | Prior-corrected |
|---|---|---|---|---|
| Phi-4 | 14B | 9.8T | 81.8% | 81.8% |
| SHARE | 14B | 96B | 77.1% | 79.6% |
| OLMO-2 | 7B | 4T | 78.2% | 76.4% |
| OLMO-2-Step-20k | 13B | 168B | 74.9% | 73.8% |
| Phi-4 | 4B | 5T | 73.8% | 69.8% |
| SHARE | 4B | 28B | 69.8% | 66.2% |
| SSCI-SciBERT-e2 | 110M | ~1B | 66.9% | 67.6% |
| Pythia | 3B | 300B | 65.8% | 63.6% |
| SciBERT | 110M | 3B | 67.9% | 62.9% |
| Pythia | 12B | 300B | 67.3% | 61.5% |
| BERT | 110M | ~5B | 58.2% | 57.5% |
The benchmark is compatible with both causal LMs (scored on next-token logits at the Cloze position) and masked LMs (scored on the masked-token distribution).
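For the causal-LM case, the comparison reduces to a single logit lookup at the Cloze position. The sketch below is an assumed setup, not the report's code: the tiny vocabulary and logits are made up, where with a real model they would come from a forward pass over the Cloze abstract (or, for a masked LM, from the distribution at the mask position).

```python
# Hedged sketch: score one example from next-token logits at the
# Cloze position. VOCAB and the logits below are invented.
VOCAB = {"higher": 0, "lower": 1, "the": 2}

def is_correct(logits, correct_token, incorrect_token):
    """True when the correct token's logit beats the distractor's."""
    return logits[VOCAB[correct_token]] > logits[VOCAB[incorrect_token]]

logits = [2.1, 3.4, 5.0]  # pretend logits at the Cloze position
result = is_correct(logits, "lower", "higher")
```

Because only the two candidate logits are compared, tokenisation of the rest of the vocabulary is irrelevant; multi-subword candidates would instead need summed log-probabilities over their subword sequences.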
## Intended uses
- Comparing SSH-domain competence of causal and masked language models independently of general English proficiency.
- Evaluating domain-specialised pretraining recipes, particularly for social-science and humanities corpora.
- Probing for data contamination: because abstracts are drawn from Q1 2026 publications, pre-2026 models are unlikely to have seen them verbatim.
## Limitations
- Initial release. The report describes this as an initial version; the authors plan to expand the number of examples and disciplines.
- English only. All abstracts are in English, mirroring the English-centric bias of the SHARE training corpus.
- Keyword-driven selection. The requirement that abstracts contain comparative/evaluative keywords (*higher*/*lower*, *positive*/*negative*, *greater*/*smaller*) biases the benchmark toward quantitative or empirically framed SSH research and away from purely interpretive humanities writing.
- Possible LLM contamination in source abstracts. Since the abstracts are recent, some may themselves have been drafted with LLM assistance.
- Distractor design. Distractors are single equivalent tokens; the benchmark does not test open-ended generation, long-range reasoning, or argumentation.
- Prior correction is necessary. Because positive, higher, and greater dominate the correct-token distribution, raw accuracy overstates performance; the prior-corrected metric should be the headline number.
## Citation
If you use this dataset, please cite the accompanying technical report:
```bibtex
@techreport{goncalves2026share,
  title  = {SHARE: Social-Humanities AI for Research and Education},
  author = {Gon{\c{c}}alves, Jo{\~a}o and de Jager, Sonia and Knoth, Petr and Pride, David and Jelicic, Nick},
  year   = {2026},
  note   = {arXiv:2604.11152}
}
```
And the original Cloze procedure on which the task format is based:
Taylor, W. L. (1953). "Cloze procedure": A new tool for measuring readability. Journalism Quarterly, 30(4), 415–433.
## License and ethics
Abstracts are drawn from Web of Science-indexed publications. Redistribution should respect publisher terms; the dataset is intended for non-commercial research and evaluation, consistent with the Responsible AI License (RAIL) terms used by the SHARE models.