---
license: other
task_categories:
  - text-classification
  - text-retrieval
language:
  - en
pretty_name: S2ORC Safety
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: train
        path: main/*.parquet
---

# S2ORC Safety

This dataset is a filtered and enriched subset of an S2ORC computer science paper corpus, focused on AI safety and adjacent safety-relevant research.

It contains `16,806` papers selected through:

1. local embedding generation
2. clustering
3. GPT-5.4 mini cluster-level screening
4. GPT-5.4 mini paper-level labeling
5. a rescue relabel pass on suspicious exclusions
6. structured metadata extraction over the accepted paper set
7. filtering out `304` rows that were missing both `parsed_title` and `abstract`
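
The final filtering step can be sketched with pandas on toy rows (the real rows live in `main/*.parquet`; only the two column names below come from the card):

```python
import pandas as pd

# Toy rows standing in for the enriched corpus.
df = pd.DataFrame(
    {
        "parsed_title": ["A Safety Paper", None, None],
        "abstract": ["We study...", "Abstract only, no title.", None],
    }
)

# Step 7: keep a row if it has at least one of parsed_title / abstract,
# i.e. drop rows that are missing both.
kept = df[df["parsed_title"].notna() | df["abstract"].notna()]
```

The same boolean-mask pattern applies to any pair of "must have at least one" fields.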

## Main Files

- `main/*.parquet`
  - sharded full enriched source rows
  - extracted metadata
  - normalized GitHub repo links
  - Hugging Face code mirror links
  - normalized model / dataset / metric / scalar fields

- `metadata/*.parquet`
  - sharded metadata extraction only

- `paper_metadata_summary_normalized.json`
  - corpus-level summary statistics over the normalized metadata fields

- `code_links/*.parquet`
  - sharded paper-to-code join table with normalized GitHub URLs and HF mirror paths
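
A minimal sketch of the paper-to-code join using toy frames (the join key `paper_id` is hypothetical; the card does not name the id column):

```python
import json
import pandas as pd

papers = pd.DataFrame(
    {
        "paper_id": ["p1", "p2"],
        "title": ["Adversarial Robustness Survey", "Jailbreak Benchmarks"],
    }
)
code_links = pd.DataFrame(
    {
        "paper_id": ["p1"],
        "github_repo_urls_json": [json.dumps(["https://github.com/example/repo"])],
    }
)

# Left join keeps papers that have no code links; their link column is NaN.
joined = papers.merge(code_links, on="paper_id", how="left")
```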

## Contents

The main parquet shards include:

- original enriched paper fields from the source corpus
  - title, abstract, full text, sections, references, authors, venue metadata, URLs
  - extracted source-side fields like `summary`, `methods`, `results`, `models`, `datasets`, `metrics`, `limitations`, `training_details`

- metadata extraction fields
  - reproducibility:
    - `repro_steps_json`
    - `setup_requirements_json`
    - `training_or_eval_recipe_json`
    - `artifact_availability_json`
    - `code_urls_json`
    - `dataset_urls_json`
    - `model_urls_json`
  - safety taxonomy:
    - `safety_area_json`
    - `attack_or_defense_json`
    - `threat_model_json`
    - `target_system_json`
    - `harm_type_json`
  - experimental details:
    - `target_models_json`
    - `datasets_benchmarks_json`
    - `baselines_compared_json`
    - `evaluation_metrics_json`
    - `main_results_json`
    - `claimed_contributions_json`
  - practicality:
    - `compute_requirements_json`
    - `runtime_cost`
    - `human_eval_required`
    - `closed_model_dependency`
    - `deployment_readiness`
    - `replication_difficulty`
    - `extraction_confidence`

- normalized fields
  - `setup_requirements_norm_json`
  - `target_models_norm_json`
  - `datasets_benchmarks_norm_json`
  - `baselines_compared_norm_json`
  - `evaluation_metrics_norm_json`
  - `runtime_cost_norm`
  - `human_eval_required_norm`
  - `closed_model_dependency_norm`
  - `deployment_readiness_norm`
  - `replication_difficulty_norm`

- code link fields
  - `github_repo_urls_json`
  - `hf_code_paths_json`
  - `hf_code_web_urls_json`
  - `github_repo_count`
  - `hf_code_repo_count`


## Missing Values

- missing list-like fields are stored as empty JSON arrays
- missing scalar categorical fields are stored as `"None specified"`
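
For analysis, these conventions usually need to be mapped back to real missing values; a minimal sketch of a pair of decoding helpers (the helper names are illustrative):

```python
import json

def load_list(value):
    """Decode a list-like *_json field; missing values are stored as '[]'."""
    return json.loads(value) if value else []

def load_scalar(value):
    """Map the 'None specified' sentinel used for scalar fields to None."""
    return None if value == "None specified" else value
```

Usage: `load_list("[]")` yields an empty list, and `load_scalar("None specified")` yields `None`, so downstream code can rely on ordinary truthiness checks.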

## Notes

- This is a big-tent AI safety dataset rather than a narrow alignment-only dataset.
- The labeling and extraction steps were LLM-assisted and should be treated as high-utility annotations, not ground truth.
- Process-only columns used to build the release were removed from the published parquet.
- The companion code mirror is published separately as `AlgorithmicResearchGroup/s2orc-safety-code`.
- Normalization is conservative. It collapses obvious duplicates like `CIFAR10` / `CIFAR-10`, `ResNet50` / `ResNet-50`, and `accuracy` / `Accuracy`, but does not try to solve full ontology matching.
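
The conservative collapse described above can be approximated by canonicalizing names on a lowercase, hyphen-free key (a sketch of the idea, not the exact normalization code used to build the release):

```python
def canonical_key(name: str) -> str:
    # Collapse case and hyphen variants only; no ontology matching.
    return name.replace("-", "").lower()

# CIFAR10 / CIFAR-10 share a key, as do ResNet50 / ResNet-50
# and accuracy / Accuracy.
assert canonical_key("CIFAR-10") == canonical_key("CIFAR10")
assert canonical_key("ResNet-50") == canonical_key("ResNet50")
assert canonical_key("Accuracy") == canonical_key("accuracy")
```

Distinct datasets such as `CIFAR-10` and `CIFAR-100` still map to distinct keys, which is what keeps the collapse conservative.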