---
license: other
task_categories:
- text-classification
- text-retrieval
language:
- en
pretty_name: S2ORC Safety
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: main/*.parquet
---

# S2ORC Safety
|
|
This dataset is a filtered and enriched subset of an S2ORC computer science paper corpus, focused on AI safety and adjacent safety-relevant research.
|
|
It contains `16,806` papers selected through the following pipeline:
|
|
1. local embedding generation
2. clustering
3. GPT-5.4 mini cluster-level screening
4. GPT-5.4 mini paper-level labeling
5. a rescue relabel pass on suspicious exclusions
6. structured metadata extraction over the accepted paper set
7. filtering out `304` rows that were missing both `parsed_title` and `abstract` (a minimal sketch of this step follows the list)
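
The final filter (step 7) is simple enough to sketch. The snippet below is an illustrative reconstruction, not the exact release script; it assumes the pre-filter rows live in a hypothetical `candidates.parquet` file with `parsed_title` and `abstract` columns.

```python
import pandas as pd

# Illustrative sketch of step 7: drop rows missing both `parsed_title` and `abstract`.
# `candidates.parquet` is a hypothetical pre-filter file, not part of the release.
df = pd.read_parquet("candidates.parquet")

def is_missing(col: pd.Series) -> pd.Series:
    """Treat NaN/None and empty or whitespace-only strings as missing."""
    return col.isna() | (col.fillna("").astype(str).str.strip() == "")

drop_mask = is_missing(df["parsed_title"]) & is_missing(df["abstract"])
kept = df.loc[~drop_mask]
print(f"dropped {int(drop_mask.sum())} rows, kept {len(kept)}")
```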
|
|
## Main Files
|
|
- `main/*.parquet` (see the loading sketch after this list)
  - sharded full enriched source rows
  - extracted metadata
  - normalized GitHub repo links
  - Hugging Face code mirror links
  - normalized model / dataset / metric / scalar fields

- `metadata/*.parquet`
  - sharded metadata extraction only

- `paper_metadata_summary_normalized.json`
  - corpus-level summary statistics over the normalized metadata fields

- `code_links/*.parquet`
  - sharded paper-to-code join table with normalized GitHub URLs and HF mirror paths
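
A quick way to load the release is through the `datasets` library, using the `default` config declared in the header, or by reading the shards directly. The repository id below is an assumption inferred from the companion code mirror's naming; substitute the actual dataset id if it differs.

```python
from datasets import load_dataset

# Repo id assumed from the code mirror's naming (`AlgorithmicResearchGroup/s2orc-safety-code`);
# adjust if the dataset lives under a different id.
ds = load_dataset("AlgorithmicResearchGroup/s2orc-safety", split="train")
print(len(ds), ds.column_names[:8])

# Alternatively, after downloading the repo, read the main shards directly:
# import pandas as pd
# df = pd.read_parquet("main/")  # pyarrow engine reads every main/*.parquet shard
```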
|
|
## Contents
|
|
The main parquet shards include the following fields (a parsing sketch follows the list):
|
|
- original enriched paper fields from the source corpus
  - title, abstract, full text, sections, references, authors, venue metadata, URLs
  - extracted source-side fields like `summary`, `methods`, `results`, `models`, `datasets`, `metrics`, `limitations`, `training_details`
|
|
- metadata extraction fields
  - reproducibility:
    - `repro_steps_json`
    - `setup_requirements_json`
    - `training_or_eval_recipe_json`
    - `artifact_availability_json`
    - `code_urls_json`
    - `dataset_urls_json`
    - `model_urls_json`
  - safety taxonomy:
    - `safety_area_json`
    - `attack_or_defense_json`
    - `threat_model_json`
    - `target_system_json`
    - `harm_type_json`
  - experimental details:
    - `target_models_json`
    - `datasets_benchmarks_json`
    - `baselines_compared_json`
    - `evaluation_metrics_json`
    - `main_results_json`
    - `claimed_contributions_json`
  - practicality:
    - `compute_requirements_json`
    - `runtime_cost`
    - `human_eval_required`
    - `closed_model_dependency`
    - `deployment_readiness`
    - `replication_difficulty`
    - `extraction_confidence`
|
|
- normalized fields
  - `setup_requirements_norm_json`
  - `target_models_norm_json`
  - `datasets_benchmarks_norm_json`
  - `baselines_compared_norm_json`
  - `evaluation_metrics_norm_json`
  - `runtime_cost_norm`
  - `human_eval_required_norm`
  - `closed_model_dependency_norm`
  - `deployment_readiness_norm`
  - `replication_difficulty_norm`
|
|
- code link fields
  - `github_repo_urls_json`
  - `hf_code_paths_json`
  - `hf_code_web_urls_json`
  - `github_repo_count`
  - `hf_code_repo_count`
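
The `*_json` suffix suggests these columns hold JSON-encoded strings (consistent with the empty-array convention in the next section), so they need a decode step before use. A minimal sketch, reusing `ds` from the loading example above and tolerating values a reader may have already decoded:

```python
import json

def parse_json_field(value):
    """Decode a *_json column value; tolerate empty or already-decoded values."""
    if value is None or value == "":
        return []
    if isinstance(value, (list, dict)):
        return value  # already decoded by the reader
    return json.loads(value)

row = ds[0]  # `ds` from the loading example above
print(parse_json_field(row.get("safety_area_json")))
print(parse_json_field(row.get("target_models_norm_json")))
```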
|
|
## Missing Values
|
|
- missing list-like fields are stored as empty JSON arrays
- missing scalar categorical fields are stored as the string `"None specified"` (see the helper sketch below)
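
In practice this means list-valued columns decode to `[]` when empty and missing scalars compare equal to the literal string `"None specified"`. A small helper, building on `parse_json_field` from the parsing sketch above:

```python
NONE_SPECIFIED = "None specified"

def scalar_or_none(value):
    """Map the sentinel string used for missing scalar fields to Python None."""
    return None if value == NONE_SPECIFIED else value

# Example: `row` comes from the parsing sketch above.
runtime_cost = scalar_or_none(row.get("runtime_cost_norm"))
has_repro_steps = len(parse_json_field(row.get("repro_steps_json"))) > 0
```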
|
|
## Notes
|
|
- This is a broad-tent AI safety dataset rather than a narrow alignment-only dataset.
- The labeling and extraction steps were LLM-assisted and should be treated as high-utility annotations, not ground truth.
- Process-only columns used to build the release were removed from the published parquet files.
- The companion code mirror is published separately as `AlgorithmicResearchGroup/s2orc-safety-code`.
- Normalization is conservative: it collapses obvious duplicates like `CIFAR10` / `CIFAR-10`, `ResNet50` / `ResNet-50`, and `accuracy` / `Accuracy`, but does not try to solve full ontology matching (see the aggregation sketch below).
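
As a usage sketch for the normalized fields, the snippet below counts dataset/benchmark mentions over `datasets_benchmarks_norm_json`; it assumes the decoded entries are plain strings, which may not hold if the normalized values are structured objects.

```python
from collections import Counter

# Count normalized dataset/benchmark mentions across the corpus (illustrative).
counts = Counter()
for value in ds["datasets_benchmarks_norm_json"]:
    for name in parse_json_field(value):
        if isinstance(name, str):
            counts[name] += 1
print(counts.most_common(10))
```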
|
|