---
license: cc-by-4.0
language: en
task_categories:
- text-classification
- question-answering
- text-retrieval
---
# DefExtra
<p align="center">
<a href="https://arxiv.org/abs/2602.05413"><img src="https://img.shields.io/badge/arXiv-2602.05413-b31b1b" alt="arXiv:2602.05413"></a>
<a href="https://sigir.org/"><img src="https://img.shields.io/badge/SIGIR%202026-under%20review-0054a6" alt="SIGIR 2026 under review"></a>
<a href="https://huggingface.co/datasets/mediabiasgroup/DefExtra"><img src="https://img.shields.io/badge/HF%20Dataset-DefExtra-ff9d00" alt="HF Dataset DefExtra"></a>
<a href="https://huggingface.co/datasets/mediabiasgroup/DefSim"><img src="https://img.shields.io/badge/HF%20Dataset-DefSim-ff9d00" alt="HF Dataset DefSim"></a>
<a href="https://media-bias-group.github.io/SciDef-ProjectPage/"><img src="https://img.shields.io/badge/Project%20Page-SciDef-2e7d32" alt="SciDef Project Page"></a>
<a href="https://doi.org/10.5281/zenodo.18501198"><img src="https://img.shields.io/badge/Zenodo-10.5281%2Fzenodo.18501198-1682D4?logo=zenodo" alt="Zenodo DOI: 10.5281/zenodo.18501198"></a>
<a href="https://github.com/Media-Bias-Group/SciDef"><img src="https://img.shields.io/badge/Code-GitHub-181717?logo=github" alt="Code on GitHub"></a>
</p>
## Overview
DefExtra contains 268 definition records (term, definition, context, type) drawn from 75 papers. **We do not ship text excerpts from the papers**, since we cannot redistribute copyrighted material. Instead, the dataset contains only **localization markers**, together with scripts that reconstruct ("hydrate") the full text from user‑supplied PDFs.
## Examples (from our own papers; after hydration)
| Source | Concept | Definition | Context (excerpt) |
| --- | --- | --- | --- |
| [https://aclanthology.org/2024.lrec-main.952](https://aclanthology.org/2024.lrec-main.952) | `media bias` | “a skewed portrayal of information favoring certain group interests, which manifests in multiple facets, including political, gender, racial, and linguistic biases.” | “Media bias is a skewed portrayal of information favoring certain group interests … Such subtypes of bias … make the classification of media bias a challenging task.” |
| [https://arxiv.org/abs/2312.16148](https://arxiv.org/abs/2312.16148) | `spin bias` | “a form of bias introduced either by leaving out necessary information or by adding unnecessary information.” | “Spin Bias describes a form of bias introduced either by leaving out necessary information … or by adding unnecessary information.” |
## Quickstart (DefExtra hydration)
1) Put PDFs in `pdfs/` (each filename should match the record's `paper_id`, a DOI/PII alias, or an arXiv ID).
2) Start a GROBID server (see `docs/defextra_hydration.md`).
3) Hydrate:
```bash
uv run python scripts/hydrate_defextra.py \
--legal-csv data/defextra_legal.csv \
--pdf-dir pdfs \
--grobid-out grobid_out \
--output-csv defextra_hydrated.csv \
--report defextra_hydrated_report.txt \
--require-complete
```
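Once hydration finishes, it can be useful to double-check that no record was left empty. A minimal sketch (the sample CSV below is illustrative; the real input is `defextra_hydrated.csv`, using the column names documented under "Hydrated columns"):

```python
import csv
import io

def unhydrated_rows(csv_text: str) -> list[str]:
    """Return the concepts whose definition or context is still empty."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        row["concept"]
        for row in reader
        if not row.get("definition", "").strip() or not row.get("context", "").strip()
    ]

# Illustrative sample standing in for defextra_hydrated.csv
sample = (
    "concept,definition,context\n"
    "media bias,a skewed portrayal of information,Media bias is ...\n"
    "spin bias,,\n"
)
print(unhydrated_rows(sample))  # → ['spin bias']
```

With `--require-complete`, the hydration script itself fails on incomplete output; a check like this is only a convenience for inspecting partial runs.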
## Getting PDFs
- See `docs/get_pdfs.md` for sources and a helper script that lists required PDFs.
- `defextra_required_pdfs.csv` and `defextra_required_pdfs.md` are precomputed lists.
## Environment (uv)
- This repo ships a `pyproject.toml` with all dependencies.
- Run any script with `uv run python ...` and uv will resolve/install deps.
## Data files
- `data/defextra_legal.csv` / `data/defextra_legal.parquet`: DefExtra markers (no excerpts).
## Hydrated columns
The hydrated output (e.g., `defextra_hydrated.csv`) matches the schema below.
Full legal marker columns are documented in `docs/defextra_hydration.md`.
| Column | Description |
| --- | --- |
| `paper_id` | Paper identifier (often a Semantic Scholar ID, DOI, or arXiv ID). |
| `paper_title` | Paper title. |
| `paper_doi` | DOI (if available). |
| `paper_arxiv` | arXiv ID or URL (if available). |
| **`concept`** | Term / concept being defined. |
| **`definition`** | Definition text (hydrated from PDFs). |
| **`context`** | Context excerpt (hydrated from PDFs). |
| **`definition_type`** | Definition type (e.g., explicit / implicit). |
| `source_file` | Source JSON filename used during curation. |
| **`is_out_of_domain`** | Boolean flag for out‑of‑domain papers. |
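As a quick illustration of working with this schema, the sketch below filters out rows flagged `is_out_of_domain` (the sample rows are made up; the real input is the hydrated CSV):

```python
import csv
import io

def in_domain(csv_text: str) -> list[dict]:
    """Keep only rows not flagged as out-of-domain."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["is_out_of_domain"].lower() != "true"]

# Illustrative sample rows following the hydrated schema
sample = (
    "concept,definition_type,is_out_of_domain\n"
    "media bias,explicit,False\n"
    "gradient descent,explicit,True\n"
)
print([row["concept"] for row in in_domain(sample)])  # → ['media bias']
```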
## Scripts
- `scripts/hydrate_defextra.py`: hydrate DefExtra from PDFs + GROBID.
- `scripts/pdf_to_grobid.py`: batch GROBID runner (requires a running GROBID server).
- `scripts/list_defextra_pdfs.py`: list required PDFs + download links.
- `scripts/build_defextra_test_pdfs.py`: build a test PDF set from a larger PDF pool.
- `scripts/report_defextra_status.py`: summarize missing items by paper/definition.
## Documentation
- [`docs/defextra_hydration.md`](docs/defextra_hydration.md) (technical details, CLI flags, markers).
- [`docs/get_pdfs.md`](docs/get_pdfs.md) (how to find PDFs).
- [`docs/mismatch_examples.md`](docs/mismatch_examples.md) (mismatch types with short excerpts).
## Expected minor mismatches
- Small differences vs. the manual reference can occur due to PDF/GROBID text normalization.
- Typical cases: line‑break hyphenation, spacing around numbers, citation formatting.
- These are documented and do not affect the ability to hydrate all entries.
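If you compare hydrated text against your own reference programmatically, normalizing both sides first absorbs these mismatch classes. A rough sketch (these normalization rules are our illustration, not the scripts' exact implementation):

```python
import re

def normalize(text: str) -> str:
    """Collapse common PDF-extraction artifacts before comparing strings."""
    text = re.sub(r"(\w)-\s+(\w)", r"\1\2", text)  # rejoin line-break hyphenation
    text = re.sub(r"\s+", " ", text)               # collapse whitespace runs
    return text.strip()

a = "classifi- cation of media  bias"
b = "classification of media bias"
print(normalize(a) == normalize(b))  # → True
```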
## Notes
- Hash IDs are typically Semantic Scholar paper IDs; many PDFs can be obtained from Semantic Scholar.
- If you see PDF hash mismatch warnings, verify you have the correct paper version and rerun with `--allow-pdf-hash-mismatch` only after manual inspection.
- The hydration script was largely developed with LLM assistance, with an emphasis on robustness.
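Before rerunning after a hash mismatch warning, you can compute a candidate PDF's digest yourself and compare it with the expected value from the markers file. A sketch assuming SHA-256 (verify the actual algorithm and expected-hash column in `docs/defextra_hydration.md`):

```python
import hashlib
from pathlib import Path

def pdf_sha256(path: Path) -> str:
    """Hex SHA-256 digest of a file, read in 64 KiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage: compare against the expected hash recorded in the markers file, e.g.
# if pdf_sha256(Path("pdfs/2312.16148.pdf")) != expected_hash: inspect manually
```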
## Citation
```bibtex
@misc{kucera2026scidefautomatingdefinitionextraction,
title={SciDef: Automating Definition Extraction from Academic Literature with Large Language Models},
author={Filip Ku\v{c}era and Christoph Mandl and Isao Echizen and Radu Timofte and Timo Spinde},
year={2026},
eprint={2602.05413},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2602.05413},
}
```