Upload folder using huggingface_hub
- README.md +51 -3
- config.json +6 -0
- data/defextra_legal.csv +0 -0
- data/defextra_legal.parquet +3 -0
- docs/defextra_hydration.md +89 -0
- docs/get_pdfs.md +29 -0
- scripts/__init__.py +1 -0
- scripts/__pycache__/__init__.cpython-314.pyc +0 -0
- scripts/__pycache__/defextra_markers.cpython-314.pyc +0 -0
- scripts/__pycache__/defextra_pdf_aliases.cpython-314.pyc +0 -0
- scripts/build_defextra_test_pdfs.py +257 -0
- scripts/compare_defextra_csvs.py +327 -0
- scripts/defextra_markers.py +560 -0
- scripts/defextra_pdf_aliases.py +46 -0
- scripts/hydrate_defextra.py +1760 -0
- scripts/list_defextra_pdfs.py +197 -0
- scripts/pdf_to_grobid.py +213 -0
- scripts/prepare_defextra_legal.py +877 -0
- scripts/report_defextra_status.py +249 -0
README.md
CHANGED
@@ -1,3 +1,51 @@
---
license: cc-by-4.0
language: en
task_categories:
- text-classification
- question-answering
---

# DefExtra

DefExtra contains 268 definition records (term, definition, context, type) drawn from 75 papers. **We do not ship excerpts from the papers** due to copyright. Instead, we ship legal markers and scripts that let users hydrate the dataset from their own PDFs.

## Why this workflow
- We cannot redistribute copyrighted excerpts.
- We therefore ship **only localization markers**, plus scripts that reconstruct the text from user-supplied PDFs.

## Quickstart (DefExtra hydration)
1) Put PDFs in `pdfs/` (filenames should match `paper_id`, a DOI/PII alias, or an arXiv ID).
2) Start a GROBID server (see `docs/defextra_hydration.md`).
3) Hydrate:
```sh
python scripts/hydrate_defextra.py \
  --legal-csv data/defextra_legal.csv \
  --pdf-dir pdfs \
  --grobid-out grobid_out \
  --output-csv defextra_hydrated.csv \
  --report defextra_hydrated_report.txt \
  --require-complete
```

## Getting PDFs
- See [`docs/get_pdfs.md`](docs/get_pdfs.md) for sources and a helper script that lists the required PDFs.

## Data files
- `data/defextra_legal.csv` / `data/defextra_legal.parquet`: DefExtra markers (no excerpts).

## Scripts
- `scripts/hydrate_defextra.py`: hydrate DefExtra from PDFs + GROBID.
- `scripts/pdf_to_grobid.py`: batch GROBID runner (requires a running GROBID server).
- `scripts/list_defextra_pdfs.py`: list required PDFs and download links.
- `scripts/build_defextra_test_pdfs.py`: build a test PDF set from a larger PDF pool.
- `scripts/report_defextra_status.py`: summarize missing items by paper/definition.
- `scripts/compare_defextra_csvs.py`: compare hydrated output to a reference.

## Documentation
- `docs/defextra_hydration.md` (technical details, CLI flags, markers).
- `docs/get_pdfs.md` (how to find PDFs).

## Notes
- Hash IDs are typically Semantic Scholar paper IDs; many PDFs can be obtained from Semantic Scholar.
- If you see PDF hash mismatch warnings, verify that you have the correct paper version, and rerun with `--allow-pdf-hash-mismatch` only after manual inspection.
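Once hydration succeeds, `defextra_hydrated.csv` is an ordinary CSV file. A minimal loading sketch (the exact header row depends on the hydration output; the column names used in the test are assumptions):

```python
import csv
from pathlib import Path

def load_hydrated(path: str) -> list[dict[str, str]]:
    """Read a hydrated DefExtra CSV into a list of row dicts."""
    with Path(path).open(encoding="utf-8", newline="") as handle:
        return list(csv.DictReader(handle))
```

For example, `rows = load_hydrated("defextra_hydrated.csv")` gives one dict per definition record.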
config.json
ADDED
@@ -0,0 +1,6 @@
{
    "grobid_server": "http://localhost:8070",
    "grobid_port": 8070,
    "batch_size": 1000,
    "sleep_time": 5
}
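How these keys are consumed is up to `scripts/pdf_to_grobid.py`; a minimal sketch of reading this config with stdlib `json`, falling back to the values shown above (the merge-with-defaults behavior is an assumption, not necessarily what the shipped script does):

```python
import json
from pathlib import Path

def load_grobid_config(path: str = "config.json") -> dict:
    """Read GROBID client settings, falling back to documented defaults."""
    defaults = {
        "grobid_server": "http://localhost:8070",
        "grobid_port": 8070,
        "batch_size": 1000,
        "sleep_time": 5,
    }
    cfg_path = Path(path)
    if cfg_path.exists():
        # User-supplied keys override the defaults.
        defaults.update(json.loads(cfg_path.read_text(encoding="utf-8")))
    return defaults
```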
data/defextra_legal.csv
ADDED
The diff for this file is too large to render.
data/defextra_legal.parquet
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e43b04d733afd5acf066344c0d9364fb924b4959da4384e0b3eb2e04467563a1
size 300352
docs/defextra_hydration.md
ADDED
|
@@ -0,0 +1,89 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# DefExtra hydration (technical)
|
| 2 |
+
|
| 3 |
+
This dataset cannot ship excerpts. We ship markers only. Users supply PDFs and run hydration to reconstruct the definitions and contexts.
|
| 4 |
+
|
| 5 |
+
```mermaid
|
| 6 |
+
flowchart LR
|
| 7 |
+
A[data/defextra_legal.csv] --> B[hydrate_defextra.py]
|
| 8 |
+
B --> C[defextra_hydrated.csv]
|
| 9 |
+
subgraph GROBID
|
| 10 |
+
D[pdf_to_grobid.py] --> E[*.grobid.tei.xml]
|
| 11 |
+
end
|
| 12 |
+
B --> E
|
| 13 |
+
```
|
| 14 |
+
|
## Requirements
- Python 3.10+
- A running GROBID server (default: `http://localhost:8070`). Example Docker run:
  ```sh
  docker run --rm -p 8070:8070 grobid/grobid:0.8.0
  ```
- Python packages used by the scripts:
  - `grobid-client-python`, `lxml`, `pdfplumber`, `pdfminer.six`, `PyPDF2`, `pyarrow`
## Getting PDFs
See `docs/get_pdfs.md` for sources and a helper script that lists the required PDFs.

## Quickstart
1) Place your PDFs in `pdfs/`.
2) Hydrate (runs GROBID unless `--skip-grobid` is passed):
```sh
python scripts/hydrate_defextra.py \
  --legal-csv data/defextra_legal.csv \
  --pdf-dir pdfs \
  --grobid-out grobid_out \
  --output-csv defextra_hydrated.csv \
  --report defextra_hydrated_report.txt \
  --require-complete
```
## File naming expectations
- Preferred: `<paper_id>.pdf` (the `paper_id` in the CSV).
- DOI/arXiv/PII aliases are supported for common cases.
- Hash IDs are often Semantic Scholar IDs; many PDFs can be downloaded from there.
## GROBID configuration
- By default, `scripts/pdf_to_grobid.py` generates a temporary config that points to `http://localhost:8070`.
- Use `--grobid-config config.json` to override the URL/timeout/batch size.

## Hash mismatch behavior
- The legal CSV includes hash markers to verify that the PDF content matches the expected paper version.
- Default: **skip** PDFs with hash mismatches and report them.
- Optional: `--allow-pdf-hash-mismatch` continues anyway (use only after manual inspection).
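As an illustration of the verification idea only (the real normalization and hash inputs are defined in `scripts/defextra_markers.py` and may differ from this sketch):

```python
import hashlib

def check_span(expected_sha256: str, extracted_text: str) -> bool:
    """Compare a recomputed hash of extracted text against a stored marker.

    Sketch assumptions: whitespace-collapsed UTF-8 text hashed with
    SHA-256; the shipped marker scheme may normalize differently.
    """
    normalized = " ".join(extracted_text.split())
    digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
    return digest == expected_sha256
```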
## Legal CSV schema (marker columns)
- `definition_char_start`, `definition_char_end`: exact TEI substring span (optional).
- `definition_match`: `exact`/`hash`/`missing`, indicating how the TEI span was found.
- `definition_hash64`, `definition_sha256`, `definition_token_count`: full-span hashes and length.
- `definition_head_*`, `definition_mid_*`, `definition_tail_*`: anchor hashes (window size 12).
- `definition_head_alt_*`, `definition_mid_alt_*`, `definition_tail_alt_*`: anchor hashes (window size 6).
- The same fields exist for `context_*`.
- `definition_preserve_linebreaks`, `context_preserve_linebreaks`: keep line breaks when present in the manual copy (tables).
- `definition_preserve_hyphenation`, `context_preserve_hyphenation`: preserve line-break hyphenation when present.
- `definition_has_bracket_citation`, `context_has_bracket_citation`: whether the manual copy included `[12]`-style citations.
- `definition_has_paren_citation`, `context_has_paren_citation`: whether the manual copy included `(Author, 2019)`-style citations.
- `definition_has_letter_digit`, `context_has_letter_digit`: whether the manual copy contained letter-digit runs like `bias4`.
- `definition_end_punct`, `context_end_punct`: trailing punctuation marker captured from the manual copy.
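To make the anchor idea concrete, here is a sketch of locating a span by its head anchor (the tokenization and hash choice are assumptions; the real implementations live in `scripts/defextra_markers.py`):

```python
import hashlib

WINDOW = 12  # matches the documented head/mid/tail window size

def anchor_hash(tokens: list[str]) -> str:
    """Hash one anchor window of tokens (illustrative hash choice)."""
    return hashlib.sha256(" ".join(tokens).encode("utf-8")).hexdigest()

def find_by_head_anchor(tei_text: str, head_hash: str) -> int:
    """Return the token offset where the head anchor matches, or -1."""
    tokens = tei_text.split()
    for start in range(len(tokens) - WINDOW + 1):
        if anchor_hash(tokens[start:start + WINDOW]) == head_hash:
            return start
    return -1
```

With matching head and tail offsets, hydration can cut the span out of the TEI text even when the exact character offsets are stale.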
## Comparing to the original (optional)
If you maintain a private reference file with excerpts, you can compare:
```sh
python scripts/compare_defextra_csvs.py \
  --postprocess \
  --legal data/defextra_legal.csv \
  --ref defextra_reference.csv \
  --hyd defextra_hydrated.csv \
  --report defextra_diff_report.txt \
  --report-limit 0
```

Notes
- Small mismatches are expected due to PDF/GROBID text normalization.
- Missing exact TEI spans do **not** block hydration; hash/anchor markers are used as a fallback.
docs/get_pdfs.md
ADDED
@@ -0,0 +1,29 @@
# Getting PDFs for DefExtra

You must supply PDFs yourself. The dataset does **not** ship any copyrighted excerpts.

## Recommended sources
- **Semantic Scholar**: most `paper_id` values are Semantic Scholar IDs. You can open `https://www.semanticscholar.org/paper/<paper_id>` and download the PDF when available.
- **DOI landing pages**: use `https://doi.org/<paper_doi>` to locate the publisher PDF.
- **arXiv**: use `https://arxiv.org/abs/<paper_arxiv>`.
- **ACL Anthology**: if a `paper_id` looks like `2024.lrec-main.952`, use `https://aclanthology.org/2024.lrec-main.952`.

## Helper script
Generate a CSV of required PDFs and links:
```sh
python scripts/list_defextra_pdfs.py \
  --legal-csv data/defextra_legal.csv \
  --output-csv defextra_required_pdfs.csv \
  --output-md defextra_required_pdfs.md
```

The output includes:
- `preferred_pdf_name` (the filename we recommend you use)
- `alias_pdf_names` (acceptable filename aliases)
- `url_semanticscholar`, `url_doi`, `url_arxiv`, `url_acl`

## File naming
Place PDFs in one folder and name them `<paper_id>.pdf` when possible.
DOI/arXiv/PII aliases are supported (see `alias_pdf_names` in the helper output).
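A quick way to see which required PDFs are still missing from your folder, using the `preferred_pdf_name` column described above (a sketch, not part of the shipped tooling; it ignores the alias columns):

```python
import csv
from pathlib import Path

def missing_pdfs(required_csv: str, pdf_dir: str) -> list[str]:
    """List preferred PDF names from the helper output absent on disk."""
    have = {p.name.lower() for p in Path(pdf_dir).glob("*.pdf")}
    missing = []
    with open(required_csv, encoding="utf-8", newline="") as handle:
        for row in csv.DictReader(handle):
            name = (row.get("preferred_pdf_name") or "").strip()
            if name and name.lower() not in have:
                missing.append(name)
    return missing
```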
scripts/__init__.py
ADDED
@@ -0,0 +1 @@
"""Utility package for dataset scripts."""
scripts/__pycache__/__init__.cpython-314.pyc
ADDED
Binary file (249 Bytes).
scripts/__pycache__/defextra_markers.cpython-314.pyc
ADDED
Binary file (28.6 kB).
scripts/__pycache__/defextra_pdf_aliases.cpython-314.pyc
ADDED
Binary file (1.66 kB).
scripts/build_defextra_test_pdfs.py
ADDED
@@ -0,0 +1,257 @@
from __future__ import annotations

# ruff: noqa: E402

import argparse
import csv
import os
import shutil
import sys
import re
from pathlib import Path

try:
    from scripts.defextra_markers import (
        doi_suffix,
        normalize_arxiv,
        normalize_doi,
        normalize_paper_id,
    )
    from scripts.defextra_pdf_aliases import candidate_pdf_aliases
except ModuleNotFoundError as exc:
    if exc.name != "scripts":
        raise
    PROJECT_ROOT = Path(__file__).resolve().parent.parent
    if str(PROJECT_ROOT) not in sys.path:
        sys.path.insert(0, str(PROJECT_ROOT))
    from scripts.defextra_markers import (
        doi_suffix,
        normalize_arxiv,
        normalize_doi,
        normalize_paper_id,
    )
    from scripts.defextra_pdf_aliases import candidate_pdf_aliases


def _build_pdf_index(
    source_dirs: list[Path],
    *,
    skip_dir: Path | None = None,
) -> dict[str, Path]:
    index: dict[str, Path] = {}
    version_re = re.compile(r"^(?P<base>.+?)(v\d+)$", re.IGNORECASE)
    arxiv_re = re.compile(r"^(?P<base>\d{4}\.\d{4,5})v\d+$", re.IGNORECASE)
    pii_re = re.compile(r"(S\d{8,})", re.IGNORECASE)
    for source_dir in source_dirs:
        if not source_dir.exists():
            continue
        for suffix in ("*.pdf", "*.PDF"):
            for path in source_dir.rglob(suffix):
                if skip_dir is not None:
                    try:
                        path_abs = path.resolve()
                    except (OSError, RuntimeError):
                        path_abs = path.absolute()
                    if skip_dir == path_abs or skip_dir in path_abs.parents:
                        continue
                stem = path.stem
                for key in {stem, stem.lower()}:
                    index.setdefault(key, path)
                if stem.startswith("paper_"):
                    stripped = stem[len("paper_") :]
                    if stripped:
                        index.setdefault(stripped, path)
                        index.setdefault(stripped.lower(), path)
                if stem.endswith("_fixed") or stem.endswith("-fixed"):
                    base = stem[: -len("_fixed")] if stem.endswith("_fixed") else stem[: -len("-fixed")]
                    if base:
                        index[base] = path
                        index[base.lower()] = path
                        if base.startswith("paper_"):
                            stripped_base = base[len("paper_") :]
                            if stripped_base:
                                index[stripped_base] = path
                                index[stripped_base.lower()] = path
                match = arxiv_re.match(stem)
                if match:
                    base = match.group("base")
                    index.setdefault(base, path)
                    index.setdefault(base.lower(), path)
                match = version_re.match(stem)
                if match:
                    base = match.group("base")
                    index.setdefault(base, path)
                    index.setdefault(base.lower(), path)
                pii_match = pii_re.search(stem)
                if pii_match:
                    pii = pii_match.group(1)
                    index.setdefault(pii, path)
                    index.setdefault(pii.lower(), path)
    return index


def _candidate_ids(paper_id: str, doi: str, arxiv: str) -> list[str]:
    candidates = []
    if paper_id:
        candidates.append(paper_id)
        candidates.append(normalize_paper_id(paper_id))
    if doi:
        norm_doi = normalize_doi(doi)
        candidates.append(norm_doi)
        candidates.append(doi_suffix(norm_doi))
    if arxiv:
        norm_arxiv = normalize_arxiv(arxiv)
        candidates.append(norm_arxiv)
    ordered: list[str] = []
    seen = set()
    for item in candidates:
        value = (item or "").strip()
        if not value:
            continue
        if value not in seen:
            seen.add(value)
            ordered.append(value)
    for alias in candidate_pdf_aliases(paper_id, doi, arxiv):
        value = (alias or "").strip()
        if not value:
            continue
        if value not in seen:
            seen.add(value)
            ordered.append(value)
    return ordered


def _normalize_title(title: str) -> str:
    return " ".join(title.lower().split())


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Build a test-only PDF folder for DefExtra hydration.",
    )
    parser.add_argument(
        "--legal-csv",
        type=Path,
        default=Path("results/paper_results/defextra_legal.csv"),
        help="Legal DefExtra CSV with paper_ids.",
    )
    parser.add_argument(
        "--source-dir",
        action="append",
        type=Path,
        default=[
            Path("ManualPDFsGROBID"),
            Path("ManualPDFsGROBID/manual_pdfs/manual_pdfs"),
        ],
        help="Source PDF directory (can be repeated).",
    )
    parser.add_argument(
        "--output-dir",
        type=Path,
        default=Path("ManualPDFsGROBID/test_pdfs"),
        help="Output directory for test PDFs.",
    )
    parser.add_argument(
        "--mode",
        choices=("symlink", "copy"),
        default="symlink",
        help="Whether to symlink or copy PDFs into the output dir.",
    )
    parser.add_argument(
        "--report",
        type=Path,
        default=None,
        help="Optional report file listing missing PDFs.",
    )
    args = parser.parse_args()

    if not args.legal_csv.exists():
        raise SystemExit(f"Legal CSV not found: {args.legal_csv}")

    pdf_index = _build_pdf_index(
        args.source_dir,
        skip_dir=args.output_dir.resolve(),
    )
    args.output_dir.mkdir(parents=True, exist_ok=True)

    missing: list[str] = []
    matched = 0
    seen_ids: set[str] = set()

    with args.legal_csv.open("r", encoding="utf-8", newline="") as handle:
        reader = csv.DictReader(handle)
        rows = list(reader)

    title_to_candidates: dict[str, list[str]] = {}
    for row in rows:
        title = _normalize_title(row.get("paper_title") or "")
        if not title:
            continue
        paper_id = (row.get("paper_id") or "").strip()
        doi = (row.get("paper_doi") or "").strip()
        arxiv = (row.get("paper_arxiv") or "").strip()
        candidates = _candidate_ids(paper_id, doi, arxiv)
        if candidates:
            title_to_candidates.setdefault(title, []).extend(candidates)

    for row in rows:
        paper_id = (row.get("paper_id") or "").strip()
        if not paper_id or paper_id in seen_ids:
            continue
        seen_ids.add(paper_id)
        doi = (row.get("paper_doi") or "").strip()
        arxiv = (row.get("paper_arxiv") or "").strip()
        title_key = _normalize_title(row.get("paper_title") or "")
        path = None
        for candidate in _candidate_ids(paper_id, doi, arxiv):
            key = candidate.lower()
            path = pdf_index.get(key) or pdf_index.get(f"paper_{key}")
            if path is not None:
                break
        if path is None and title_key in title_to_candidates:
            for candidate in title_to_candidates[title_key]:
                key = candidate.lower()
                path = pdf_index.get(key) or pdf_index.get(f"paper_{key}")
                if path is not None:
                    break
        if path is None:
            missing.append(paper_id)
            continue

        dest_id = normalize_paper_id(paper_id) or paper_id
        dest = args.output_dir / f"{dest_id}.pdf"
        if dest.is_symlink():
            try:
                current = dest.resolve(strict=True)
            except (OSError, RuntimeError):
                current = None
            if current is None or current != path.resolve():
                dest.unlink()
            else:
                matched += 1
                continue
        elif dest.exists():
            matched += 1
            continue

        if args.mode == "copy":
            shutil.copy2(path, dest)
        else:
            # Relative symlink keeps the test folder portable.
            rel = Path(os.path.relpath(path, start=dest.parent))
            dest.symlink_to(rel)
        matched += 1

    print(f"Matched PDFs: {matched}")
    print(f"Missing PDFs: {len(missing)}")
    if missing:
        print("Missing IDs (first 20):", ", ".join(missing[:20]))

    if args.report is not None:
        args.report.parent.mkdir(parents=True, exist_ok=True)
        args.report.write_text("\n".join(missing) + "\n", encoding="utf-8")
        print(f"Wrote report to {args.report}")


if __name__ == "__main__":
    main()
scripts/compare_defextra_csvs.py
ADDED
@@ -0,0 +1,327 @@
from __future__ import annotations

# ruff: noqa: E402

import argparse
import csv
import re
import sys
from pathlib import Path
from typing import Dict, Tuple

try:
    from scripts.hydrate_defextra import (
        _ensure_trailing_punct,
        _postprocess_text,
    )
except ModuleNotFoundError as exc:
    if exc.name != "scripts":
        raise
    PROJECT_ROOT = Path(__file__).resolve().parent.parent
    if str(PROJECT_ROOT) not in sys.path:
        sys.path.insert(0, str(PROJECT_ROOT))
    from scripts.hydrate_defextra import (
        _ensure_trailing_punct,
        _postprocess_text,
    )


def _normalize_text(text: str) -> str:
    if text is None:
        return ""
    value = text.replace("\u00ad", "")
    value = re.sub(r"([A-Za-z])-\s+([A-Za-z])", r"\1\2", value)
    value = re.sub(r"\s+", " ", value).strip()
    return value


def _normalize_punct(text: str) -> str:
    if text is None:
        return ""
    value = text.replace("\u00ad", "")
    value = re.sub(r"[^\w\s]", "", value)
    value = re.sub(r"\s+", " ", value).strip()
    return value


def _load(path: Path) -> Dict[Tuple[str, str], Dict[str, str]]:
    with path.open(encoding="utf-8", newline="") as handle:
        reader = csv.DictReader(handle)
        return {(row["paper_id"], row["concept"]): row for row in reader}


def _row_flag(
    row: Dict[str, str],
    key: str,
    default: bool = False,
) -> bool:
    value = (row.get(key) or "").strip().lower()
    if not value:
        return default
    return value == "true"


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Compare DefExtra hydrated CSV against reference.",
    )
    parser.add_argument(
        "--ref",
        type=Path,
        default=Path("results/paper_results/defextra_hf_tablefix.csv"),
        help="Reference CSV path.",
    )
    parser.add_argument(
        "--hyd",
        type=Path,
        default=Path(
            "results/paper_results/defextra_hydrated_tablefix_test.csv",
        ),
        help="Hydrated CSV path.",
    )
    parser.add_argument(
        "--limit",
        type=int,
        default=5,
        help="Number of mismatches to print.",
    )
    parser.add_argument(
        "--report",
        type=Path,
        default=None,
        help="Optional path to write a detailed mismatch report.",
    )
    parser.add_argument(
        "--report-limit",
        type=int,
        default=0,
        help="Limit mismatches written to report (0 = all).",
    )
    parser.add_argument(
        "--legal",
        type=Path,
        default=Path("results/paper_results/defextra_legal_tablefix.csv"),
        help="Legal CSV with token counts and linebreak flags.",
    )
    parser.add_argument(
        "--postprocess",
        action="store_true",
        help="Apply hydration postprocessing to hydrated text before compare.",
    )
    args = parser.parse_args()

    ref = _load(args.ref)
    hyd = _load(args.hyd)
    legal = _load(args.legal) if args.postprocess else {}

    missing = [k for k in ref if k not in hyd]
    extra = [k for k in hyd if k not in ref]

    def_mismatch = []
    ctx_mismatch = []
    def_mismatch_norm = []
    ctx_mismatch_norm = []
    def_mismatch_punct = []
    ctx_mismatch_punct = []

    for key, ref_row in ref.items():
        hyd_row = hyd.get(key)
        if not hyd_row:
            continue
        if args.postprocess:
            legal_row = legal.get(key, {})
            def_expected = int(legal_row.get("definition_token_count") or 0)
            ctx_expected = int(legal_row.get("context_token_count") or 0)
            def_preserve = _row_flag(
                legal_row,
                "definition_preserve_linebreaks",
            )
            ctx_preserve = _row_flag(
                legal_row,
                "context_preserve_linebreaks",
            )
            def_preserve_hyphen = _row_flag(
                legal_row,
                "definition_preserve_hyphenation",
            )
            ctx_preserve_hyphen = _row_flag(
                legal_row,
                "context_preserve_hyphenation",
            )
            def_keep_bracket = _row_flag(
                legal_row,
                "definition_has_bracket_citation",
                True,
            )
            def_keep_paren = _row_flag(
                legal_row,
                "definition_has_paren_citation",
                True,
            )
            def_split_letter_digit = not _row_flag(
                legal_row,
                "definition_has_letter_digit",
            )
            ctx_keep_bracket = _row_flag(
                legal_row,
                "context_has_bracket_citation",
                True,
            )
            ctx_keep_paren = _row_flag(
                legal_row,
                "context_has_paren_citation",
                True,
            )
            ctx_split_letter_digit = not _row_flag(
                legal_row,
                "context_has_letter_digit",
            )
            hyd_row = dict(hyd_row)
            hyd_row["definition"] = _ensure_trailing_punct(
                _postprocess_text(
                    hyd_row.get("definition", ""),
                    def_expected,
                    def_preserve,
                    def_preserve_hyphen,
                    def_keep_bracket,
                    def_keep_paren,
                    def_split_letter_digit,
                ),
                legal_row.get("definition_end_punct", ""),
            )
            hyd_row["context"] = _ensure_trailing_punct(
                _postprocess_text(
                    hyd_row.get("context", ""),
                    ctx_expected,
                    ctx_preserve,
                    ctx_preserve_hyphen,
                    ctx_keep_bracket,
                    ctx_keep_paren,
                    ctx_split_letter_digit,
                ),
                legal_row.get("context_end_punct", ""),
            )
        if ref_row.get("definition", "") != hyd_row.get("definition", ""):
            def_mismatch.append(key)
            if _normalize_text(
                ref_row.get("definition", ""),
            ) != _normalize_text(
                hyd_row.get("definition", ""),
            ):
                def_mismatch_norm.append(key)
            if _normalize_punct(ref_row.get("definition", "")) == _normalize_punct(
                hyd_row.get("definition", ""),
            ):
                def_mismatch_punct.append(key)
        if ref_row.get("context", "") != hyd_row.get("context", ""):
            ctx_mismatch.append(key)
            if _normalize_text(ref_row.get("context", "")) != _normalize_text(
                hyd_row.get("context", ""),
            ):
                ctx_mismatch_norm.append(key)
            if _normalize_punct(ref_row.get("context", "")) == _normalize_punct(
                hyd_row.get("context", ""),
            ):
                ctx_mismatch_punct.append(key)

    total_ref = len(ref)
    total_hyd = len(hyd)
    print(f"Reference rows: {total_ref}")
    print(f"Hydrated rows: {total_hyd}")
    print(f"Missing keys: {len(missing)}")
    print(f"Extra keys: {len(extra)}")
    print(f"Definition mismatches (exact): {len(def_mismatch)}")
    print(f"Definition mismatches (normalized): {len(def_mismatch_norm)}")
    print(f"Context mismatches (exact): {len(ctx_mismatch)}")
    print(f"Context mismatches (normalized): {len(ctx_mismatch_norm)}")
    if def_mismatch_punct:
        print(
            "Definition mismatches (punctuation-only): "
            f"{len(def_mismatch_punct)}",
        )
    if ctx_mismatch_punct:
        print(
            "Context mismatches (punctuation-only): "
            f"{len(ctx_mismatch_punct)}",
        )

    if args.limit <= 0:
        return

    shown = 0
    for key in def_mismatch:
        if shown >= args.limit:
            break
        ref_row = ref[key]
        hyd_row = hyd[key]
        print("\nDefinition mismatch:", key)
        print("ref:", ref_row.get("definition", ""))
        print("hyd:", hyd_row.get("definition", ""))
        shown += 1

    shown = 0
    for key in ctx_mismatch:
        if shown >= args.limit:
            break
        ref_row = ref[key]
        hyd_row = hyd[key]
        print("\nContext mismatch:", key)
        print("ref:", ref_row.get("context", ""))
        print("hyd:", hyd_row.get("context", ""))
        shown += 1

    if args.report is not None:
        report_lines = []
|
| 275 |
+
report_lines.append(f"Missing keys: {len(missing)}")
|
| 276 |
+
report_lines.extend([f"- {k}" for k in missing])
|
| 277 |
+
report_lines.append("")
|
| 278 |
+
report_lines.append(
|
| 279 |
+
f"Definition mismatches (exact): {len(def_mismatch)}"
|
| 280 |
+
)
|
| 281 |
+
report_lines.append(
|
| 282 |
+
f"Definition mismatches (normalized): {len(def_mismatch_norm)}"
|
| 283 |
+
)
|
| 284 |
+
report_lines.append(
|
| 285 |
+
f"Definition mismatches (punctuation-only): {len(def_mismatch_punct)}"
|
| 286 |
+
)
|
| 287 |
+
report_lines.append(
|
| 288 |
+
f"Context mismatches (exact): {len(ctx_mismatch)}"
|
| 289 |
+
)
|
| 290 |
+
report_lines.append(
|
| 291 |
+
f"Context mismatches (normalized): {len(ctx_mismatch_norm)}"
|
| 292 |
+
)
|
| 293 |
+
report_lines.append(
|
| 294 |
+
f"Context mismatches (punctuation-only): {len(ctx_mismatch_punct)}"
|
| 295 |
+
)
|
| 296 |
+
report_lines.append("")
|
| 297 |
+
|
| 298 |
+
def_limit = args.report_limit or len(def_mismatch)
|
| 299 |
+
ctx_limit = args.report_limit or len(ctx_mismatch)
|
| 300 |
+
|
| 301 |
+
report_lines.append("Definition mismatches:")
|
| 302 |
+
for key in def_mismatch[:def_limit]:
|
| 303 |
+
ref_row = ref[key]
|
| 304 |
+
hyd_row = hyd[key]
|
| 305 |
+
report_lines.append(f"- {key[0]} | {key[1]}")
|
| 306 |
+
report_lines.append(f" ref: {ref_row.get('definition','')}")
|
| 307 |
+
report_lines.append(f" hyd: {hyd_row.get('definition','')}")
|
| 308 |
+
report_lines.append("")
|
| 309 |
+
|
| 310 |
+
report_lines.append("Context mismatches:")
|
| 311 |
+
for key in ctx_mismatch[:ctx_limit]:
|
| 312 |
+
ref_row = ref[key]
|
| 313 |
+
hyd_row = hyd[key]
|
| 314 |
+
report_lines.append(f"- {key[0]} | {key[1]}")
|
| 315 |
+
report_lines.append(f" ref: {ref_row.get('context','')}")
|
| 316 |
+
report_lines.append(f" hyd: {hyd_row.get('context','')}")
|
| 317 |
+
|
| 318 |
+
args.report.parent.mkdir(parents=True, exist_ok=True)
|
| 319 |
+
args.report.write_text(
|
| 320 |
+
"\n".join(report_lines) + "\n",
|
| 321 |
+
encoding="utf-8",
|
| 322 |
+
)
|
| 323 |
+
print(f"Wrote report to {args.report}")
|
| 324 |
+
|
| 325 |
+
|
| 326 |
+
if __name__ == "__main__":
|
| 327 |
+
main()
|
scripts/defextra_markers.py ADDED
@@ -0,0 +1,560 @@
```python
from __future__ import annotations

import importlib
import re
import unicodedata
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Dict, Iterable, Optional

try:
    etree = importlib.import_module("lxml.etree")
except Exception as exc:
    raise RuntimeError(
        "Missing lxml dependency. Install with `pip install lxml`.",
    ) from exc

NS = {"tei": "http://www.tei-c.org/ns/1.0"}

BLOCK_TAGS = {
    "head",
    "p",
    "item",
    "title",
    "label",
    "figDesc",
    "caption",
    "cell",
    "ab",
    "note",
    "quote",
}

HASH_BASE = 1_000_003
HASH_MASK = (1 << 64) - 1
HASH_VERSION = "tokhash_v2"
ANCHOR_WINDOW = 12
ANCHOR_WINDOW_ALT = 6


def normalize_paper_id(paper_id: str) -> str:
    raw = paper_id.strip()
    if raw.lower().startswith("doi:"):
        raw = raw[4:]
    if raw.startswith("dx.doi.org/"):
        raw = raw[len("dx.doi.org/") :]
    if "doi.org/" in raw:
        raw = raw.split("doi.org/", 1)[1]
    if raw.startswith("http://") or raw.startswith("https://"):
        match = re.search(r"arxiv\.org/(abs|pdf)/([^?#]+)", raw)
        if match:
            return match.group(2).replace(".pdf", "")
        match = re.search(r"/pii/([^/?#]+)", raw)
        if match:
            return match.group(1)
        match = re.search(r"10\.[0-9]+/([^?#]+)", raw)
        if match:
            return match.group(1)
        parts = [p for p in re.split(r"[/?#]", raw) if p]
        if parts:
            return parts[-1]
    match = re.search(r"10\.[0-9]+/([^\s]+)", raw)
    if match:
        return match.group(1)
    return raw


def normalize_arxiv(value: str) -> str:
    cleaned = value.strip()
    match = re.search(
        r"(\d{4}\.\d{4,5}v\d+|[a-z-]+/\d{7}v\d+)",
        cleaned,
        re.I,
    )
    if match:
        return match.group(1)
    return cleaned.replace("arXiv:", "").strip()


def normalize_doi(value: str) -> str:
    cleaned = value.strip()
    if cleaned.startswith("https://doi.org/"):
        cleaned = cleaned[len("https://doi.org/") :]
    if cleaned.startswith("http://doi.org/"):
        cleaned = cleaned[len("http://doi.org/") :]
    return cleaned.lower()


def doi_suffix(value: str) -> str:
    cleaned = normalize_doi(value)
    match = re.search(r"10\.[0-9]+/(.+)", cleaned)
    if match:
        return match.group(1)
    return cleaned


def extract_ids_from_tei(path: Path) -> tuple[Optional[str], Optional[str]]:
    try:
        root = etree.parse(str(path)).getroot()
    except (OSError, etree.XMLSyntaxError):
        return None, None
    bibl = root.find(
        ".//tei:teiHeader/tei:fileDesc/tei:sourceDesc/tei:biblStruct",
        namespaces=NS,
    )
    if bibl is None:
        return None, None
    idnos = bibl.findall(".//tei:idno", namespaces=NS)
    doi = arxiv = None
    for idno in idnos:
        if idno.text is None:
            continue
        value = idno.text.strip()
        if not value:
            continue
        id_type = (idno.get("type") or "").lower()
        if doi is None and (
            id_type == "doi" or value.lower().startswith("10.")
        ):
            doi = value
        if arxiv is None and (
            "arxiv" in id_type or value.lower().startswith("arxiv")
        ):
            arxiv = normalize_arxiv(value)
    return doi, arxiv


def extract_title_from_tei(path: Path) -> Optional[str]:
    try:
        root = etree.parse(str(path)).getroot()
    except (OSError, etree.XMLSyntaxError):
        return None
    for path_expr in (
        ".//tei:teiHeader/tei:fileDesc/tei:titleStmt/tei:title",
        ".//tei:teiHeader/tei:fileDesc/tei:sourceDesc/tei:biblStruct/tei:analytic/tei:title",
        ".//tei:teiHeader/tei:fileDesc/tei:sourceDesc/tei:biblStruct/tei:monogr/tei:title",
    ):
        node = root.find(path_expr, namespaces=NS)
        if node is None:
            continue
        text = "".join(node.itertext()).strip()
        if text:
            return text
    return None


def extract_text_from_pdf(path: Path) -> str:
    pdfplumber_module: Any | None
    try:
        pdfplumber_module = importlib.import_module("pdfplumber")
    except Exception:
        pdfplumber_module = None
    pdfplumber_any: Any = pdfplumber_module
    if pdfplumber_any is not None:
        texts = []
        with pdfplumber_any.open(str(path)) as pdf:
            for page in pdf.pages:
                page_text = page.extract_text() or ""
                words_text = ""
                try:
                    words = page.extract_words(
                        use_text_flow=True,
                        keep_blank_chars=False,
                    )
                    if words:
                        words_text = " ".join(w.get("text", "") for w in words)
                except Exception:
                    words_text = ""
                if words_text and words_text not in page_text:
                    if page_text:
                        page_text = f"{page_text}\n{words_text}"
                    else:
                        page_text = words_text
                texts.append(page_text)
        return "\n\n".join(texts)
    try:
        from pdfminer.high_level import extract_text
    except Exception:  # pragma: no cover - optional dependency
        try:
            from PyPDF2 import PdfReader
        except Exception as py_exc:  # pragma: no cover
            raise RuntimeError(
                "PDF fallback extraction requires pdfminer.six "
                "or PyPDF2. Install with `pip install pdfminer.six`.",
            ) from py_exc
        reader = PdfReader(str(path))
        texts = []
        for page in reader.pages:
            texts.append(page.extract_text() or "")
        return "\n\n".join(texts)
    return extract_text(str(path))


def build_tei_index(tei_dirs: Iterable[Path]) -> Dict[str, Path]:
    index: Dict[str, Path] = {}
    for tei_dir in tei_dirs:
        if not tei_dir.exists():
            continue
        for path in tei_dir.glob("*.grobid.tei.xml"):
            stem = path.name[: -len(".grobid.tei.xml")]
            if stem.startswith("paper_"):
                stem = stem[len("paper_") :]
            index.setdefault(stem, path)
    return index


def _local_name(tag: str) -> str:
    return tag.split("}", 1)[-1] if "}" in tag else tag


def extract_blocks_from_tei(path: Path) -> list[str]:
    root = etree.parse(str(path)).getroot()
    blocks: list[str] = []

    def add_blocks(elem) -> None:
        tag = _local_name(elem.tag)
        if tag in BLOCK_TAGS:
            text = "".join(elem.itertext()).strip()
            if text:
                number = elem.get("n")
                if number and tag in {"head", "label"}:
                    text = f"{number} {text}".strip()
                blocks.append(text)
            return
        for child in elem:
            add_blocks(child)

    for abstract in root.xpath(
        ".//tei:teiHeader//tei:abstract",
        namespaces=NS,
    ):
        add_blocks(abstract)

    text_root = root.find(".//tei:text", namespaces=NS)
    if text_root is not None:
        add_blocks(text_root)
    return blocks


def _normalize_token(token: str) -> str:
    return unicodedata.normalize("NFKC", token).lower()


HYPHEN_CHARS = {
    "-",
    "\u2010",
    "\u2011",
    "\u2012",
    "\u2013",
    "\u2014",
    "\u2212",
}
SOFT_HYPHEN = "\u00ad"


def tokenize_text(
    text: str,
    *,
    return_spans: bool = False,
) -> tuple[list[str], Optional[list[tuple[int, int]]]]:
    tokens: list[str] = []
    spans: list[tuple[int, int]] = []
    i = 0
    while i < len(text):
        ch = text[i]
        if ch == SOFT_HYPHEN:
            i += 1
            continue
        if ch.isalnum():
            start = i
            last_idx = i
            last_alpha = ch.isalpha()
            token_chars = [ch]
            i += 1
            while i < len(text):
                ch = text[i]
                if ch == SOFT_HYPHEN:
                    i += 1
                    continue
                if ch.isalnum():
                    is_alpha = ch.isalpha()
                    if is_alpha != last_alpha:
                        break
                    token_chars.append(ch)
                    last_idx = i
                    last_alpha = is_alpha
                    i += 1
                    continue
                if ch in HYPHEN_CHARS and last_alpha:
                    j = i + 1
                    while j < len(text) and text[j].isspace():
                        j += 1
                    if j < len(text) and text[j].isalpha():
                        i = j
                        continue
                break
            tokens.append(_normalize_token("".join(token_chars)))
            if return_spans:
                spans.append((start, last_idx + 1))
        else:
            i += 1
    return tokens, spans if return_spans else None


def hash_token(token: str) -> int:
    import hashlib

    digest = hashlib.blake2b(token.encode("utf-8"), digest_size=8).digest()
    return int.from_bytes(digest, "big")


def hash_token_sequence(tokens: list[str]) -> tuple[int, str, int]:
    import hashlib

    rolling = 0
    normalized = [_normalize_token(token) for token in tokens]
    for token in normalized:
        rolling = ((rolling * HASH_BASE) + hash_token(token)) & HASH_MASK
    joined = " ".join(normalized).encode("utf-8")
    sha = hashlib.sha256(joined).hexdigest()
    return rolling, sha, len(normalized)


@dataclass
class TokenIndex:
    doc_text: str
    tokens: list[str]
    spans: list[tuple[int, int]]
    token_hashes: list[int]
    rolling_cache: dict[int, dict[int, list[int]]] = field(
        default_factory=dict,
    )

    @classmethod
    def from_text(cls, doc_text: str) -> "TokenIndex":
        tokens, spans = tokenize_text(doc_text, return_spans=True)
        token_hashes = [hash_token(t) for t in tokens]
        return cls(
            doc_text=doc_text,
            tokens=tokens,
            spans=spans or [],
            token_hashes=token_hashes,
        )

    def _build_rolling_index(self, window: int) -> dict[int, list[int]]:
        if window in self.rolling_cache:
            return self.rolling_cache[window]
        index: dict[int, list[int]] = {}
        if window <= 0 or window > len(self.tokens):
            self.rolling_cache[window] = index
            return index

        pow_base = 1
        for _ in range(window - 1):
            pow_base = (pow_base * HASH_BASE) & HASH_MASK

        rolling = 0
        for i in range(window):
            rolling = (
                (rolling * HASH_BASE) + self.token_hashes[i]
            ) & HASH_MASK
        index.setdefault(rolling, []).append(0)

        for i in range(1, len(self.tokens) - window + 1):
            remove = (self.token_hashes[i - 1] * pow_base) & HASH_MASK
            rolling = (rolling - remove) & HASH_MASK
            rolling = (
                (rolling * HASH_BASE) + self.token_hashes[i + window - 1]
            ) & HASH_MASK
            index.setdefault(rolling, []).append(i)

        self.rolling_cache[window] = index
        return index

    def _positions_for_hash(
        self,
        window: int,
        target_hash: int,
        target_sha: str,
    ) -> list[int]:
        index = self._build_rolling_index(window)
        candidates = index.get(target_hash, [])
        if not candidates:
            return []
        import hashlib

        positions: list[int] = []
        for start_idx in candidates:
            end_idx = start_idx + window - 1
            if end_idx >= len(self.tokens):
                continue
            token_slice = self.tokens[start_idx : start_idx + window]
            sha = hashlib.sha256(
                " ".join(token_slice).encode("utf-8"),
            ).hexdigest()
            if sha == target_sha:
                positions.append(start_idx)
        return positions

    def find_token_span_by_hash(
        self,
        window: int,
        target_hash: int,
        target_sha: str,
    ) -> Optional[tuple[int, int]]:
        positions = self._positions_for_hash(window, target_hash, target_sha)
        if not positions:
            return None
        start_idx = positions[0]
        end_idx = start_idx + window - 1
        return start_idx, end_idx

    def find_token_positions_by_hash(
        self,
        window: int,
        target_hash: int,
        target_sha: str,
    ) -> list[int]:
        return self._positions_for_hash(window, target_hash, target_sha)

    def find_span_by_hash(
        self,
        window: int,
        target_hash: int,
        target_sha: str,
    ) -> Optional[tuple[int, int]]:
        span = self.find_token_span_by_hash(window, target_hash, target_sha)
        if span is None:
            return None
        start_idx, end_idx = span
        start_char = self.spans[start_idx][0]
        end_char = self.spans[end_idx][1]
        return start_char, end_char


@dataclass
class DocIndex:
    doc_text: str
    norm_space: str
    norm_space_map: list[int]
    norm_nospace: str
    norm_nospace_map: list[int]

    @classmethod
    def from_tei(cls, tei_path: Path) -> "DocIndex":
        blocks = extract_blocks_from_tei(tei_path)
        doc_text = " ".join(blocks)
        return cls.from_text(doc_text)

    @classmethod
    def from_text(cls, doc_text: str) -> "DocIndex":
        norm_space: list[str] = []
        norm_space_map: list[int] = []
        norm_nospace: list[str] = []
        norm_nospace_map: list[int] = []
        prev_space = False
        i = 0
        while i < len(doc_text):
            ch = doc_text[i]
            if ch == "-" and i > 0 and doc_text[i - 1].isalpha():
                j = i + 1
                while j < len(doc_text) and doc_text[j].isspace():
                    j += 1
                if j < len(doc_text) and doc_text[j].isalpha():
                    i = j
                    continue
            lower = ch.lower()
            if lower.isalnum():
                norm_space.append(lower)
                norm_space_map.append(i)
                norm_nospace.append(lower)
                norm_nospace_map.append(i)
                prev_space = False
            else:
                if not prev_space:
                    norm_space.append(" ")
                    norm_space_map.append(i)
                prev_space = True
            i += 1

        while norm_space and norm_space[0] == " ":
            norm_space.pop(0)
            norm_space_map.pop(0)
        while norm_space and norm_space[-1] == " ":
            norm_space.pop()
            norm_space_map.pop()

        return cls(
            doc_text=doc_text,
            norm_space="".join(norm_space),
            norm_space_map=norm_space_map,
            norm_nospace="".join(norm_nospace),
            norm_nospace_map=norm_nospace_map,
        )

    def find_span(self, query: str) -> Optional[tuple[int, int, str]]:
        if not query:
            return None
        n_q, n_q_ns = _normalize_query(query)
        idx = self.norm_space.find(n_q)
        if idx != -1:
            start = self.norm_space_map[idx]
            end = self.norm_space_map[idx + len(n_q) - 1] + 1
            return start, end, "space"

        trimmed = re.sub(r"^\s*\d+(?:\.\d+)*\s+", "", query)
        if trimmed != query:
            n_q_trim, n_q_trim_ns = _normalize_query(trimmed)
            idx = self.norm_space.find(n_q_trim)
            if idx != -1:
                start = self.norm_space_map[idx]
                end = self.norm_space_map[idx + len(n_q_trim) - 1] + 1
                return start, end, "space_trim"
            n_q_ns = n_q_trim_ns

        idx = self.norm_nospace.find(n_q_ns)
        if idx != -1:
            start = self.norm_nospace_map[idx]
            end = self.norm_nospace_map[idx + len(n_q_ns) - 1] + 1
            return start, end, "nospace"
        return None

    def extract_span(self, start: Optional[int], end: Optional[int]) -> str:
        if start is None or end is None:
            return ""
        if start < 0 or end > len(self.doc_text) or start >= end:
            return ""
        return self.doc_text[start:end]


def _normalize_query(text: str) -> tuple[str, str]:
    norm_space: list[str] = []
    norm_nospace: list[str] = []
    prev_space = False
    i = 0
    while i < len(text):
        ch = text[i]
        if ch == "-" and i > 0 and text[i - 1].isalpha():
            j = i + 1
            while j < len(text) and text[j].isspace():
                j += 1
            if j < len(text) and text[j].isalpha():
                i = j
                continue
        lower = ch.lower()
        if lower.isalnum():
            norm_space.append(lower)
            norm_nospace.append(lower)
            prev_space = False
        else:
            if not prev_space:
                norm_space.append(" ")
            prev_space = True
        i += 1

    while norm_space and norm_space[0] == " ":
        norm_space.pop(0)
    while norm_space and norm_space[-1] == " ":
        norm_space.pop()
    return "".join(norm_space), "".join(norm_nospace)
```
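The `TokenIndex._build_rolling_index` method above is a Rabin-Karp rolling hash over per-token hashes: the outgoing token's contribution (scaled by `HASH_BASE**(window-1)`) is subtracted before the incoming one is appended, all modulo 2**64. A standalone sketch of the same technique, reusing the file's constants and `blake2b` token hash, can be checked against direct recomputation:

```python
import hashlib

HASH_BASE = 1_000_003
HASH_MASK = (1 << 64) - 1


def hash_token(token: str) -> int:
    digest = hashlib.blake2b(token.encode("utf-8"), digest_size=8).digest()
    return int.from_bytes(digest, "big")


def direct_hash(tokens: list[str]) -> int:
    # Recompute the polynomial hash of a window from scratch.
    rolling = 0
    for t in tokens:
        rolling = ((rolling * HASH_BASE) + hash_token(t)) & HASH_MASK
    return rolling


def rolling_window_hashes(tokens: list[str], window: int) -> list[int]:
    # Rabin-Karp: slide a fixed window over the token stream, removing the
    # outgoing token (scaled by HASH_BASE**(window-1)) and adding the new one.
    if window <= 0 or window > len(tokens):
        return []
    pow_base = pow(HASH_BASE, window - 1, 1 << 64)  # same as repeated *& MASK
    hashes = [hash_token(t) for t in tokens]
    out: list[int] = []
    rolling = 0
    for i in range(window):
        rolling = ((rolling * HASH_BASE) + hashes[i]) & HASH_MASK
    out.append(rolling)
    for i in range(1, len(tokens) - window + 1):
        rolling = (rolling - (hashes[i - 1] * pow_base)) & HASH_MASK
        rolling = ((rolling * HASH_BASE) + hashes[i + window - 1]) & HASH_MASK
        out.append(rolling)
    return out
```

Because 64-bit rolling hashes can collide, the real index confirms each candidate position with a SHA-256 of the token slice, exactly as `_positions_for_hash` does.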
scripts/defextra_pdf_aliases.py ADDED
@@ -0,0 +1,46 @@
```python
from __future__ import annotations

from scripts.defextra_markers import normalize_doi, normalize_paper_id

PDF_ALIAS_MAP: dict[str, list[str]] = {
    # Info Processing & Management DOI mapped to Elsevier PII stems.
    "10.1016/j.ipm.2021.102505": [
        "S0306457321000157",
        "1-s2.0-S0306457321000157-main",
        "S0957417423021437",
        "1-s2.0-S0957417423021437-main",
    ],
    "j.ipm.2021.102505": [
        "S0306457321000157",
        "1-s2.0-S0306457321000157-main",
        "S0957417423021437",
        "1-s2.0-S0957417423021437-main",
    ],
    "dx.doi.org/https://doi.org/10.1016/j.ipm.2021.102505": [
        "S0306457321000157",
        "1-s2.0-S0306457321000157-main",
        "S0957417423021437",
        "1-s2.0-S0957417423021437-main",
    ],
}


def candidate_pdf_aliases(
    paper_id: str,
    doi: str,
    arxiv: str,
) -> list[str]:
    keys = {
        (paper_id or "").strip(),
        normalize_paper_id(paper_id or ""),
        (doi or "").strip(),
        normalize_doi(doi or ""),
        (arxiv or "").strip(),
    }
    aliases: list[str] = []
    for key in keys:
        if not key:
            continue
        aliases.extend(PDF_ALIAS_MAP.get(key, []))
    return aliases
```
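The alias lookup above just normalizes every identifier form of a paper and unions the matching PDF filename stems. A self-contained sketch of the same idea (with a hypothetical one-entry alias map and a simplified `normalize_doi`, not the module's own):

```python
# Hypothetical one-entry map: normalized DOI -> candidate PDF filename stems.
ALIAS_MAP: dict[str, list[str]] = {
    "10.1016/j.ipm.2021.102505": [
        "S0306457321000157",
        "1-s2.0-S0306457321000157-main",
    ],
}


def normalize_doi(value: str) -> str:
    # Strip doi.org URL prefixes and lowercase, as the real helper does.
    cleaned = value.strip()
    for prefix in ("https://doi.org/", "http://doi.org/"):
        if cleaned.startswith(prefix):
            cleaned = cleaned[len(prefix):]
    return cleaned.lower()


def candidate_aliases(doi: str) -> list[str]:
    # Try both the raw and normalized key; extend with any matching stems.
    aliases: list[str] = []
    for key in {doi.strip(), normalize_doi(doi)}:
        if key:
            aliases.extend(ALIAS_MAP.get(key, []))
    return aliases
```

The hydration scripts can then probe `pdfs/<stem>.pdf` for each returned stem, which is what makes "filename should match paper_id, DOI/PII alias, or arXiv ID" work in the quickstart.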
scripts/hydrate_defextra.py ADDED
@@ -0,0 +1,1760 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
from __future__ import annotations
|
| 2 |
+
|
| 3 |
+
# ruff: noqa: E402
|
| 4 |
+
|
| 5 |
+
import argparse
|
| 6 |
+
import csv
|
| 7 |
+
import re
|
| 8 |
+
import subprocess
|
| 9 |
+
from pathlib import Path
|
| 10 |
+
from typing import Dict, Optional
|
| 11 |
+
|
| 12 |
+
import sys
|
| 13 |
+
|
| 14 |
+
try:
|
| 15 |
+
from scripts.defextra_markers import (
|
| 16 |
+
DocIndex,
|
| 17 |
+
HASH_VERSION,
|
| 18 |
+
TokenIndex,
|
| 19 |
+
build_tei_index,
|
| 20 |
+
doi_suffix,
|
| 21 |
+
extract_ids_from_tei,
|
| 22 |
+
extract_text_from_pdf,
|
| 23 |
+
normalize_arxiv,
|
| 24 |
+
normalize_doi,
|
| 25 |
+
normalize_paper_id,
|
| 26 |
+
tokenize_text,
|
| 27 |
+
)
|
| 28 |
+
from scripts.defextra_pdf_aliases import candidate_pdf_aliases
|
| 29 |
+
except ModuleNotFoundError as exc:
|
| 30 |
+
if exc.name != "scripts":
|
| 31 |
+
raise
|
| 32 |
+
PROJECT_ROOT = Path(__file__).resolve().parent.parent
|
| 33 |
+
if str(PROJECT_ROOT) not in sys.path:
|
| 34 |
+
sys.path.insert(0, str(PROJECT_ROOT))
|
| 35 |
+
from scripts.defextra_markers import (
|
| 36 |
+
DocIndex,
|
| 37 |
+
HASH_VERSION,
|
| 38 |
+
TokenIndex,
|
| 39 |
+
build_tei_index,
|
| 40 |
+
doi_suffix,
|
| 41 |
+
extract_ids_from_tei,
|
| 42 |
+
extract_text_from_pdf,
|
| 43 |
+
normalize_arxiv,
|
| 44 |
+
normalize_doi,
|
| 45 |
+
normalize_paper_id,
|
| 46 |
+
tokenize_text,
|
| 47 |
+
)
|
| 48 |
+
from scripts.defextra_pdf_aliases import candidate_pdf_aliases
|
| 49 |
+
|
| 50 |
+
TRAILING_PUNCT = set(".,;:?!)]}\"'")
|
| 51 |
+
END_PUNCT = {".", ",", ";", ":", "?", "!"}
|
| 52 |
+
TRAILING_QUOTES = {"'", '"', "”", "’", ")", "]"}
|
| 53 |
+
CITATION_BRACKET_RE = re.compile(r"\[[0-9][0-9,;\s\-–]*\]")
|
| 54 |
+
CITATION_PAREN_RE = re.compile(r"\([^)]*\d{4}[^)]*\)")
|
| 55 |
+
|
| 56 |
+
|
| 57 |
+
def _extend_span_end(doc_text: str, end: int) -> int:
|
| 58 |
+
if end < 0:
|
| 59 |
+
return end
|
| 60 |
+
limit = len(doc_text)
|
| 61 |
+
while end < limit and doc_text[end] in TRAILING_PUNCT:
|
| 62 |
+
end += 1
|
| 63 |
+
j = end
|
| 64 |
+
while j < limit and doc_text[j].isspace():
|
| 65 |
+
j += 1
|
| 66 |
+
if j < limit and doc_text[j] in TRAILING_PUNCT:
|
| 67 |
+
end = j + 1
|
| 68 |
+
while end < limit and doc_text[end] in TRAILING_PUNCT:
|
| 69 |
+
end += 1
|
| 70 |
+
return end
|
| 71 |
+
|
| 72 |
+
|
| 73 |
+
def _extract_with_trailing_punct(
|
| 74 |
+
doc_text: str,
|
| 75 |
+
start: Optional[int],
|
| 76 |
+
end: Optional[int],
|
| 77 |
+
) -> str:
|
| 78 |
+
if start is None or end is None:
|
| 79 |
+
return ""
|
| 80 |
+
if start < 0 or end > len(doc_text) or start >= end:
|
| 81 |
+
return ""
|
| 82 |
+
end = _extend_span_end(doc_text, end)
|
| 83 |
+
return doc_text[start:end]
|
| 84 |
+
|
| 85 |
+
|
| 86 |
+
def _token_count(text: str) -> int:
|
| 87 |
+
tokens, _ = tokenize_text(text or "", return_spans=False)
|
| 88 |
+
return len(tokens)
|
| 89 |
+
|
| 90 |
+
|
| 91 |
+
def _row_flag(row: dict, key: str, default: bool = False) -> bool:
|
| 92 |
+
value = (row.get(key) or "").strip().lower()
|
| 93 |
+
if not value:
|
| 94 |
+
return default
|
| 95 |
+
return value == "true"
|
| 96 |
+
|
| 97 |
+
|
| 98 |
+
def _trim_pattern(
|
| 99 |
+
text: str,
|
| 100 |
+
expected: int,
|
| 101 |
+
pattern: re.Pattern[str],
|
| 102 |
+
) -> str:
|
| 103 |
+
if not text or expected <= 0:
|
| 104 |
+
return text
|
| 105 |
+
while True:
|
| 106 |
+
current = _token_count(text)
|
| 107 |
+
best = text
|
| 108 |
+
best_diff = abs(current - expected)
|
| 109 |
+
improved = False
|
| 110 |
+
for match in pattern.finditer(text):
|
| 111 |
+
candidate = (
|
| 112 |
+
text[: match.start()] + " " + text[match.end() :]
|
| 113 |
+
).strip()
|
| 114 |
+
diff = abs(_token_count(candidate) - expected)
|
| 115 |
+
if diff < best_diff:
|
| 116 |
+
best = candidate
|
| 117 |
+
best_diff = diff
|
| 118 |
+
improved = True
|
| 119 |
+
if not improved:
|
| 120 |
+
break
|
| 121 |
+
text = best
|
| 122 |
+
if _token_count(text) <= expected:
|
| 123 |
+
break
|
| 124 |
+
return text
|
| 125 |
+
|
| 126 |
+
|
| 127 |
+
def _trim_citations(text: str, expected: int) -> str:
|
| 128 |
+
if not text or expected <= 0:
|
| 129 |
+
return text
|
| 130 |
+
current = _token_count(text)
|
| 131 |
+
if current <= expected:
|
| 132 |
+
return text
|
| 133 |
+
text = _trim_pattern(text, expected, CITATION_BRACKET_RE)
|
| 134 |
+
if _token_count(text) <= expected:
|
| 135 |
+
return text
|
| 136 |
+
text = _trim_pattern(text, expected, CITATION_PAREN_RE)
|
| 137 |
+
return text
|
| 138 |
+
|
| 139 |
+
|
| 140 |
+
def _trim_to_token_count(text: str, expected: int) -> str:
|
| 141 |
+
if not text or expected <= 0:
|
| 142 |
+
return text
|
| 143 |
+
tokens, spans = tokenize_text(text, return_spans=True)
|
| 144 |
+
if not spans or len(spans) <= expected:
|
| 145 |
+
return text
|
| 146 |
+
end_idx = spans[expected - 1][1]
|
| 147 |
+
end_idx = _extend_span_end(text, end_idx)
|
| 148 |
+
return text[:end_idx].rstrip()
|
| 149 |
+
|
| 150 |
+
|
| 151 |
+
def _cleanup_spacing(text: str) -> str:
|
| 152 |
+
if not text:
|
| 153 |
+
return text
|
| 154 |
+
value = text
|
| 155 |
+
value = value.replace("“", "\"").replace("”", "\"")
|
| 156 |
+
value = value.replace("’", "'").replace("‘", "'")
|
| 157 |
+
def _dash_repl(match: re.Match[str]) -> str:
|
| 158 |
+
run = match.group(0)
|
| 159 |
+
return "--" if len(run) >= 2 else "-"
|
| 160 |
+
|
| 161 |
+
value = re.sub(r"[\u2010-\u2015\u2212\u2043]+", _dash_repl, value)
|
| 162 |
+
value = value.replace("\ufb00", "ff")
|
| 163 |
+
value = value.replace("\ufb01", "fi")
|
| 164 |
+
value = value.replace("\ufb02", "fl")
|
| 165 |
+
value = value.replace("\ufb03", "ffi")
|
| 166 |
+
value = value.replace("\ufb04", "ffl")
|
| 167 |
+
value = value.replace("…", "...")
|
| 168 |
+
value = re.sub(r"-{3,}", "--", value)
|
| 169 |
+
value = re.sub(r"([a-z0-9])([.!?])(?=[A-Z])", r"\1\2 ", value)
|
| 170 |
+
value = re.sub(r"([A-Za-z]),(?=[A-Za-z])", r"\1, ", value)
|
| 171 |
+
value = re.sub(r"([A-Za-z0-9])([;:])(?=[A-Za-z])", r"\1\2 ", value)
|
| 172 |
+
value = re.sub(r"(\d)\s+(s)\b", r"\1\2", value)
|
| 173 |
+
value = re.sub(r"([0-9])([A-Za-z])", r"\1 \2", value)
|
| 174 |
+
value = re.sub(r"[ \t]+([,.;:!?])", r"\1", value)
|
| 175 |
+
value = re.sub(r"\(\s+", "(", value)
|
| 176 |
+
value = re.sub(r"\s+\)", ")", value)
|
| 177 |
+
value = re.sub(r"\[\s+", "[", value)
|
| 178 |
+
value = re.sub(r"\s+\]", "]", value)
|
| 179 |
+
value = re.sub(r"\)(?=[A-Za-z0-9])", ") ", value)
|
| 180 |
+
value = re.sub(r"\](?=[A-Za-z0-9])", "] ", value)
|
| 181 |
+
|
| 182 |
+
def _space_citation_commas(match: re.Match[str]) -> str:
|
| 183 |
+
inner = match.group(1)
|
| 184 |
+
inner = re.sub(r",(?!\\s)", ", ", inner)
|
| 185 |
+
return f"[{inner}]"
|
| 186 |
+
|
| 187 |
+
value = re.sub(r"\[([0-9,;\s\-–]+)\]", _space_citation_commas, value)
|
| 188 |
+
value = re.sub(r"[ \t]{2,}", " ", value)
|
| 189 |
+
return value
|
| 190 |
+
|
| 191 |
+
|
| 192 |
+
def _normalize_whitespace(
|
| 193 |
+
text: str,
|
| 194 |
+
preserve_linebreaks: bool,
|
| 195 |
+
) -> str:
|
| 196 |
+
if not text:
|
| 197 |
+
return text
|
| 198 |
+
value = _cleanup_spacing(text)
|
| 199 |
+
if preserve_linebreaks:
|
| 200 |
+
value = re.sub(r"[ \t]{2,}", " ", value)
|
| 201 |
+
return value
|
| 202 |
+
value = re.sub(r"\s+", " ", value).strip()
|
| 203 |
+
return value
|
| 204 |
+
|
| 205 |
+
|
| 206 |
+
def _normalize_hyphenation(
|
| 207 |
+
text: str,
|
| 208 |
+
preserve_hyphenation: bool,
|
| 209 |
+
) -> str:
|
| 210 |
+
if not text:
|
| 211 |
+
return text
|
| 212 |
+
if preserve_hyphenation:
|
| 213 |
+
return text
|
| 214 |
+
return re.sub(r"([A-Za-z])-\s+([A-Za-z])", r"\1\2", text)
|
| 215 |
+
|
| 216 |
+
|
| 217 |
+
def _postprocess_text(
|
| 218 |
+
text: str,
|
| 219 |
+
expected_tokens: int,
|
| 220 |
+
preserve_linebreaks: bool,
|
| 221 |
+
preserve_hyphenation: bool,
|
| 222 |
+
keep_bracket_citations: bool = True,
|
| 223 |
+
keep_paren_citations: bool = True,
|
| 224 |
+
split_letter_digit: bool = True,
|
| 225 |
+
) -> str:
|
| 226 |
+
value = text
|
| 227 |
+
if not keep_bracket_citations:
|
| 228 |
+
value = CITATION_BRACKET_RE.sub(" ", value)
|
| 229 |
+
if not keep_paren_citations:
|
| 230 |
+
value = CITATION_PAREN_RE.sub(" ", value)
|
| 231 |
+
value = _trim_citations(value, expected_tokens)
|
| 232 |
+
value = _trim_to_token_count(value, expected_tokens)
|
| 233 |
+
value = _cleanup_spacing(value)
|
| 234 |
+
if split_letter_digit:
|
| 235 |
+
value = re.sub(r"([A-Za-z])([0-9])", r"\1 \2", value)
|
| 236 |
+
value = _normalize_hyphenation(value, preserve_hyphenation)
|
| 237 |
+
return _normalize_whitespace(value, preserve_linebreaks)
|
| 238 |
+
|
| 239 |
+
|
| 240 |
+
def _ensure_trailing_punct(text: str, end_punct: str) -> str:
|
| 241 |
+
if not text or not end_punct:
|
| 242 |
+
stripped = text.rstrip()
|
| 243 |
+
if not stripped:
|
| 244 |
+
return text
|
| 245 |
+
i = len(stripped) - 1
|
| 246 |
+
suffix = ""
|
| 247 |
+
while i >= 0 and stripped[i] in TRAILING_QUOTES:
|
| 248 |
+
suffix = stripped[i] + suffix
|
| 249 |
+
i -= 1
|
| 250 |
+
base = stripped[: i + 1]
|
| 251 |
+
if base and base[-1] in END_PUNCT:
|
| 252 |
+
base = base[:-1]
|
| 253 |
+
if base and suffix:
|
| 254 |
+
if ")" in suffix and "(" not in base:
|
| 255 |
+
suffix = suffix.replace(")", "")
|
| 256 |
+
if "]" in suffix and "[" not in base:
|
| 257 |
+
suffix = suffix.replace("]", "")
|
| 258 |
+
return f"{base}{suffix}"
|
| 259 |
+
stripped = text.rstrip()
|
| 260 |
+
if not stripped:
|
| 261 |
+
return text
|
| 262 |
+
i = len(stripped) - 1
|
| 263 |
+
suffix = ""
|
| 264 |
+
while i >= 0 and stripped[i] in TRAILING_QUOTES:
|
| 265 |
+
suffix = stripped[i] + suffix
|
| 266 |
+
i -= 1
|
| 267 |
+
base = stripped[: i + 1]
|
| 268 |
+
if base and base[-1] in END_PUNCT:
|
| 269 |
+
base = base[:-1] + end_punct
|
| 270 |
+
else:
|
| 271 |
+
base = f"{base}{end_punct}"
|
| 272 |
+
return f"{base}{suffix}"
|
| 273 |
+
|
| 274 |
+
|
| 275 |
+
def _find_pdf_hash_span(
|
| 276 |
+
row: dict,
|
| 277 |
+
pdf_token_index: Optional[TokenIndex],
|
| 278 |
+
prefix: str,
|
| 279 |
+
) -> Optional[tuple[int, int]]:
|
| 280 |
+
if pdf_token_index is None:
|
| 281 |
+
return None
|
| 282 |
+
spec = _select_hash_specs(row, prefix)
|
| 283 |
+
if spec:
|
| 284 |
+
span = pdf_token_index.find_span_by_hash(*spec)
|
| 285 |
+
if span:
|
| 286 |
+
return span
|
| 287 |
+
return None
|
| 288 |
+
|
| 289 |
+
|
| 290 |
+
def _candidate_ids(paper_id: str, doi: str, arxiv: str) -> list[str]:
|
| 291 |
+
candidates = [
|
| 292 |
+
paper_id,
|
| 293 |
+
normalize_paper_id(paper_id),
|
| 294 |
+
]
|
| 295 |
+
if doi:
|
| 296 |
+
candidates.append(doi)
|
| 297 |
+
candidates.append(doi_suffix(doi))
|
| 298 |
+
if arxiv:
|
| 299 |
+
candidates.append(arxiv)
|
| 300 |
+
candidates.append(normalize_arxiv(arxiv))
|
| 301 |
+
seen = set()
|
| 302 |
+
ordered = []
|
| 303 |
+
for item in candidates:
|
| 304 |
+
value = (item or "").strip()
|
| 305 |
+
if value and value not in seen:
|
| 306 |
+
seen.add(value)
|
| 307 |
+
ordered.append(value)
|
| 308 |
+
for alias in candidate_pdf_aliases(paper_id, doi, arxiv):
|
| 309 |
+
value = (alias or "").strip()
|
| 310 |
+
if value and value not in seen:
|
| 311 |
+
seen.add(value)
|
| 312 |
+
ordered.append(value)
|
| 313 |
+
return ordered
|
| 314 |
+
|
| 315 |
+
|
| 316 |
+
def _normalize_title(title: str) -> str:
|
| 317 |
+
return " ".join(title.lower().split())
|
| 318 |
+
|
| 319 |
+
|
| 320 |
+
def _build_meta_index(
|
| 321 |
+
tei_index: Dict[str, Path],
|
| 322 |
+
) -> tuple[Dict[str, Path], Dict[str, Path]]:
|
| 323 |
+
doi_index: Dict[str, Path] = {}
|
| 324 |
+
arxiv_index: Dict[str, Path] = {}
|
| 325 |
+
for path in tei_index.values():
|
| 326 |
+
doi, arxiv = extract_ids_from_tei(path)
|
| 327 |
+
if doi:
|
| 328 |
+
doi_index.setdefault(normalize_doi(doi), path)
|
| 329 |
+
doi_index.setdefault(doi_suffix(doi), path)
|
| 330 |
+
if arxiv:
|
| 331 |
+
arxiv_index.setdefault(normalize_arxiv(arxiv), path)
|
| 332 |
+
return doi_index, arxiv_index
|
| 333 |
+
|
| 334 |
+
|
| 335 |
+
def _resolve_tei_path(
|
| 336 |
+
paper_id: str,
|
| 337 |
+
doi: str,
|
| 338 |
+
arxiv: str,
|
| 339 |
+
tei_index: Dict[str, Path],
|
| 340 |
+
doi_index: Dict[str, Path],
|
| 341 |
+
arxiv_index: Dict[str, Path],
|
| 342 |
+
) -> Optional[Path]:
|
| 343 |
+
for candidate in _candidate_ids(paper_id, doi, arxiv):
|
| 344 |
+
if candidate in tei_index:
|
| 345 |
+
return tei_index[candidate]
|
| 346 |
+
if candidate.startswith("paper_"):
|
| 347 |
+
stripped = candidate[len("paper_") :]
|
| 348 |
+
if stripped in tei_index:
|
| 349 |
+
return tei_index[stripped]
|
| 350 |
+
if doi:
|
| 351 |
+
doi_key = normalize_doi(doi)
|
| 352 |
+
if doi_key in doi_index:
|
| 353 |
+
return doi_index[doi_key]
|
| 354 |
+
doi_key = doi_suffix(doi)
|
| 355 |
+
if doi_key in doi_index:
|
| 356 |
+
return doi_index[doi_key]
|
| 357 |
+
if arxiv:
|
| 358 |
+
arxiv_key = normalize_arxiv(arxiv)
|
| 359 |
+
if arxiv_key in arxiv_index:
|
| 360 |
+
return arxiv_index[arxiv_key]
|
| 361 |
+
return None
|
| 362 |
+
|
| 363 |
+
|
| 364 |
+
def _tei_stem(path: Path) -> str:
|
| 365 |
+
name = path.name
|
| 366 |
+
if name.endswith(".grobid.tei.xml"):
|
| 367 |
+
name = name[: -len(".grobid.tei.xml")]
|
| 368 |
+
return name
|
| 369 |
+
|
| 370 |
+
|
| 371 |
+
def _build_pdf_index(pdf_dir: Path) -> Dict[str, Path]:
|
| 372 |
+
index: Dict[str, Path] = {}
|
| 373 |
+
if not pdf_dir.exists():
|
| 374 |
+
return index
|
| 375 |
+
version_re = re.compile(r"^(?P<base>.+?)(v\d+)$", re.IGNORECASE)
|
| 376 |
+
arxiv_re = re.compile(r"^(?P<base>\d{4}\.\d{4,5})v\d+$", re.IGNORECASE)
|
| 377 |
+
pii_re = re.compile(r"(S\d{8,})", re.IGNORECASE)
|
| 378 |
+
for suffix in ("*.pdf", "*.PDF"):
|
| 379 |
+
for path in pdf_dir.rglob(suffix):
|
| 380 |
+
stem = path.stem
|
| 381 |
+
index.setdefault(stem, path)
|
| 382 |
+
index.setdefault(normalize_paper_id(stem), path)
|
| 383 |
+
index.setdefault(f"paper_{stem}", path)
|
| 384 |
+
if stem.startswith("paper_"):
|
| 385 |
+
stripped = stem[len("paper_") :]
|
| 386 |
+
if stripped:
|
| 387 |
+
index.setdefault(stripped, path)
|
| 388 |
+
index.setdefault(normalize_paper_id(stripped), path)
|
| 389 |
+
if stem.endswith("_fixed") or stem.endswith("-fixed"):
|
| 390 |
+
base = stem[: -len("_fixed")] if stem.endswith("_fixed") else stem[: -len("-fixed")]
|
| 391 |
+
if base:
|
| 392 |
+
index[base] = path
|
| 393 |
+
index[normalize_paper_id(base)] = path
|
| 394 |
+
index[f"paper_{base}"] = path
|
| 395 |
+
if base.startswith("paper_"):
|
| 396 |
+
stripped_base = base[len("paper_") :]
|
| 397 |
+
if stripped_base:
|
| 398 |
+
index[stripped_base] = path
|
| 399 |
+
index[normalize_paper_id(stripped_base)] = path
|
| 400 |
+
match = arxiv_re.match(stem)
|
| 401 |
+
if match:
|
| 402 |
+
base = match.group("base")
|
| 403 |
+
index.setdefault(base, path)
|
| 404 |
+
index.setdefault(normalize_paper_id(base), path)
|
| 405 |
+
match = version_re.match(stem)
|
| 406 |
+
if match:
|
| 407 |
+
base = match.group("base")
|
| 408 |
+
index.setdefault(base, path)
|
| 409 |
+
index.setdefault(normalize_paper_id(base), path)
|
| 410 |
+
pii_match = pii_re.search(stem)
|
| 411 |
+
if pii_match:
|
| 412 |
+
pii = pii_match.group(1)
|
| 413 |
+
index.setdefault(pii, path)
|
| 414 |
+
index.setdefault(normalize_paper_id(pii), path)
|
| 415 |
+
return index
|
| 416 |
+
|
| 417 |
+
|
| 418 |
+
def _select_hash_specs(
|
| 419 |
+
row: dict,
|
| 420 |
+
prefix: str,
|
| 421 |
+
) -> Optional[tuple[int, int, str]]:
|
| 422 |
+
hash_version = (row.get("hash_version") or "").strip()
|
| 423 |
+
if hash_version and hash_version != HASH_VERSION:
|
| 424 |
+
return None
|
| 425 |
+
count = row.get(f"{prefix}_token_count") or ""
|
| 426 |
+
hash64 = row.get(f"{prefix}_hash64") or ""
|
| 427 |
+
sha = row.get(f"{prefix}_sha256") or ""
|
| 428 |
+
if not count or not hash64 or not sha:
|
| 429 |
+
return None
|
| 430 |
+
try:
|
| 431 |
+
return int(count), int(hash64), sha
|
| 432 |
+
except ValueError:
|
| 433 |
+
return None
|
| 434 |
+
|
| 435 |
+
|
| 436 |
+
def _select_anchor_specs(
|
| 437 |
+
row: dict,
|
| 438 |
+
prefix: str,
|
| 439 |
+
position: str,
|
| 440 |
+
) -> Optional[tuple[int, int, str]]:
|
| 441 |
+
hash_version = (row.get("hash_version") or "").strip()
|
| 442 |
+
if hash_version and hash_version != HASH_VERSION:
|
| 443 |
+
return None
|
| 444 |
+
count = row.get(f"{prefix}_{position}_token_count") or ""
|
| 445 |
+
hash64 = row.get(f"{prefix}_{position}_hash64") or ""
|
| 446 |
+
sha = row.get(f"{prefix}_{position}_sha256") or ""
|
| 447 |
+
if not count or not hash64 or not sha:
|
| 448 |
+
return None
|
| 449 |
+
try:
|
| 450 |
+
return int(count), int(hash64), sha
|
| 451 |
+
except ValueError:
|
| 452 |
+
return None
|
| 453 |
+
|
| 454 |
+
|
| 455 |
+
def _select_anchor_spec_list(
|
| 456 |
+
row: dict,
|
| 457 |
+
prefix: str,
|
| 458 |
+
position: str,
|
| 459 |
+
) -> list[tuple[int, int, str]]:
|
| 460 |
+
specs: list[tuple[int, int, str]] = []
|
| 461 |
+
primary = _select_anchor_specs(row, prefix, position)
|
| 462 |
+
if primary is not None:
|
| 463 |
+
specs.append(primary)
|
| 464 |
+
alt_count = row.get(f"{prefix}_{position}_alt_token_count") or ""
|
| 465 |
+
alt_hash = row.get(f"{prefix}_{position}_alt_hash64") or ""
|
| 466 |
+
alt_sha = row.get(f"{prefix}_{position}_alt_sha256") or ""
|
| 467 |
+
hash_version = (row.get("hash_version") or "").strip()
|
| 468 |
+
if hash_version and hash_version != HASH_VERSION:
|
| 469 |
+
return specs
|
| 470 |
+
if alt_count and alt_hash and alt_sha:
|
| 471 |
+
try:
|
| 472 |
+
specs.append((int(alt_count), int(alt_hash), alt_sha))
|
| 473 |
+
except ValueError:
|
| 474 |
+
pass
|
| 475 |
+
return specs
|
| 476 |
+
|
| 477 |
+
|
| 478 |
+
def _match_doc_by_hash(
|
| 479 |
+
token_index: TokenIndex,
|
| 480 |
+
hash_specs: list[tuple[int, int, str]],
|
| 481 |
+
) -> int:
|
| 482 |
+
score = 0
|
| 483 |
+
for window, hash64, sha in hash_specs:
|
| 484 |
+
if token_index.find_span_by_hash(window, hash64, sha):
|
| 485 |
+
score += 1
|
| 486 |
+
return score
|
| 487 |
+
|
| 488 |
+
|
| 489 |
+
def _build_mid_candidates(
|
| 490 |
+
token_index: TokenIndex,
|
| 491 |
+
mid_specs: Optional[list[tuple[int, int, str]]],
|
| 492 |
+
) -> list[tuple[int, int]]:
|
| 493 |
+
if not mid_specs:
|
| 494 |
+
return []
|
| 495 |
+
candidates: list[tuple[int, int]] = []
|
| 496 |
+
for spec in mid_specs:
|
| 497 |
+
for position in token_index.find_token_positions_by_hash(*spec):
|
| 498 |
+
candidates.append((position, spec[0]))
|
| 499 |
+
return candidates
|
| 500 |
+
|
| 501 |
+
|
| 502 |
+
def _span_has_mid(
|
| 503 |
+
mid_candidates: list[tuple[int, int]],
|
| 504 |
+
start_idx: int,
|
| 505 |
+
end_idx: int,
|
| 506 |
+
) -> bool:
|
| 507 |
+
for mid_start, mid_len in mid_candidates:
|
| 508 |
+
mid_end = mid_start + mid_len - 1
|
| 509 |
+
if mid_start >= start_idx and mid_end <= end_idx:
|
| 510 |
+
return True
|
| 511 |
+
return False
|
| 512 |
+
|
| 513 |
+
|
| 514 |
+
def _find_span_by_anchors(
|
| 515 |
+
token_index: TokenIndex,
|
| 516 |
+
head_spec: Optional[tuple[int, int, str]],
|
| 517 |
+
tail_spec: Optional[tuple[int, int, str]],
|
| 518 |
+
expected_len: int,
|
| 519 |
+
mid_specs: Optional[list[tuple[int, int, str]]] = None,
|
| 520 |
+
*,
|
| 521 |
+
require_mid: bool = False,
|
| 522 |
+
) -> Optional[tuple[int, int]]:
|
| 523 |
+
if head_spec is None or tail_spec is None:
|
| 524 |
+
return None
|
| 525 |
+
head_positions = token_index.find_token_positions_by_hash(*head_spec)
|
| 526 |
+
tail_positions = token_index.find_token_positions_by_hash(*tail_spec)
|
| 527 |
+
if not head_positions or not tail_positions:
|
| 528 |
+
return None
|
| 529 |
+
mid_candidates = []
|
| 530 |
+
if require_mid:
|
| 531 |
+
mid_candidates = _build_mid_candidates(token_index, mid_specs)
|
| 532 |
+
if not mid_candidates:
|
| 533 |
+
return None
|
| 534 |
+
best = None
|
| 535 |
+
best_diff = None
|
| 536 |
+
tol = max(5, int(expected_len * 0.3)) if expected_len else 10
|
| 537 |
+
min_len = max(1, expected_len // 2) if expected_len else 1
|
| 538 |
+
max_len = expected_len * 3 if expected_len else None
|
| 539 |
+
for head_start in head_positions:
|
| 540 |
+
head_end = head_start + head_spec[0] - 1
|
| 541 |
+
for tail_start in tail_positions:
|
| 542 |
+
tail_end = tail_start + tail_spec[0] - 1
|
| 543 |
+
if tail_end < head_end:
|
| 544 |
+
continue
|
| 545 |
+
length = tail_end - head_start + 1
|
| 546 |
+
if mid_candidates and not _span_has_mid(
|
| 547 |
+
mid_candidates,
|
| 548 |
+
head_start,
|
| 549 |
+
tail_end,
|
| 550 |
+
):
|
| 551 |
+
continue
|
| 552 |
+
if expected_len:
|
| 553 |
+
if length < min_len or (max_len and length > max_len):
|
| 554 |
+
continue
|
| 555 |
+
if length < expected_len - tol or length > expected_len + tol:
|
| 556 |
+
continue
|
| 557 |
+
diff = abs(length - expected_len)
|
| 558 |
+
else:
|
| 559 |
+
diff = length
|
| 560 |
+
if best_diff is None or diff < best_diff:
|
| 561 |
+
best_diff = diff
|
| 562 |
+
best = (head_start, tail_end)
|
| 563 |
+
if best is None and expected_len:
|
| 564 |
+
for head_start in head_positions:
|
| 565 |
+
head_end = head_start + head_spec[0] - 1
|
| 566 |
+
for tail_start in tail_positions:
|
| 567 |
+
tail_end = tail_start + tail_spec[0] - 1
|
| 568 |
+
if tail_end < head_end:
|
| 569 |
+
continue
|
| 570 |
+
length = tail_end - head_start + 1
|
| 571 |
+
if mid_candidates and not _span_has_mid(
|
| 572 |
+
mid_candidates,
|
| 573 |
+
head_start,
|
| 574 |
+
tail_end,
|
| 575 |
+
):
|
| 576 |
+
continue
|
| 577 |
+
if length < min_len or (max_len and length > max_len):
|
| 578 |
+
continue
|
| 579 |
+
diff = abs(length - expected_len)
|
| 580 |
+
if best_diff is None or diff < best_diff:
|
| 581 |
+
best_diff = diff
|
| 582 |
+
best = (head_start, tail_end)
|
| 583 |
+
if best is None:
|
| 584 |
+
return None
|
| 585 |
+
start_char = token_index.spans[best[0]][0]
|
| 586 |
+
end_char = token_index.spans[best[1]][1]
|
| 587 |
+
return start_char, end_char
|
| 588 |
+
|
| 589 |
+
|
| 590 |
+
def _find_span_from_anchor(
|
| 591 |
+
token_index: TokenIndex,
|
| 592 |
+
anchor_spec: Optional[tuple[int, int, str]],
|
| 593 |
+
expected_len: int,
|
| 594 |
+
position: str,
|
| 595 |
+
mid_specs: Optional[list[tuple[int, int, str]]] = None,
|
| 596 |
+
*,
|
| 597 |
+
require_mid: bool = False,
|
| 598 |
+
require_unique: bool = False,
|
| 599 |
+
) -> Optional[tuple[int, int]]:
|
| 600 |
+
if anchor_spec is None or expected_len <= 0:
|
| 601 |
+
return None
|
| 602 |
+
positions = token_index.find_token_positions_by_hash(*anchor_spec)
|
| 603 |
+
if not positions:
|
| 604 |
+
return None
|
| 605 |
+
if require_unique and len(positions) != 1:
|
| 606 |
+
return None
|
| 607 |
+
mid_candidates = []
|
| 608 |
+
if require_mid:
|
| 609 |
+
mid_candidates = _build_mid_candidates(token_index, mid_specs)
|
| 610 |
+
if not mid_candidates:
|
| 611 |
+
return None
|
| 612 |
+
if position == "tail":
|
| 613 |
+
positions = list(reversed(positions))
|
| 614 |
+
for anchor_start in positions:
|
| 615 |
+
if position == "head":
|
| 616 |
+
start_idx = anchor_start
|
| 617 |
+
end_idx = anchor_start + expected_len - 1
|
| 618 |
+
else:
|
| 619 |
+
anchor_end = anchor_start + anchor_spec[0] - 1
|
| 620 |
+
end_idx = anchor_end
|
| 621 |
+
start_idx = end_idx - expected_len + 1
|
| 622 |
+
if start_idx < 0 or end_idx >= len(token_index.tokens):
|
| 623 |
+
continue
|
| 624 |
+
if mid_candidates and not _span_has_mid(
|
| 625 |
+
mid_candidates,
|
| 626 |
+
start_idx,
|
| 627 |
+
end_idx,
|
| 628 |
+
):
|
| 629 |
+
continue
|
| 630 |
+
start_char = token_index.spans[start_idx][0]
|
| 631 |
+
end_char = token_index.spans[end_idx][1]
|
| 632 |
+
return start_char, end_char
|
| 633 |
+
return None
|
| 634 |
+
|
| 635 |
+
|
| 636 |
+
def _pick_best_doc(
    token_indexes: Dict[Path, TokenIndex],
    hash_specs: list[tuple[int, int, str]],
) -> Optional[Path]:
    best_path = None
    best_score = 0
    tie = False
    for path, token_index in token_indexes.items():
        score = _match_doc_by_hash(token_index, hash_specs)
        if score > best_score:
            best_score = score
            best_path = path
            tie = False
        elif score == best_score and score > 0:
            tie = True
    if best_score == 0 or tie:
        return None
    return best_path

def _pick_best_pdf(
    pdf_paths: list[Path],
    pdf_token_cache: Dict[Path, TokenIndex],
    hash_specs: list[tuple[int, int, str]],
) -> Optional[Path]:
    best_path = None
    best_score = 0
    tie = False
    for path in pdf_paths:
        if path not in pdf_token_cache:
            pdf_text = extract_text_from_pdf(path)
            pdf_token_cache[path] = TokenIndex.from_text(pdf_text)
        token_index = pdf_token_cache[path]
        score = _match_doc_by_hash(token_index, hash_specs)
        if score > best_score:
            best_score = score
            best_path = path
            tie = False
        elif score == best_score and score > 0:
            tie = True
    if best_score == 0 or tie:
        return None
    return best_path

def _run_grobid(input_dir: Path, output_dir: Path, config: Path) -> None:
    output_dir.mkdir(parents=True, exist_ok=True)
    cmd = [
        sys.executable,
        "scripts/pdf_to_grobid.py",
        "--input_folder",
        str(input_dir),
        "--output_folder",
        str(output_dir),
    ]
    if config and config.exists():
        cmd.extend(["--config", str(config)])
    subprocess.run(cmd, check=True)

def main() -> None:
    parser = argparse.ArgumentParser(
        description="Hydrate DefExtra legal CSV using user-provided PDFs.",
    )
    parser.add_argument(
        "--legal-csv",
        type=Path,
        default=Path("results/paper_results/defextra_legal.csv"),
        help="Legal DefExtra CSV with markers.",
    )
    parser.add_argument(
        "--pdf-dir",
        type=Path,
        required=True,
        help="Directory with user-provided PDFs.",
    )
    parser.add_argument(
        "--grobid-out",
        type=Path,
        default=Path("outputs/defextra_grobid"),
        help="Output directory for GROBID TEI files.",
    )
    parser.add_argument(
        "--grobid-config",
        type=Path,
        default=Path("config.json"),
        help="Optional GROBID client config path.",
    )
    parser.add_argument(
        "--skip-grobid",
        action="store_true",
        help="Skip running GROBID (expects TEI files already present).",
    )
    parser.add_argument(
        "--output-csv",
        type=Path,
        default=Path("results/paper_results/defextra_hydrated.csv"),
        help="Output hydrated CSV with excerpts.",
    )
    parser.add_argument(
        "--report",
        type=Path,
        default=None,
        help="Optional report path for missing matches.",
    )
    parser.add_argument(
        "--require-complete",
        action="store_true",
        help="Exit with error if any definition/context is missing.",
    )
    parser.add_argument(
        "--filter-to-pdfs",
        action="store_true",
        help="Only process rows that can be mapped to a provided PDF.",
    )
    parser.add_argument(
        "--allow-pdf-hash-mismatch",
        action="store_true",
        help=(
            "Continue when a PDF filename matches but hash markers do not. "
            "By default, such PDFs are skipped and reported."
        ),
    )
    args = parser.parse_args()

    if not args.legal_csv.exists():
        raise SystemExit(f"Legal CSV not found: {args.legal_csv}")
    if not args.pdf_dir.exists():
        raise SystemExit(f"PDF dir not found: {args.pdf_dir}")

    if not args.skip_grobid:
        try:
            _run_grobid(args.pdf_dir, args.grobid_out, args.grobid_config)
        except subprocess.CalledProcessError:
            raise SystemExit(
                "GROBID processing failed. Ensure the GROBID server is running "
                "and reachable (default: http://localhost:8070), or supply "
                "--grobid-config with the correct server URL.",
            )

    tei_index = build_tei_index([args.grobid_out])
    doi_index, arxiv_index = _build_meta_index(tei_index)
    pdf_index = _build_pdf_index(args.pdf_dir)
    doc_cache: Dict[str, Optional[DocIndex]] = {}
    token_cache: Dict[str, Optional[TokenIndex]] = {}
    tei_path_cache: Dict[str, Optional[Path]] = {}
    pdf_token_cache: Dict[Path, TokenIndex] = {}
    pdf_failed: set[Path] = set()

    with args.legal_csv.open("r", encoding="utf-8", newline="") as handle:
        reader = csv.DictReader(handle)
        legal_rows = list(reader)

    paper_hashes: Dict[str, list[tuple[int, int, str]]] = {}
    title_to_ids: Dict[str, list[str]] = {}
    id_to_row: Dict[str, dict] = {}
    for row in legal_rows:
        paper_id = (row.get("paper_id") or "").strip()
        if not paper_id:
            continue
        if paper_id not in id_to_row:
            id_to_row[paper_id] = row
            title_key = _normalize_title(row.get("paper_title") or "")
            if title_key:
                title_to_ids.setdefault(title_key, []).append(paper_id)
        specs: list[tuple[int, int, str]] = []
        for prefix in ("definition", "context"):
            spec = _select_hash_specs(row, prefix)
            if spec is None:
                continue
            token_count, _, _ = spec
            if token_count >= 5:
                specs.append(spec)
        if specs:
            paper_hashes.setdefault(paper_id, []).extend(specs)
        for prefix in ("definition", "context"):
            for position in ("head", "mid", "tail"):
                for spec in _select_anchor_spec_list(row, prefix, position):
                    if spec and spec[0] >= 5:
                        paper_hashes.setdefault(paper_id, []).append(spec)

    tei_token_indexes: Dict[Path, TokenIndex] = {}
    allowed_stems = set(pdf_index.keys()) if pdf_index else set()
    for tei_path in tei_index.values():
        if allowed_stems:
            stem = _tei_stem(tei_path)
            stem_norm = normalize_paper_id(stem)
            stem_stripped = (
                stem[len("paper_") :] if stem.startswith("paper_") else stem
            )
            if (
                stem not in allowed_stems
                and stem_norm not in allowed_stems
                and stem_stripped not in allowed_stems
            ):
                continue
        try:
            doc_index = DocIndex.from_tei(tei_path)
        except Exception:
            continue
        tei_token_indexes[tei_path] = TokenIndex.from_text(doc_index.doc_text)

    output_rows = []
    missing_papers = set()
    missing_defs = 0
    missing_ctxs = 0
    hydrated_from_pdf = 0
    hydrated_from_anchor = 0
    pdf_hash_mismatches: list[dict[str, str]] = []
    pdf_hash_mismatch_seen: set[tuple[str, str]] = set()
    missing_def_rows: list[dict] = []
    missing_ctx_rows: list[dict] = []

    for row in legal_rows:
        paper_id = (row.get("paper_id") or "").strip()
        doi = (row.get("paper_doi") or "").strip()
        arxiv = (row.get("paper_arxiv") or "").strip()

        if paper_id not in doc_cache:
            tei_path = _resolve_tei_path(
                paper_id,
                doi,
                arxiv,
                tei_index,
                doi_index,
                arxiv_index,
            )
            if tei_path is None:
                hash_specs = paper_hashes.get(paper_id, [])
                if hash_specs:
                    tei_path = _pick_best_doc(
                        tei_token_indexes,
                        hash_specs,
                    )
            if tei_path is None:
                doc_cache[paper_id] = None
                token_cache[paper_id] = None
                tei_path_cache[paper_id] = None
            else:
                doc_index = DocIndex.from_tei(tei_path)
                doc_cache[paper_id] = doc_index
                token_cache[paper_id] = TokenIndex.from_text(
                    doc_index.doc_text,
                )
                tei_path_cache[paper_id] = tei_path

        doc_index = doc_cache.get(paper_id)
        tei_token_index = token_cache.get(paper_id)
        definition = ""
        context = ""
        pdf_token_index: Optional[TokenIndex] = None
        pdf_path = None
        tei_path = tei_path_cache.get(paper_id)
        pdf_direct_match = False
        if tei_path is not None:
            stem = _tei_stem(tei_path)
            pdf_path = pdf_index.get(stem) or pdf_index.get(
                normalize_paper_id(stem),
            )
            if pdf_path is not None:
                pdf_direct_match = True
        if pdf_path is None:
            for candidate in _candidate_ids(paper_id, doi, arxiv):
                pdf_path = pdf_index.get(candidate)
                if pdf_path:
                    pdf_direct_match = True
                    break
        if pdf_path is None:
            title_key = _normalize_title(row.get("paper_title") or "")
            for other_id in title_to_ids.get(title_key, []):
                if other_id == paper_id:
                    continue
                other_row = id_to_row.get(other_id, {})
                other_doi = (other_row.get("paper_doi") or "").strip()
                other_arxiv = (other_row.get("paper_arxiv") or "").strip()
                for candidate in _candidate_ids(
                    other_id,
                    other_doi,
                    other_arxiv,
                ):
                    pdf_path = pdf_index.get(candidate)
                    if pdf_path:
                        pdf_direct_match = True
                        break
                if pdf_path:
                    break
        hash_specs = paper_hashes.get(paper_id, [])
        if pdf_path is not None and hash_specs:
            if pdf_path not in pdf_token_cache:
                pdf_text = extract_text_from_pdf(pdf_path)
                pdf_token_cache[pdf_path] = TokenIndex.from_text(pdf_text)
            pdf_token_index = pdf_token_cache[pdf_path]
            if _match_doc_by_hash(pdf_token_index, hash_specs) == 0:
                mismatch_key = (paper_id, str(pdf_path))
                if mismatch_key not in pdf_hash_mismatch_seen:
                    pdf_hash_mismatch_seen.add(mismatch_key)
                    pdf_hash_mismatches.append(
                        {"paper_id": paper_id, "pdf": str(pdf_path)},
                    )
                if not args.allow_pdf_hash_mismatch:
                    if pdf_direct_match:
                        print(
                            f"Warning: PDF hash markers did not match for {pdf_path.name}; "
                            "skipping PDF (use --allow-pdf-hash-mismatch to override).",
                            file=sys.stderr,
                        )
                    pdf_path = None
                    pdf_token_index = None
                    pdf_direct_match = False
                else:
                    print(
                        f"Warning: PDF hash markers did not match for {pdf_path.name}; "
                        "continuing with direct filename match.",
                        file=sys.stderr,
                    )
        if pdf_path is None and hash_specs:
            pdf_paths = list({p for p in pdf_index.values()})
            if pdf_paths:
                pdf_path = _pick_best_pdf(
                    pdf_paths,
                    pdf_token_cache,
                    hash_specs,
                )

        if args.filter_to_pdfs and pdf_path is None:
            continue

        if pdf_path is not None and pdf_path not in pdf_token_cache:
            try:
                pdf_text = extract_text_from_pdf(pdf_path)
                pdf_token_cache[pdf_path] = TokenIndex.from_text(pdf_text)
            except Exception as exc:
                pdf_token_cache[pdf_path] = TokenIndex.from_text("")
                if pdf_path not in pdf_failed:
                    pdf_failed.add(pdf_path)
                    print(
                        f"Warning: PDF text extraction failed for {pdf_path.name}: {exc}",
                        file=sys.stderr,
                    )
        pdf_token_index = pdf_token_cache.get(pdf_path) if pdf_path else None

        if doc_index is None:
            missing_papers.add(paper_id)
        else:
            def_start = row.get("definition_char_start") or ""
            def_end = row.get("definition_char_end") or ""
            ctx_start = row.get("context_char_start") or ""
            ctx_end = row.get("context_char_end") or ""

            if not definition and pdf_token_index:
                span = _find_pdf_hash_span(row, pdf_token_index, "definition")
                if span:
                    definition = _extract_with_trailing_punct(
                        pdf_token_index.doc_text,
                        span[0],
                        span[1],
                    )
                    hydrated_from_pdf += 1
            if not definition and tei_token_index:
                spec = _select_hash_specs(row, "definition")
                if spec:
                    span = tei_token_index.find_span_by_hash(*spec)
                    if span:
                        definition = _extract_with_trailing_punct(
                            doc_index.doc_text,
                            span[0],
                            span[1],
                        )
                if not definition and not (def_start and def_end):
                    head_specs = _select_anchor_spec_list(
                        row,
                        "definition",
                        "head",
                    )
                    mid_specs = _select_anchor_spec_list(
                        row,
                        "definition",
                        "mid",
                    )
                    tail_specs = _select_anchor_spec_list(
                        row,
                        "definition",
                        "tail",
                    )
                    expected_len = int(row.get("definition_token_count") or 0)
                    for head_spec in head_specs or [None]:
                        for tail_spec in tail_specs or [None]:
                            if head_spec is None or tail_spec is None:
                                continue
                            span = _find_span_by_anchors(
                                tei_token_index,
                                head_spec,
                                tail_spec,
                                expected_len,
                                mid_specs,
                                require_mid=True,
                            )
                            if span is None:
                                span = _find_span_by_anchors(
                                    tei_token_index,
                                    head_spec,
                                    tail_spec,
                                    expected_len,
                                )
                            if span:
                                definition = _extract_with_trailing_punct(
                                    doc_index.doc_text,
                                    span[0],
                                    span[1],
                                )
                                break
                        if definition:
                            break
                if not definition:
                    head_specs = _select_anchor_spec_list(
                        row,
                        "definition",
                        "head",
                    )
                    mid_specs = _select_anchor_spec_list(
                        row,
                        "definition",
                        "mid",
                    )
                    tail_specs = _select_anchor_spec_list(
                        row,
                        "definition",
                        "tail",
                    )
                    expected_len = int(row.get("definition_token_count") or 0)
                    span = None
                    for head_spec in head_specs:
                        if mid_specs:
                            span = _find_span_from_anchor(
                                tei_token_index,
                                head_spec,
                                expected_len,
                                "head",
                                mid_specs,
                                require_mid=True,
                            )
                            if span is None:
                                span = _find_span_from_anchor(
                                    tei_token_index,
                                    head_spec,
                                    expected_len,
                                    "head",
                                    mid_specs,
                                    require_unique=True,
                                )
                        else:
                            span = _find_span_from_anchor(
                                tei_token_index,
                                head_spec,
                                expected_len,
                                "head",
                                mid_specs,
                            )
                        if span:
                            break
                    if span is None:
                        for tail_spec in tail_specs:
                            if mid_specs:
                                span = _find_span_from_anchor(
                                    tei_token_index,
                                    tail_spec,
                                    expected_len,
                                    "tail",
                                    mid_specs,
                                    require_mid=True,
                                )
                                if span is None:
                                    span = _find_span_from_anchor(
                                        tei_token_index,
                                        tail_spec,
                                        expected_len,
                                        "tail",
                                        mid_specs,
                                        require_unique=True,
                                    )
                            elif span is None:
                                span = _find_span_from_anchor(
                                    tei_token_index,
                                    tail_spec,
                                    expected_len,
                                    "tail",
                                    mid_specs,
                                )
                            if span:
                                break
                    if span:
                        definition = _extract_with_trailing_punct(
                            doc_index.doc_text,
                            span[0],
                            span[1],
                        )
            if not definition and def_start and def_end:
                definition = _extract_with_trailing_punct(
                    doc_index.doc_text,
                    int(def_start),
                    int(def_end),
                )

            if not context and pdf_token_index:
                span = _find_pdf_hash_span(row, pdf_token_index, "context")
                if span:
                    context = _extract_with_trailing_punct(
                        pdf_token_index.doc_text,
                        span[0],
                        span[1],
                    )
                    hydrated_from_pdf += 1

            if not context and tei_token_index:
                spec = _select_hash_specs(row, "context")
                if spec:
                    span = tei_token_index.find_span_by_hash(*spec)
                    if span:
                        context = _extract_with_trailing_punct(
                            doc_index.doc_text,
                            span[0],
                            span[1],
                        )
                if not context and not (ctx_start and ctx_end):
                    head_specs = _select_anchor_spec_list(
                        row,
                        "context",
                        "head",
                    )
                    mid_specs = _select_anchor_spec_list(
                        row,
                        "context",
                        "mid",
                    )
                    tail_specs = _select_anchor_spec_list(
                        row,
                        "context",
                        "tail",
                    )
                    expected_len = int(row.get("context_token_count") or 0)
                    for head_spec in head_specs or [None]:
                        for tail_spec in tail_specs or [None]:
                            if head_spec is None or tail_spec is None:
                                continue
                            span = _find_span_by_anchors(
                                tei_token_index,
                                head_spec,
                                tail_spec,
                                expected_len,
                                mid_specs,
                                require_mid=True,
                            )
                            if span is None:
                                span = _find_span_by_anchors(
                                    tei_token_index,
                                    head_spec,
                                    tail_spec,
                                    expected_len,
                                )
                            if span:
                                context = _extract_with_trailing_punct(
                                    doc_index.doc_text,
                                    span[0],
                                    span[1],
                                )
                                break
                        if context:
                            break
                if not context:
                    head_specs = _select_anchor_spec_list(
                        row,
                        "context",
                        "head",
                    )
                    mid_specs = _select_anchor_spec_list(
                        row,
                        "context",
                        "mid",
                    )
                    tail_specs = _select_anchor_spec_list(
                        row,
                        "context",
                        "tail",
                    )
                    expected_len = int(row.get("context_token_count") or 0)
                    span = None
                    for head_spec in head_specs:
                        if mid_specs:
                            span = _find_span_from_anchor(
                                tei_token_index,
                                head_spec,
                                expected_len,
                                "head",
                                mid_specs,
                                require_mid=True,
                            )
                            if span is None:
                                span = _find_span_from_anchor(
                                    tei_token_index,
                                    head_spec,
                                    expected_len,
                                    "head",
                                    mid_specs,
                                    require_unique=True,
                                )
                        else:
                            span = _find_span_from_anchor(
                                tei_token_index,
                                head_spec,
                                expected_len,
                                "head",
                                mid_specs,
                            )
                        if span:
                            break
                    if span is None:
                        for tail_spec in tail_specs:
                            if mid_specs:
                                span = _find_span_from_anchor(
                                    tei_token_index,
                                    tail_spec,
                                    expected_len,
                                    "tail",
                                    mid_specs,
                                    require_mid=True,
                                )
                                if span is None:
                                    span = _find_span_from_anchor(
                                        tei_token_index,
                                        tail_spec,
                                        expected_len,
                                        "tail",
                                        mid_specs,
                                        require_unique=True,
                                    )
                            elif span is None:
                                span = _find_span_from_anchor(
                                    tei_token_index,
                                    tail_spec,
                                    expected_len,
                                    "tail",
                                    mid_specs,
                                )
                            if span:
                                break
                    if span:
                        context = _extract_with_trailing_punct(
                            doc_index.doc_text,
                            span[0],
                            span[1],
                        )
            if not context and ctx_start and ctx_end:
                context = _extract_with_trailing_punct(
                    doc_index.doc_text,
                    int(ctx_start),
                    int(ctx_end),
                )

        if not definition and pdf_path is not None and pdf_token_index:
            spec = _select_hash_specs(row, "definition")
            if spec:
                span = pdf_token_index.find_span_by_hash(*spec)
                if span:
                    definition = _extract_with_trailing_punct(
                        pdf_token_index.doc_text,
                        span[0],
                        span[1],
                    )
                    hydrated_from_pdf += 1
            if not definition:
                head_specs = _select_anchor_spec_list(
                    row,
                    "definition",
                    "head",
                )
                mid_specs = _select_anchor_spec_list(
                    row,
                    "definition",
                    "mid",
                )
                tail_specs = _select_anchor_spec_list(
                    row,
                    "definition",
                    "tail",
                )
                expected_len = int(row.get("definition_token_count") or 0)
                for head_spec in head_specs or [None]:
                    for tail_spec in tail_specs or [None]:
                        if head_spec is None or tail_spec is None:
                            continue
                        span = _find_span_by_anchors(
                            pdf_token_index,
                            head_spec,
                            tail_spec,
                            expected_len,
                            mid_specs,
                            require_mid=True,
                        )
                        if span is None:
                            span = _find_span_by_anchors(
                                pdf_token_index,
                                head_spec,
                                tail_spec,
                                expected_len,
                            )
                        if span:
                            definition = _extract_with_trailing_punct(
                                pdf_token_index.doc_text,
                                span[0],
                                span[1],
                            )
                            hydrated_from_pdf += 1
                            hydrated_from_anchor += 1
                            break
                    if definition:
                        break
            if not definition:
                head_specs = _select_anchor_spec_list(
                    row,
                    "definition",
                    "head",
                )
                mid_specs = _select_anchor_spec_list(
                    row,
                    "definition",
                    "mid",
                )
                tail_specs = _select_anchor_spec_list(
                    row,
                    "definition",
                    "tail",
                )
                expected_len = int(row.get("definition_token_count") or 0)
                span = None
                for head_spec in head_specs:
                    if mid_specs:
                        span = _find_span_from_anchor(
                            pdf_token_index,
                            head_spec,
                            expected_len,
                            "head",
                            mid_specs,
                            require_mid=True,
                        )
                        if span is None:
                            span = _find_span_from_anchor(
                                pdf_token_index,
                                head_spec,
                                expected_len,
                                "head",
                                mid_specs,
                                require_unique=True,
                            )
                    else:
                        span = _find_span_from_anchor(
                            pdf_token_index,
                            head_spec,
                            expected_len,
                            "head",
                            mid_specs,
                        )
                    if span:
                        break
                if span is None:
                    for tail_spec in tail_specs:
                        if mid_specs:
                            span = _find_span_from_anchor(
                                pdf_token_index,
                                tail_spec,
                                expected_len,
                                "tail",
                                mid_specs,
                                require_mid=True,
                            )
                            if span is None:
                                span = _find_span_from_anchor(
                                    pdf_token_index,
                                    tail_spec,
                                    expected_len,
                                    "tail",
                                    mid_specs,
                                    require_unique=True,
                                )
                        else:
                            span = _find_span_from_anchor(
                                pdf_token_index,
                                tail_spec,
                                expected_len,
                                "tail",
                                mid_specs,
                            )
                        if span:
                            break
                if span:
                    definition = _extract_with_trailing_punct(
                        pdf_token_index.doc_text,
                        span[0],
                        span[1],
                    )
                    hydrated_from_pdf += 1
                    hydrated_from_anchor += 1

        if not definition:
            missing_defs += 1
            missing_def_rows.append(
                {
                    "paper_id": paper_id,
                    "concept": row.get("concept", ""),
                    "reason": "missing_definition",
                },
            )

        if not context and pdf_path is not None and pdf_token_index:
            spec = _select_hash_specs(row, "context")
            if spec:
                span = pdf_token_index.find_span_by_hash(*spec)
                if span:
                    context = _extract_with_trailing_punct(
                        pdf_token_index.doc_text,
                        span[0],
                        span[1],
                    )
                    hydrated_from_pdf += 1
            if not context:
                head_specs = _select_anchor_spec_list(
                    row,
                    "context",
                    "head",
                )
                mid_specs = _select_anchor_spec_list(
                    row,
                    "context",
                    "mid",
                )
                tail_specs = _select_anchor_spec_list(
                    row,
                    "context",
                    "tail",
                )
                expected_len = int(row.get("context_token_count") or 0)
                for head_spec in head_specs or [None]:
                    for tail_spec in tail_specs or [None]:
                        if head_spec is None or tail_spec is None:
                            continue
                        span = _find_span_by_anchors(
                            pdf_token_index,
                            head_spec,
                            tail_spec,
                            expected_len,
                            mid_specs,
                            require_mid=True,
                        )
                        if span is None:
                            span = _find_span_by_anchors(
                                pdf_token_index,
                                head_spec,
                                tail_spec,
                                expected_len,
                            )
                        if span:
                            context = _extract_with_trailing_punct(
                                pdf_token_index.doc_text,
                                span[0],
                                span[1],
                            )
                            hydrated_from_pdf += 1
                            hydrated_from_anchor += 1
                            break
                    if context:
                        break
            if not context:
                head_specs = _select_anchor_spec_list(
                    row,
                    "context",
                    "head",
                )
                mid_specs = _select_anchor_spec_list(
                    row,
                    "context",
                    "mid",
                )
                tail_specs = _select_anchor_spec_list(
                    row,
                    "context",
                    "tail",
                )
                expected_len = int(row.get("context_token_count") or 0)
                span = None
                for head_spec in head_specs:
                    if mid_specs:
                        span = _find_span_from_anchor(
                            pdf_token_index,
                            head_spec,
                            expected_len,
                            "head",
                            mid_specs,
                            require_mid=True,
                        )
                        if span is None:
                            span = _find_span_from_anchor(
                                pdf_token_index,
                                head_spec,
                                expected_len,
                                "head",
                                mid_specs,
                                require_unique=True,
                            )
                    else:
                        span = _find_span_from_anchor(
                            pdf_token_index,
                            head_spec,
                            expected_len,
                            "head",
                            mid_specs,
                        )
                    if span:
                        break
                if span is None:
                    for tail_spec in tail_specs:
                        if mid_specs:
                            span = _find_span_from_anchor(
                                pdf_token_index,
                                tail_spec,
                                expected_len,
                                "tail",
                                mid_specs,
                                require_mid=True,
                            )
                            if span is None:
                                span = _find_span_from_anchor(
                                    pdf_token_index,
                                    tail_spec,
                                    expected_len,
                                    "tail",
                                    mid_specs,
                                    require_unique=True,
                                )
                        else:
                            span = _find_span_from_anchor(
                                pdf_token_index,
                                tail_spec,
                                expected_len,
                                "tail",
                                mid_specs,
                            )
                        if span:
                            break
                if span:
                    context = _extract_with_trailing_punct(
                        pdf_token_index.doc_text,
                        span[0],
                        span[1],
                    )
                    hydrated_from_pdf += 1
                    hydrated_from_anchor += 1

        if not context:
            missing_ctxs += 1
            missing_ctx_rows.append(
                {
                    "paper_id": paper_id,
                    "concept": row.get("concept", ""),
                    "reason": "missing_context",
                },
            )

        def_preserve_lines = _row_flag(
            row,
            "definition_preserve_linebreaks",
        )
        ctx_preserve_lines = _row_flag(
            row,
            "context_preserve_linebreaks",
        )
        def_preserve_hyph = _row_flag(
            row,
            "definition_preserve_hyphenation",
        )
        ctx_preserve_hyph = _row_flag(
            row,
            "context_preserve_hyphenation",
        )
        def_keep_bracket = _row_flag(
            row,
            "definition_has_bracket_citation",
            True,
        )
        def_keep_paren = _row_flag(
            row,
            "definition_has_paren_citation",
            True,
        )
        def_split_letter_digit = not _row_flag(
            row,
            "definition_has_letter_digit",
        )
        ctx_keep_bracket = _row_flag(
            row,
            "context_has_bracket_citation",
            True,
        )
        ctx_keep_paren = _row_flag(
            row,
            "context_has_paren_citation",
            True,
        )
        ctx_split_letter_digit = not _row_flag(
            row,
            "context_has_letter_digit",
        )

        output_rows.append(
            {
                "paper_id": paper_id,
                "paper_title": row.get("paper_title", ""),
                "paper_doi": doi,
                "paper_arxiv": arxiv,
                "concept": row.get("concept", ""),
                "definition": _ensure_trailing_punct(
                    _postprocess_text(
                        definition,
                        int(row.get("definition_token_count") or 0),
                        def_preserve_lines,
                        def_preserve_hyph,
                        def_keep_bracket,
                        def_keep_paren,
                        def_split_letter_digit,
                    ),
                    row.get("definition_end_punct", ""),
                ),
                "context": _ensure_trailing_punct(
                    _postprocess_text(
                        context,
                        int(row.get("context_token_count") or 0),
                        ctx_preserve_lines,
                        ctx_preserve_hyph,
                        ctx_keep_bracket,
                        ctx_keep_paren,
                        ctx_split_letter_digit,
                    ),
                    row.get("context_end_punct", ""),
                ),
                "definition_type": row.get("definition_type", ""),
                "source_file": row.get("source_file", ""),
                "is_out_of_domain": row.get("is_out_of_domain", ""),
            },
        )

    args.output_csv.parent.mkdir(parents=True, exist_ok=True)
    with args.output_csv.open("w", encoding="utf-8", newline="") as handle:
        fieldnames = [
            "paper_id",
            "paper_title",
            "paper_doi",
            "paper_arxiv",
            "concept",
            "definition",
            "context",
            "definition_type",
            "source_file",
            "is_out_of_domain",
        ]
|
| 1698 |
+
writer = csv.DictWriter(handle, fieldnames=fieldnames)
|
| 1699 |
+
writer.writeheader()
|
| 1700 |
+
for row in output_rows:
|
| 1701 |
+
writer.writerow(row)
|
| 1702 |
+
|
| 1703 |
+
print(f"Wrote hydrated CSV to {args.output_csv}")
|
| 1704 |
+
print(f"Missing TEI for {len(missing_papers)} papers")
|
| 1705 |
+
print(f"Missing definition spans: {missing_defs}")
|
| 1706 |
+
print(f"Missing context spans: {missing_ctxs}")
|
| 1707 |
+
print(
|
| 1708 |
+
"Hydrated from PDF fallback: "
|
| 1709 |
+
f"{hydrated_from_pdf} (anchors used: {hydrated_from_anchor})",
|
| 1710 |
+
)
|
| 1711 |
+
if args.report is not None:
|
| 1712 |
+
report_lines = []
|
| 1713 |
+
if missing_papers:
|
| 1714 |
+
report_lines.append("Missing papers:")
|
| 1715 |
+
for paper_id in sorted(missing_papers):
|
| 1716 |
+
report_lines.append(f"- {paper_id}")
|
| 1717 |
+
report_lines.append("")
|
| 1718 |
+
if pdf_hash_mismatches:
|
| 1719 |
+
report_lines.append(
|
| 1720 |
+
"PDF hash mismatches (filename matched, hash did not):",
|
| 1721 |
+
)
|
| 1722 |
+
for item in pdf_hash_mismatches:
|
| 1723 |
+
report_lines.append(
|
| 1724 |
+
f"- {item['paper_id']} | {item['pdf']}",
|
| 1725 |
+
)
|
| 1726 |
+
report_lines.append(
|
| 1727 |
+
"Note: rerun with --allow-pdf-hash-mismatch to continue with these PDFs.",
|
| 1728 |
+
)
|
| 1729 |
+
report_lines.append("")
|
| 1730 |
+
report_lines.append(f"Missing definition spans: {missing_defs}")
|
| 1731 |
+
report_lines.append(f"Missing context spans: {missing_ctxs}")
|
| 1732 |
+
if missing_def_rows:
|
| 1733 |
+
report_lines.append("")
|
| 1734 |
+
report_lines.append("Missing definitions (paper_id | concept):")
|
| 1735 |
+
for item in missing_def_rows:
|
| 1736 |
+
report_lines.append(
|
| 1737 |
+
f"- {item['paper_id']} | {item['concept']}",
|
| 1738 |
+
)
|
| 1739 |
+
if missing_ctx_rows:
|
| 1740 |
+
report_lines.append("")
|
| 1741 |
+
report_lines.append("Missing contexts (paper_id | concept):")
|
| 1742 |
+
for item in missing_ctx_rows:
|
| 1743 |
+
report_lines.append(
|
| 1744 |
+
f"- {item['paper_id']} | {item['concept']}",
|
| 1745 |
+
)
|
| 1746 |
+
args.report.parent.mkdir(parents=True, exist_ok=True)
|
| 1747 |
+
args.report.write_text(
|
| 1748 |
+
"\n".join(report_lines) + "\n",
|
| 1749 |
+
encoding="utf-8",
|
| 1750 |
+
)
|
| 1751 |
+
print(f"Wrote report to {args.report}")
|
| 1752 |
+
if args.require_complete and (missing_defs or missing_ctxs):
|
| 1753 |
+
raise SystemExit(
|
| 1754 |
+
"Hydration incomplete: "
|
| 1755 |
+
f"{missing_defs} definitions, {missing_ctxs} contexts missing.",
|
| 1756 |
+
)
|
| 1757 |
+
|
| 1758 |
+
|
| 1759 |
+
if __name__ == "__main__":
|
| 1760 |
+
main()
|
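The PDF fallback above recovers each excerpt by hashing short head/tail token windows and searching for a span of roughly the expected length between matching anchors. A minimal, self-contained sketch of that idea (toy helpers with assumed names, not the repo's actual `TokenIndex`/`hash_token_sequence` API):

```python
from __future__ import annotations

import hashlib


def token_hash(tokens: list[str]) -> str:
    # Stand-in for the repo's hash_token_sequence: a stable digest of a token window.
    return hashlib.sha256(" ".join(tokens).encode("utf-8")).hexdigest()


def find_span(
    doc_tokens: list[str],
    head: list[str],
    tail: list[str],
    expected_len: int,
) -> tuple[int, int] | None:
    # Find (start, end) token indices of a span whose opening tokens hash like
    # `head` and whose closing tokens hash like `tail`, within a length tolerance.
    h, t = token_hash(head), token_hash(tail)
    n_h, n_t = len(head), len(tail)
    for i in range(len(doc_tokens) - n_h + 1):
        if token_hash(doc_tokens[i : i + n_h]) != h:
            continue
        for j in range(i + n_h - 1, len(doc_tokens)):
            if j - n_t + 1 < i:
                continue
            if token_hash(doc_tokens[j - n_t + 1 : j + 1]) != t:
                continue
            if abs((j - i + 1) - expected_len) <= max(1, expected_len // 3):
                return i, j
    return None


doc = "a b the term means a reusable unit of code end".split()
span = find_span(doc, head=["the", "term"], tail=["of", "code"], expected_len=8)
# span → (2, 9): the excerpt is doc[2:10]
```

Because only hashes of short windows are shipped, the excerpt itself never leaves the user's machine until it is reconstructed from their own PDF.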
scripts/list_defextra_pdfs.py
ADDED
|
@@ -0,0 +1,197 @@
from __future__ import annotations

import argparse
import csv
import re
import sys
from pathlib import Path
from typing import Dict, Iterable, List

try:
    from scripts.defextra_markers import (
        normalize_paper_id,
        normalize_arxiv,
        normalize_doi,
    )
    from scripts.defextra_pdf_aliases import candidate_pdf_aliases
except ModuleNotFoundError as exc:
    if exc.name != "scripts":
        raise
    PROJECT_ROOT = Path(__file__).resolve().parent.parent
    if str(PROJECT_ROOT) not in sys.path:
        sys.path.insert(0, str(PROJECT_ROOT))
    from scripts.defextra_markers import (
        normalize_paper_id,
        normalize_arxiv,
        normalize_doi,
    )
    from scripts.defextra_pdf_aliases import candidate_pdf_aliases


S2_ID_RE = re.compile(r"^[0-9a-f]{40}$", re.IGNORECASE)


def _safe_join(values: Iterable[str]) -> str:
    return ";".join(v for v in values if v)


def _semanticscholar_url(paper_id: str) -> str:
    if S2_ID_RE.match(paper_id):
        return f"https://www.semanticscholar.org/paper/{paper_id}"
    return ""


def _acl_url(paper_id: str) -> str:
    if paper_id.startswith("https://aclanthology.org/"):
        return paper_id
    if re.match(r"^[0-9]{4}\.[a-z-]+\.[0-9]+$", paper_id, re.IGNORECASE):
        return f"https://aclanthology.org/{paper_id}"
    return ""


def _doi_url(doi: str, paper_id: str) -> str:
    if doi:
        return f"https://doi.org/{normalize_doi(doi)}"
    if paper_id.startswith("10."):
        return f"https://doi.org/{normalize_doi(paper_id)}"
    if "doi.org/" in paper_id:
        return f"https://doi.org/{normalize_doi(paper_id)}"
    return ""


def _arxiv_url(arxiv: str, paper_id: str) -> str:
    if arxiv:
        return f"https://arxiv.org/abs/{normalize_arxiv(arxiv)}"
    match = re.search(r"arxiv\.org/(abs|pdf)/([^?#]+)", paper_id)
    if match:
        return f"https://arxiv.org/abs/{match.group(2).replace('.pdf', '')}"
    return ""


def _collect_papers(rows: List[Dict[str, str]]) -> Dict[str, Dict[str, str]]:
    papers: Dict[str, Dict[str, str]] = {}
    for row in rows:
        paper_id = (row.get("paper_id") or "").strip()
        if not paper_id:
            continue
        record = papers.setdefault(
            paper_id,
            {
                "paper_id": paper_id,
                "paper_title": (row.get("paper_title") or "").strip(),
                "paper_doi": (row.get("paper_doi") or "").strip(),
                "paper_arxiv": (row.get("paper_arxiv") or "").strip(),
            },
        )
        if not record["paper_title"] and row.get("paper_title"):
            record["paper_title"] = row.get("paper_title", "").strip()
        if not record["paper_doi"] and row.get("paper_doi"):
            record["paper_doi"] = row.get("paper_doi", "").strip()
        if not record["paper_arxiv"] and row.get("paper_arxiv"):
            record["paper_arxiv"] = row.get("paper_arxiv", "").strip()
    return papers


def main() -> None:
    parser = argparse.ArgumentParser(
        description=(
            "List required PDFs for DefExtra and generate helper links."
        ),
    )
    parser.add_argument(
        "--legal-csv",
        type=Path,
        default=Path("data/defextra_legal.csv"),
        help="Path to DefExtra legal CSV.",
    )
    parser.add_argument(
        "--output-csv",
        type=Path,
        default=None,
        help="Optional output CSV path.",
    )
    parser.add_argument(
        "--output-md",
        type=Path,
        default=None,
        help="Optional output Markdown list path.",
    )
    parser.add_argument(
        "--limit",
        type=int,
        default=10,
        help="How many entries to print to stdout (0 = none).",
    )
    args = parser.parse_args()

    if not args.legal_csv.exists():
        raise SystemExit(f"Legal CSV not found: {args.legal_csv}")

    with args.legal_csv.open("r", encoding="utf-8", newline="") as handle:
        rows = list(csv.DictReader(handle))

    papers = _collect_papers(rows)
    output_rows: List[Dict[str, str]] = []

    for paper_id, record in sorted(papers.items()):
        doi = record.get("paper_doi", "")
        arxiv = record.get("paper_arxiv", "")
        normalized_id = normalize_paper_id(paper_id)
        aliases = candidate_pdf_aliases(paper_id, doi, arxiv)
        output_rows.append(
            {
                "paper_id": paper_id,
                "normalized_id": normalized_id,
                "paper_title": record.get("paper_title", ""),
                "paper_doi": doi,
                "paper_arxiv": arxiv,
                "preferred_pdf_name": f"{normalized_id}.pdf",
                "alias_pdf_names": _safe_join(
                    f"{alias}.pdf" for alias in aliases
                ),
                "url_semanticscholar": _semanticscholar_url(paper_id),
                "url_doi": _doi_url(doi, paper_id),
                "url_arxiv": _arxiv_url(arxiv, paper_id),
                "url_acl": _acl_url(paper_id),
            },
        )

    if args.output_csv:
        args.output_csv.parent.mkdir(parents=True, exist_ok=True)
        with args.output_csv.open("w", encoding="utf-8", newline="") as handle:
            fieldnames = list(output_rows[0].keys()) if output_rows else []
            writer = csv.DictWriter(handle, fieldnames=fieldnames)
            writer.writeheader()
            for row in output_rows:
                writer.writerow(row)
        print(f"Wrote {len(output_rows)} rows to {args.output_csv}")

    if args.output_md:
        lines = ["# DefExtra required PDFs", ""]
        for row in output_rows:
            line = f"- {row['paper_id']} — {row['paper_title']}"
            links = [
                row["url_semanticscholar"],
                row["url_doi"],
                row["url_arxiv"],
                row["url_acl"],
            ]
            links = [link for link in links if link]
            if links:
                line += " (" + ", ".join(links) + ")"
            lines.append(line)
        args.output_md.parent.mkdir(parents=True, exist_ok=True)
        args.output_md.write_text("\n".join(lines) + "\n", encoding="utf-8")
        print(f"Wrote {len(output_rows)} rows to {args.output_md}")

    if args.limit > 0:
        for row in output_rows[: args.limit]:
            print(
                f"{row['paper_id']} | {row['preferred_pdf_name']} | "
                f"{row['url_semanticscholar'] or row['url_doi'] or row['url_arxiv'] or row['url_acl']}",
            )


if __name__ == "__main__":
    main()
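The URL helpers in `list_defextra_pdfs.py` follow a simple precedence: an explicit DOI field wins, then a `paper_id` that is itself a DOI or a `doi.org` URL. A runnable sketch of that precedence, using a toy normalizer in place of `normalize_doi` (which the repo imports from `defextra_markers`):

```python
from __future__ import annotations


def _strip_resolver(value: str) -> str:
    # Toy stand-in for defextra_markers.normalize_doi: drop any
    # leading "https://doi.org/" resolver prefix and whitespace.
    return value.split("doi.org/")[-1].strip()


def doi_url(doi: str, paper_id: str) -> str:
    # Same precedence as _doi_url above: explicit DOI field first,
    # then a paper_id that is itself a DOI or a doi.org URL.
    if doi:
        return f"https://doi.org/{_strip_resolver(doi)}"
    if paper_id.startswith("10.") or "doi.org/" in paper_id:
        return f"https://doi.org/{_strip_resolver(paper_id)}"
    return ""
```

Returning `""` when no identifier applies lets the Markdown writer above filter out empty links with a single list comprehension.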
scripts/pdf_to_grobid.py
ADDED
|
@@ -0,0 +1,213 @@
#!/usr/bin/env python3

import argparse
import json
import os
import shutil
import sys
import tempfile
from pathlib import Path

try:
    from grobid_client.grobid_client import GrobidClient
except ImportError:
    print("Error: grobid-client-python is not installed.")
    print("Please install it with: pip install grobid-client-python")
    sys.exit(1)


def process_pdfs_with_grobid(
    input_folder: str,
    output_folder: str,
    config_path: str | None = None,
):
    """
    Process all PDF files in the input folder with GROBID and save TEI XML to the output folder.

    Args:
        input_folder: Path to folder containing PDF files
        output_folder: Path to folder where XML files will be saved
        config_path: Path to GROBID client configuration file
    """
    # Validate input folder
    if not os.path.exists(input_folder):
        print(f"Error: Input folder '{input_folder}' does not exist.")
        sys.exit(1)

    if not os.path.isdir(input_folder):
        print(f"Error: '{input_folder}' is not a directory.")
        sys.exit(1)

    # Create output folder if it doesn't exist
    os.makedirs(output_folder, exist_ok=True)

    # Find all PDF files in input folder
    pdf_files = list(Path(input_folder).glob("*.pdf"))
    pdf_files.extend(list(Path(input_folder).glob("*.PDF")))
    valid_pdf_files = []
    for pdf_file in pdf_files:
        if not pdf_file.exists():
            print(f"Warning: skipping missing/broken PDF path: {pdf_file}")
            continue
        valid_pdf_files.append(pdf_file)
    pdf_files = valid_pdf_files

    if not pdf_files:
        print(f"No PDF files found in '{input_folder}'")
        return

    print(f"Found {len(pdf_files)} PDF file(s) to process")

    # Initialize GROBID client
    temp_config_path = None
    try:
        if config_path and os.path.exists(config_path):
            client = GrobidClient(config_path=config_path)
        else:
            default_config = {
                "grobid_server": "http://localhost:8070",
                "batch_size": 1000,
                "sleep_time": 5,
                "timeout": 60,
            }
            temp_handle = tempfile.NamedTemporaryFile(
                mode="w",
                suffix=".json",
                delete=False,
            )
            json.dump(default_config, temp_handle)
            temp_handle.close()
            temp_config_path = temp_handle.name
            client = GrobidClient(config_path=temp_config_path)
    except Exception as e:
        print(f"Error initializing GROBID client: {e}")
        if config_path:
            print(f"Make sure the config file exists at '{config_path}'")
        else:
            print("Provide --config config.json if required.")
        sys.exit(1)

    # Create temporary directories for processing
    temp_input_dir = tempfile.mkdtemp()
    temp_output_dir = tempfile.mkdtemp()

    try:
        # Copy PDFs to temporary input directory
        print("\nPreparing files for processing...")
        for pdf_file in pdf_files:
            pdf_filename = os.path.basename(pdf_file)
            temp_pdf_path = os.path.join(temp_input_dir, pdf_filename)
            shutil.copy2(pdf_file, temp_pdf_path)
            print(f"  - {pdf_filename}")

        # Process with GROBID
        print(f"\nProcessing {len(pdf_files)} PDF(s) with GROBID...")
        print(
            "This may take a while depending on the number and size of files...",
        )

        client.process(
            "processFulltextDocument",
            temp_input_dir,
            output=temp_output_dir,
        )

        # Copy results to output folder
        print("\nSaving results...")
        processed_count = 0
        failed_count = 0

        for pdf_file in pdf_files:
            pdf_filename = os.path.basename(pdf_file)

            # Expected output filename from GROBID
            output_filename = (
                f"{os.path.splitext(pdf_filename)[0]}.grobid.tei.xml"
            )
            temp_output_path = os.path.join(temp_output_dir, output_filename)

            if os.path.exists(temp_output_path):
                # Copy to final output directory
                final_output_path = os.path.join(
                    output_folder,
                    output_filename,
                )
                shutil.copy2(temp_output_path, final_output_path)
                print(f"  ✓ {pdf_filename} -> {output_filename}")
                processed_count += 1
            else:
                print(
                    f"  ✗ {pdf_filename} - Processing failed (no output generated)",
                )
                failed_count += 1

        # Summary
        print(f"\n{'=' * 60}")
        print("Processing complete!")
        print(f"Successfully processed: {processed_count}/{len(pdf_files)}")
        if failed_count > 0:
            print(f"Failed: {failed_count}/{len(pdf_files)}")
        print(f"Output saved to: {os.path.abspath(output_folder)}")
        print(f"{'=' * 60}")

    except Exception as e:
        print(f"\nError during processing: {e}")
        sys.exit(1)

    finally:
        # Clean up temporary directories
        shutil.rmtree(temp_input_dir, ignore_errors=True)
        shutil.rmtree(temp_output_dir, ignore_errors=True)
        if temp_config_path and os.path.exists(temp_config_path):
            try:
                os.remove(temp_config_path)
            except OSError:
                pass


def main():
    """Main entry point for the script."""
    parser = argparse.ArgumentParser(
        description="Process PDF files with GROBID and extract TEI XML",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  python scripts/pdf_to_grobid.py --input_folder ./pdfs --output_folder ./output
  python scripts/pdf_to_grobid.py --input_folder ./pdfs --output_folder ./output --config ./my_config.json

Note: Make sure the GROBID server is running before executing this script.
See: https://grobid.readthedocs.io/en/latest/Grobid-service/
""",
    )

    parser.add_argument(
        "--input_folder",
        required=True,
        help="Path to folder containing PDF files to process",
    )

    parser.add_argument(
        "--output_folder",
        required=True,
        help="Path to folder where XML output files will be saved",
    )

    parser.add_argument(
        "--config",
        default=None,
        help="Path to GROBID client configuration file (optional).",
    )

    args = parser.parse_args()

    print("=" * 60)
    print("GROBID Batch PDF Processor")
    print("=" * 60)

    process_pdfs_with_grobid(
        args.input_folder,
        args.output_folder,
        args.config,
    )


if __name__ == "__main__":
    main()
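`pdf_to_grobid.py` decides success per file by checking whether GROBID wrote its conventional output name next to each input. That name mapping can be expressed as a one-liner, shown here as a sketch:

```python
from pathlib import Path


def expected_tei_name(pdf_name: str) -> str:
    # Output-name convention the script checks for after client.process():
    # <pdf stem> + ".grobid.tei.xml"
    return f"{Path(pdf_name).stem}.grobid.tei.xml"
```

Dotted stems (common for ACL Anthology IDs) are preserved, since only the final `.pdf`/`.PDF` suffix is stripped.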
scripts/prepare_defextra_legal.py
ADDED
|
@@ -0,0 +1,877 @@
| 1 |
+
from __future__ import annotations
|
| 2 |
+
|
| 3 |
+
# ruff: noqa: E402
|
| 4 |
+
|
| 5 |
+
import argparse
|
| 6 |
+
import csv
|
| 7 |
+
import re
|
| 8 |
+
import sys
|
| 9 |
+
from pathlib import Path
|
| 10 |
+
from typing import Dict, Optional
|
| 11 |
+
|
| 12 |
+
try:
|
| 13 |
+
from scripts.defextra_markers import (
|
| 14 |
+
ANCHOR_WINDOW,
|
| 15 |
+
ANCHOR_WINDOW_ALT,
|
| 16 |
+
DocIndex,
|
| 17 |
+
HASH_VERSION,
|
| 18 |
+
TokenIndex,
|
| 19 |
+
build_tei_index,
|
| 20 |
+
hash_token_sequence,
|
| 21 |
+
tokenize_text,
|
| 22 |
+
normalize_paper_id,
|
| 23 |
+
)
|
| 24 |
+
except ModuleNotFoundError as exc:
|
| 25 |
+
if exc.name != "scripts":
|
| 26 |
+
raise
|
| 27 |
+
PROJECT_ROOT = Path(__file__).resolve().parent.parent
|
| 28 |
+
if str(PROJECT_ROOT) not in sys.path:
|
| 29 |
+
sys.path.insert(0, str(PROJECT_ROOT))
|
| 30 |
+
from scripts.defextra_markers import (
|
| 31 |
+
ANCHOR_WINDOW,
|
| 32 |
+
ANCHOR_WINDOW_ALT,
|
| 33 |
+
DocIndex,
|
| 34 |
+
HASH_VERSION,
|
| 35 |
+
TokenIndex,
|
| 36 |
+
build_tei_index,
|
| 37 |
+
hash_token_sequence,
|
| 38 |
+
tokenize_text,
|
| 39 |
+
normalize_paper_id,
|
| 40 |
+
)
|
| 41 |
+
|
| 42 |
+
TRAILING_PUNCT = {".", ",", ";", ":", "?", "!"}
|
| 43 |
+
TRAILING_QUOTES = {"'", '"', "”", "’", ")", "]"}
|
| 44 |
+
CITATION_BRACKET_RE = re.compile(r"\[[0-9][0-9,;\s\-–]*\]")
|
| 45 |
+
CITATION_PAREN_RE = re.compile(r"\([^)]*\d{4}[^)]*\)")
|
| 46 |
+
|
| 47 |
+
|
| 48 |
+
def _extract_end_punct(text: str) -> str:
|
| 49 |
+
trimmed = text.rstrip()
|
| 50 |
+
if not trimmed:
|
| 51 |
+
return ""
|
| 52 |
+
i = len(trimmed) - 1
|
| 53 |
+
while i >= 0 and trimmed[i] in TRAILING_QUOTES:
|
| 54 |
+
i -= 1
|
| 55 |
+
if i >= 0 and trimmed[i] in TRAILING_PUNCT:
|
| 56 |
+
return trimmed[i]
|
| 57 |
+
return ""
|
| 58 |
+
|
| 59 |
+
|
| 60 |
+
def _spec(
|
| 61 |
+
count: str,
|
| 62 |
+
hash64: str,
|
| 63 |
+
sha: str,
|
| 64 |
+
) -> Optional[tuple[int, int, str]]:
|
| 65 |
+
if not count or not hash64 or not sha:
|
| 66 |
+
return None
|
| 67 |
+
try:
|
| 68 |
+
return int(count), int(hash64), sha
|
| 69 |
+
except ValueError:
|
| 70 |
+
return None
|
| 71 |
+
|
| 72 |
+
|
| 73 |
+
def _build_mid_candidates(
|
| 74 |
+
token_index: TokenIndex,
|
| 75 |
+
mid_specs: list[tuple[int, int, str]],
|
| 76 |
+
) -> list[tuple[int, int]]:
|
| 77 |
+
if not mid_specs:
|
| 78 |
+
return []
|
| 79 |
+
candidates: list[tuple[int, int]] = []
|
| 80 |
+
for spec in mid_specs:
|
| 81 |
+
for position in token_index.find_token_positions_by_hash(*spec):
|
| 82 |
+
candidates.append((position, spec[0]))
|
| 83 |
+
return candidates
|
| 84 |
+
|
| 85 |
+
|
| 86 |
+
def _span_has_mid(
|
| 87 |
+
mid_candidates: list[tuple[int, int]],
|
| 88 |
+
start_idx: int,
|
| 89 |
+
end_idx: int,
|
| 90 |
+
) -> bool:
|
| 91 |
+
for mid_start, mid_len in mid_candidates:
|
| 92 |
+
mid_end = mid_start + mid_len - 1
|
| 93 |
+
if mid_start >= start_idx and mid_end <= end_idx:
|
| 94 |
+
return True
|
| 95 |
+
return False
|
| 96 |
+
|
| 97 |
+
|
def _find_span_by_anchors(
    token_index: TokenIndex,
    head_spec: Optional[tuple[int, int, str]],
    tail_spec: Optional[tuple[int, int, str]],
    expected_len: int,
    mid_specs: Optional[list[tuple[int, int, str]]] = None,
) -> Optional[tuple[int, int]]:
    if head_spec is None or tail_spec is None or expected_len <= 0:
        return None
    head_positions = token_index.find_token_positions_by_hash(*head_spec)
    tail_positions = token_index.find_token_positions_by_hash(*tail_spec)
    if not head_positions or not tail_positions:
        return None
    mid_candidates = _build_mid_candidates(token_index, mid_specs or [])
    best = None
    best_diff = None
    tol = max(5, int(expected_len * 0.3))
    min_len = max(1, expected_len // 2)
    max_len = expected_len * 3
    for head_start in head_positions:
        head_end = head_start + head_spec[0] - 1
        for tail_start in tail_positions:
            tail_end = tail_start + tail_spec[0] - 1
            if tail_end < head_end:
                continue
            if mid_candidates and not _span_has_mid(
                mid_candidates,
                head_start,
                tail_end,
            ):
                continue
            length = tail_end - head_start + 1
            if length < min_len or length > max_len:
                continue
            if length < expected_len - tol or length > expected_len + tol:
                continue
            diff = abs(length - expected_len)
            if best_diff is None or diff < best_diff:
                best_diff = diff
                best = (head_start, tail_end)
    if best is None:
        return None
    start_char = token_index.spans[best[0]][0]
    end_char = token_index.spans[best[1]][1]
    return start_char, end_char

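`_find_span_by_anchors` only accepts a head/tail pairing when the resulting span length is within 30% of the expected token count (with a floor of 5 tokens of slack) and inside the hard bounds `[expected_len // 2, expected_len * 3]`. This is a small sketch of that acceptance gate in isolation:

```python
def length_acceptable(length: int, expected_len: int) -> bool:
    """Mirror of the span-length gate used when pairing head/tail anchors."""
    tol = max(5, int(expected_len * 0.3))      # 30% tolerance, at least 5 tokens
    min_len = max(1, expected_len // 2)        # hard lower bound
    max_len = expected_len * 3                 # hard upper bound
    if length < min_len or length > max_len:
        return False
    return expected_len - tol <= length <= expected_len + tol


print(length_acceptable(20, 20))   # True: exact match
print(length_acceptable(13, 20))   # False: below expected_len - tol (tol = 6)
print(length_acceptable(100, 20))  # False: above the 3x hard cap
```

Among the candidates that pass this gate, the function keeps the span whose length is closest to `expected_len`, so spurious anchor collisions far from the expected size are discarded early.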
def _find_span_from_anchor(
    token_index: TokenIndex,
    anchor_spec: Optional[tuple[int, int, str]],
    expected_len: int,
    position: str,
    mid_specs: Optional[list[tuple[int, int, str]]] = None,
) -> Optional[tuple[int, int]]:
    if anchor_spec is None or expected_len <= 0:
        return None
    positions = token_index.find_token_positions_by_hash(*anchor_spec)
    if not positions:
        return None
    mid_candidates = _build_mid_candidates(token_index, mid_specs or [])
    if position == "tail":
        positions = list(reversed(positions))
    for anchor_start in positions:
        if position == "head":
            start_idx = anchor_start
            end_idx = anchor_start + expected_len - 1
        else:
            anchor_end = anchor_start + anchor_spec[0] - 1
            end_idx = anchor_end
            start_idx = end_idx - expected_len + 1
        if start_idx < 0 or end_idx >= len(token_index.tokens):
            continue
        if mid_candidates and not _span_has_mid(
            mid_candidates,
            start_idx,
            end_idx,
        ):
            continue
        start_char = token_index.spans[start_idx][0]
        end_char = token_index.spans[end_idx][1]
        return start_char, end_char
    return None

def _candidate_ids(paper_id: str, doi: str, arxiv: str) -> list[str]:
    candidates = [paper_id, normalize_paper_id(paper_id)]
    if doi:
        candidates.append(doi)
    if arxiv:
        candidates.append(arxiv)
    seen = set()
    ordered = []
    for item in candidates:
        value = item.strip()
        if value and value not in seen:
            seen.add(value)
            ordered.append(value)
    return ordered

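The tail of `_candidate_ids` is an order-preserving dedup over possibly-empty identifiers: the first non-blank occurrence of each value wins, so the raw `paper_id` always outranks its normalized form and the DOI/arXiv fallbacks. The same idiom in isolation:

```python
def ordered_unique(values):
    """Keep the first occurrence of each non-empty value, preserving order."""
    seen = set()
    ordered = []
    for item in values:
        value = item.strip()
        if value and value not in seen:  # skip blanks and repeats
            seen.add(value)
            ordered.append(value)
    return ordered


print(ordered_unique(["paper_1", "10.1000/xyz", "", "paper_1"]))
# ['paper_1', '10.1000/xyz']
```

Preserving insertion order matters here because the resolver probes the TEI index in this exact priority order.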
def _resolve_tei_path(
    paper_id: str,
    doi: str,
    arxiv: str,
    tei_index: Dict[str, Path],
) -> Optional[Path]:
    for candidate in _candidate_ids(paper_id, doi, arxiv):
        if candidate in tei_index:
            return tei_index[candidate]
        if candidate.startswith("paper_"):
            stripped = candidate[len("paper_") :]
            if stripped in tei_index:
                return tei_index[stripped]
    return None

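`_resolve_tei_path` tries each candidate ID verbatim and then, for IDs shaped like `paper_<id>`, retries with the prefix stripped. A minimal sketch of that two-step lookup against a plain dict index (the index contents here are hypothetical):

```python
from pathlib import Path
from typing import Optional


def resolve(candidate: str, index: dict) -> Optional[Path]:
    """Probe the index with the candidate, then with its 'paper_' prefix removed."""
    if candidate in index:
        return index[candidate]
    if candidate.startswith("paper_"):
        stripped = candidate[len("paper_"):]
        if stripped in index:
            return index[stripped]
    return None


tei_index = {"2101.00001": Path("tei/2101.00001.tei.xml")}  # hypothetical entry
print(resolve("paper_2101.00001", tei_index))  # matched via the stripped prefix
```

This keeps the TEI index free of `paper_`-prefixed duplicate keys while still accepting row IDs that carry the prefix.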
def main() -> None:
    parser = argparse.ArgumentParser(
        description="Build legal DefExtra CSV with GROBID markers.",
    )
    parser.add_argument(
        "--input-csv",
        type=Path,
        default=Path("results/paper_results/defextra_hf.csv"),
        help="Source DefExtra CSV (contains excerpts).",
    )
    parser.add_argument(
        "--output-csv",
        type=Path,
        default=Path("results/paper_results/defextra_legal.csv"),
        help="Output legal DefExtra CSV (no excerpts).",
    )
    parser.add_argument(
        "--tei-dir",
        type=Path,
        nargs="+",
        default=[
            Path("ManualPDFsGROBID/manual_pdfs_grobid"),
            Path("ManualPDFsGROBID/new_grobid"),
        ],
        help="Directories with GROBID TEI files.",
    )
    parser.add_argument(
        "--report",
        type=Path,
        default=None,
        help="Optional report path for missing TEI spans.",
    )
    args = parser.parse_args()

    if not args.input_csv.exists():
        raise SystemExit(f"Input CSV not found: {args.input_csv}")

    tei_index = build_tei_index(args.tei_dir)
    doc_cache: Dict[str, Optional[DocIndex]] = {}
    token_cache: Dict[str, Optional[TokenIndex]] = {}

    rows = []
    with args.input_csv.open("r", encoding="utf-8", newline="") as handle:
        reader = csv.DictReader(handle)
        for row in reader:
            rows.append(row)

    output_rows = []
    missing_tei = 0
    missing_def = 0
    missing_ctx = 0
    missing_def_rows: list[dict] = []
    missing_ctx_rows: list[dict] = []
    def_hash_available = 0
    ctx_hash_available = 0
    def_anchor_available = 0
    ctx_anchor_available = 0

    for row in rows:
        paper_id = (row.get("paper_id") or "").strip()
        doi = (row.get("paper_doi") or "").strip()
        arxiv = (row.get("paper_arxiv") or "").strip()
        definition = row.get("definition") or ""
        context = row.get("context") or ""

        if paper_id not in doc_cache:
            tei_path = _resolve_tei_path(paper_id, doi, arxiv, tei_index)
            if tei_path is None:
                doc_cache[paper_id] = None
                token_cache[paper_id] = None
            else:
                doc_index = DocIndex.from_tei(tei_path)
                doc_cache[paper_id] = doc_index
                token_cache[paper_id] = TokenIndex.from_text(
                    doc_index.doc_text,
                )

        doc_index = doc_cache.get(paper_id)
        def_start = def_end = ctx_start = ctx_end = ""
        def_match = ctx_match = "missing"
        def_hash64 = def_sha = def_tok_len = ""
        ctx_hash64 = ctx_sha = ctx_tok_len = ""
        def_head_hash64 = def_head_sha = def_head_len = ""
        def_tail_hash64 = def_tail_sha = def_tail_len = ""
        def_mid_hash64 = def_mid_sha = def_mid_len = ""
        def_head_alt_hash64 = def_head_alt_sha = def_head_alt_len = ""
        def_tail_alt_hash64 = def_tail_alt_sha = def_tail_alt_len = ""
        def_mid_alt_hash64 = def_mid_alt_sha = def_mid_alt_len = ""
        ctx_head_hash64 = ctx_head_sha = ctx_head_len = ""
        ctx_tail_hash64 = ctx_tail_sha = ctx_tail_len = ""
        ctx_mid_hash64 = ctx_mid_sha = ctx_mid_len = ""
        ctx_head_alt_hash64 = ctx_head_alt_sha = ctx_head_alt_len = ""
        ctx_tail_alt_hash64 = ctx_tail_alt_sha = ctx_tail_alt_len = ""
        ctx_mid_alt_hash64 = ctx_mid_alt_sha = ctx_mid_alt_len = ""
        def_anchor_has = False
        ctx_anchor_has = False
        def_preserve_linebreaks = "true" if "\n" in definition else "false"
        ctx_preserve_linebreaks = "true" if "\n" in context else "false"
        def_preserve_hyphenation = (
            "true"
            if re.search(r"[A-Za-z]-\s+[A-Za-z]", definition)
            else "false"
        )
        ctx_preserve_hyphenation = (
            "true" if re.search(r"[A-Za-z]-\s+[A-Za-z]", context) else "false"
        )
        def_has_bracket_citation = (
            "true" if CITATION_BRACKET_RE.search(definition) else "false"
        )
        def_has_paren_citation = (
            "true" if CITATION_PAREN_RE.search(definition) else "false"
        )
        def_has_letter_digit = (
            "true" if re.search(r"[A-Za-z][0-9]", definition) else "false"
        )
        ctx_has_bracket_citation = (
            "true" if CITATION_BRACKET_RE.search(context) else "false"
        )
        ctx_has_paren_citation = (
            "true" if CITATION_PAREN_RE.search(context) else "false"
        )
        ctx_has_letter_digit = (
            "true" if re.search(r"[A-Za-z][0-9]", context) else "false"
        )
        def_end_punct = _extract_end_punct(definition)
        ctx_end_punct = _extract_end_punct(context)

        def_tokens, _ = tokenize_text(definition, return_spans=False)
        if def_tokens:
            h64, sha, tok_len = hash_token_sequence(def_tokens)
            def_hash64 = str(h64)
            def_sha = sha
            def_tok_len = str(tok_len)
            def_hash_available += 1
            if tok_len >= ANCHOR_WINDOW:
                h64, sha, tok_len = hash_token_sequence(
                    def_tokens[:ANCHOR_WINDOW],
                )
                def_head_hash64 = str(h64)
                def_head_sha = sha
                def_head_len = str(tok_len)
                h64, sha, tok_len = hash_token_sequence(
                    def_tokens[-ANCHOR_WINDOW:],
                )
                def_tail_hash64 = str(h64)
                def_tail_sha = sha
                def_tail_len = str(tok_len)
                mid_start = max(0, (len(def_tokens) - ANCHOR_WINDOW) // 2)
                h64, sha, tok_len = hash_token_sequence(
                    def_tokens[mid_start : mid_start + ANCHOR_WINDOW],
                )
                def_mid_hash64 = str(h64)
                def_mid_sha = sha
                def_mid_len = str(tok_len)
                def_anchor_has = True
            if tok_len >= 2:
                alt_window = (
                    ANCHOR_WINDOW_ALT
                    if tok_len >= ANCHOR_WINDOW_ALT
                    else max(2, tok_len - 1)
                )
                h64, sha, tok_len = hash_token_sequence(
                    def_tokens[:alt_window],
                )
                def_head_alt_hash64 = str(h64)
                def_head_alt_sha = sha
                def_head_alt_len = str(tok_len)
                h64, sha, tok_len = hash_token_sequence(
                    def_tokens[-alt_window:],
                )
                def_tail_alt_hash64 = str(h64)
                def_tail_alt_sha = sha
                def_tail_alt_len = str(tok_len)
                mid_start = max(0, (len(def_tokens) - alt_window) // 2)
                h64, sha, tok_len = hash_token_sequence(
                    def_tokens[mid_start : mid_start + alt_window],
                )
                def_mid_alt_hash64 = str(h64)
                def_mid_alt_sha = sha
                def_mid_alt_len = str(tok_len)
                def_anchor_has = True

        ctx_tokens, _ = tokenize_text(context, return_spans=False)
        if ctx_tokens:
            h64, sha, tok_len = hash_token_sequence(ctx_tokens)
            ctx_hash64 = str(h64)
            ctx_sha = sha
            ctx_tok_len = str(tok_len)
            ctx_hash_available += 1
            if tok_len >= ANCHOR_WINDOW:
                h64, sha, tok_len = hash_token_sequence(
                    ctx_tokens[:ANCHOR_WINDOW],
                )
                ctx_head_hash64 = str(h64)
                ctx_head_sha = sha
                ctx_head_len = str(tok_len)
                h64, sha, tok_len = hash_token_sequence(
                    ctx_tokens[-ANCHOR_WINDOW:],
                )
                ctx_tail_hash64 = str(h64)
                ctx_tail_sha = sha
                ctx_tail_len = str(tok_len)
                mid_start = max(0, (len(ctx_tokens) - ANCHOR_WINDOW) // 2)
                h64, sha, tok_len = hash_token_sequence(
                    ctx_tokens[mid_start : mid_start + ANCHOR_WINDOW],
                )
                ctx_mid_hash64 = str(h64)
                ctx_mid_sha = sha
                ctx_mid_len = str(tok_len)
                ctx_anchor_has = True
            if tok_len >= 2:
                alt_window = (
                    ANCHOR_WINDOW_ALT
                    if tok_len >= ANCHOR_WINDOW_ALT
                    else max(2, tok_len - 1)
                )
                h64, sha, tok_len = hash_token_sequence(
                    ctx_tokens[:alt_window],
                )
                ctx_head_alt_hash64 = str(h64)
                ctx_head_alt_sha = sha
                ctx_head_alt_len = str(tok_len)
                h64, sha, tok_len = hash_token_sequence(
                    ctx_tokens[-alt_window:],
                )
                ctx_tail_alt_hash64 = str(h64)
                ctx_tail_alt_sha = sha
                ctx_tail_alt_len = str(tok_len)
                mid_start = max(0, (len(ctx_tokens) - alt_window) // 2)
                h64, sha, tok_len = hash_token_sequence(
                    ctx_tokens[mid_start : mid_start + alt_window],
                )
                ctx_mid_alt_hash64 = str(h64)
                ctx_mid_alt_sha = sha
                ctx_mid_alt_len = str(tok_len)
                ctx_anchor_has = True

        if def_anchor_has:
            def_anchor_available += 1
        if ctx_anchor_has:
            ctx_anchor_available += 1

        def_head_spec = _spec(def_head_len, def_head_hash64, def_head_sha)
        def_head_alt_spec = _spec(
            def_head_alt_len,
            def_head_alt_hash64,
            def_head_alt_sha,
        )
        def_mid_spec = _spec(def_mid_len, def_mid_hash64, def_mid_sha)
        def_mid_alt_spec = _spec(
            def_mid_alt_len,
            def_mid_alt_hash64,
            def_mid_alt_sha,
        )
        def_tail_spec = _spec(def_tail_len, def_tail_hash64, def_tail_sha)
        def_tail_alt_spec = _spec(
            def_tail_alt_len,
            def_tail_alt_hash64,
            def_tail_alt_sha,
        )
        ctx_head_spec = _spec(ctx_head_len, ctx_head_hash64, ctx_head_sha)
        ctx_head_alt_spec = _spec(
            ctx_head_alt_len,
            ctx_head_alt_hash64,
            ctx_head_alt_sha,
        )
        ctx_mid_spec = _spec(ctx_mid_len, ctx_mid_hash64, ctx_mid_sha)
        ctx_mid_alt_spec = _spec(
            ctx_mid_alt_len,
            ctx_mid_alt_hash64,
            ctx_mid_alt_sha,
        )
        ctx_tail_spec = _spec(ctx_tail_len, ctx_tail_hash64, ctx_tail_sha)
        ctx_tail_alt_spec = _spec(
            ctx_tail_alt_len,
            ctx_tail_alt_hash64,
            ctx_tail_alt_sha,
        )

        token_index = token_cache.get(paper_id)
        if doc_index is None:
            missing_tei += 1
        else:
            def_span = doc_index.find_span(definition)
            if def_span is not None:
                def_start, def_end, def_match = def_span
            else:
                if def_hash64 and def_sha and def_tok_len and token_index:
                    span = token_index.find_span_by_hash(
                        int(def_tok_len),
                        int(def_hash64),
                        def_sha,
                    )
                    if span:
                        def_start, def_end = span
                        def_match = "hash"
                    else:
                        expected_len = int(def_tok_len or 0)
                        head_specs = [
                            spec
                            for spec in (def_head_spec, def_head_alt_spec)
                            if spec
                        ]
                        mid_specs = [
                            spec
                            for spec in (def_mid_spec, def_mid_alt_spec)
                            if spec
                        ]
                        tail_specs = [
                            spec
                            for spec in (def_tail_spec, def_tail_alt_spec)
                            if spec
                        ]
                        if token_index and expected_len and head_specs and tail_specs:
                            for head in head_specs:
                                for tail in tail_specs:
                                    anchor_span = _find_span_by_anchors(
                                        token_index,
                                        head,
                                        tail,
                                        expected_len,
                                        mid_specs=mid_specs,
                                    )
                                    if anchor_span:
                                        def_start, def_end = anchor_span
                                        def_match = "anchor"
                                        break
                                if def_match == "anchor":
                                    break
                        if (
                            def_match != "anchor"
                            and token_index
                            and expected_len
                        ):
                            for spec, position in (
                                (def_head_spec, "head"),
                                (def_head_alt_spec, "head"),
                                (def_mid_spec, "head"),
                                (def_mid_alt_spec, "head"),
                                (def_tail_spec, "tail"),
                                (def_tail_alt_spec, "tail"),
                            ):
                                anchor_span = _find_span_from_anchor(
                                    token_index,
                                    spec,
                                    expected_len,
                                    position,
                                    mid_specs=mid_specs,
                                )
                                if anchor_span:
                                    def_start, def_end = anchor_span
                                    def_match = "anchor"
                                    break
                        if def_match != "anchor":
                            missing_def += 1
                            missing_def_rows.append(
                                {
                                    "paper_id": paper_id,
                                    "concept": row.get("concept", ""),
                                    "reason": "missing_definition_span",
                                },
                            )
                else:
                    missing_def += 1
                    missing_def_rows.append(
                        {
                            "paper_id": paper_id,
                            "concept": row.get("concept", ""),
                            "reason": "missing_definition_span",
                        },
                    )

            ctx_span = doc_index.find_span(context)
            if ctx_span is not None:
                ctx_start, ctx_end, ctx_match = ctx_span
            else:
                if ctx_hash64 and ctx_sha and ctx_tok_len and token_index:
                    span = token_index.find_span_by_hash(
                        int(ctx_tok_len),
                        int(ctx_hash64),
                        ctx_sha,
                    )
                    if span:
                        ctx_start, ctx_end = span
                        ctx_match = "hash"
                    else:
                        expected_len = int(ctx_tok_len or 0)
                        head_specs = [
                            spec
                            for spec in (ctx_head_spec, ctx_head_alt_spec)
                            if spec
                        ]
                        mid_specs = [
                            spec
                            for spec in (ctx_mid_spec, ctx_mid_alt_spec)
                            if spec
                        ]
                        tail_specs = [
                            spec
                            for spec in (ctx_tail_spec, ctx_tail_alt_spec)
                            if spec
                        ]
                        if token_index and expected_len and head_specs and tail_specs:
                            for head in head_specs:
                                for tail in tail_specs:
                                    anchor_span = _find_span_by_anchors(
                                        token_index,
                                        head,
                                        tail,
                                        expected_len,
                                        mid_specs=mid_specs,
                                    )
                                    if anchor_span:
                                        ctx_start, ctx_end = anchor_span
                                        ctx_match = "anchor"
                                        break
                                if ctx_match == "anchor":
                                    break
                        if (
                            ctx_match != "anchor"
                            and token_index
                            and expected_len
                        ):
                            for spec, position in (
                                (ctx_head_spec, "head"),
                                (ctx_head_alt_spec, "head"),
                                (ctx_mid_spec, "head"),
                                (ctx_mid_alt_spec, "head"),
                                (ctx_tail_spec, "tail"),
                                (ctx_tail_alt_spec, "tail"),
                            ):
                                anchor_span = _find_span_from_anchor(
                                    token_index,
                                    spec,
                                    expected_len,
                                    position,
                                    mid_specs=mid_specs,
                                )
                                if anchor_span:
                                    ctx_start, ctx_end = anchor_span
                                    ctx_match = "anchor"
                                    break
                        if ctx_match != "anchor":
                            missing_ctx += 1
                            missing_ctx_rows.append(
                                {
                                    "paper_id": paper_id,
                                    "concept": row.get("concept", ""),
                                    "reason": "missing_context_span",
                                },
                            )
                else:
                    missing_ctx += 1
                    missing_ctx_rows.append(
                        {
                            "paper_id": paper_id,
                            "concept": row.get("concept", ""),
                            "reason": "missing_context_span",
                        },
                    )

        output_rows.append(
            {
                "paper_id": paper_id,
                "paper_title": row.get("paper_title", ""),
                "paper_doi": doi,
                "paper_arxiv": arxiv,
                "concept": row.get("concept", ""),
                "definition_type": row.get("definition_type", ""),
                "source_file": row.get("source_file", ""),
                "is_out_of_domain": row.get("is_out_of_domain", ""),
                "definition_preserve_linebreaks": def_preserve_linebreaks,
                "context_preserve_linebreaks": ctx_preserve_linebreaks,
                "definition_preserve_hyphenation": def_preserve_hyphenation,
                "context_preserve_hyphenation": ctx_preserve_hyphenation,
                "definition_has_bracket_citation": def_has_bracket_citation,
                "definition_has_paren_citation": def_has_paren_citation,
                "definition_has_letter_digit": def_has_letter_digit,
                "context_has_bracket_citation": ctx_has_bracket_citation,
                "context_has_paren_citation": ctx_has_paren_citation,
                "context_has_letter_digit": ctx_has_letter_digit,
                "definition_end_punct": def_end_punct,
                "context_end_punct": ctx_end_punct,
                "marker_version": "grobid_text_v1",
                "hash_version": HASH_VERSION,
                "definition_char_start": def_start,
                "definition_char_end": def_end,
                "definition_match": def_match,
                "definition_hash64": def_hash64,
                "definition_sha256": def_sha,
                "definition_token_count": def_tok_len,
                "definition_head_hash64": def_head_hash64,
                "definition_head_sha256": def_head_sha,
                "definition_head_token_count": def_head_len,
                "definition_mid_hash64": def_mid_hash64,
                "definition_mid_sha256": def_mid_sha,
                "definition_mid_token_count": def_mid_len,
                "definition_tail_hash64": def_tail_hash64,
                "definition_tail_sha256": def_tail_sha,
                "definition_tail_token_count": def_tail_len,
                "definition_head_alt_hash64": def_head_alt_hash64,
                "definition_head_alt_sha256": def_head_alt_sha,
                "definition_head_alt_token_count": def_head_alt_len,
                "definition_mid_alt_hash64": def_mid_alt_hash64,
                "definition_mid_alt_sha256": def_mid_alt_sha,
                "definition_mid_alt_token_count": def_mid_alt_len,
                "definition_tail_alt_hash64": def_tail_alt_hash64,
                "definition_tail_alt_sha256": def_tail_alt_sha,
                "definition_tail_alt_token_count": def_tail_alt_len,
                "context_char_start": ctx_start,
                "context_char_end": ctx_end,
                "context_match": ctx_match,
                "context_hash64": ctx_hash64,
                "context_sha256": ctx_sha,
                "context_token_count": ctx_tok_len,
                "context_head_hash64": ctx_head_hash64,
                "context_head_sha256": ctx_head_sha,
                "context_head_token_count": ctx_head_len,
                "context_mid_hash64": ctx_mid_hash64,
                "context_mid_sha256": ctx_mid_sha,
                "context_mid_token_count": ctx_mid_len,
                "context_tail_hash64": ctx_tail_hash64,
                "context_tail_sha256": ctx_tail_sha,
                "context_tail_token_count": ctx_tail_len,
                "context_head_alt_hash64": ctx_head_alt_hash64,
                "context_head_alt_sha256": ctx_head_alt_sha,
                "context_head_alt_token_count": ctx_head_alt_len,
                "context_mid_alt_hash64": ctx_mid_alt_hash64,
                "context_mid_alt_sha256": ctx_mid_alt_sha,
                "context_mid_alt_token_count": ctx_mid_alt_len,
                "context_tail_alt_hash64": ctx_tail_alt_hash64,
                "context_tail_alt_sha256": ctx_tail_alt_sha,
                "context_tail_alt_token_count": ctx_tail_alt_len,
            },
        )

    fieldnames = [
        "paper_id",
        "paper_title",
        "paper_doi",
        "paper_arxiv",
        "concept",
        "definition_type",
        "source_file",
        "is_out_of_domain",
        "definition_preserve_linebreaks",
        "context_preserve_linebreaks",
        "definition_preserve_hyphenation",
        "context_preserve_hyphenation",
        "definition_has_bracket_citation",
        "definition_has_paren_citation",
        "definition_has_letter_digit",
        "context_has_bracket_citation",
        "context_has_paren_citation",
        "context_has_letter_digit",
        "definition_end_punct",
        "context_end_punct",
        "marker_version",
        "hash_version",
        "definition_char_start",
        "definition_char_end",
        "definition_match",
        "definition_hash64",
        "definition_sha256",
        "definition_token_count",
        "definition_head_hash64",
        "definition_head_sha256",
        "definition_head_token_count",
        "definition_mid_hash64",
        "definition_mid_sha256",
        "definition_mid_token_count",
        "definition_tail_hash64",
        "definition_tail_sha256",
        "definition_tail_token_count",
        "definition_head_alt_hash64",
        "definition_head_alt_sha256",
        "definition_head_alt_token_count",
        "definition_mid_alt_hash64",
        "definition_mid_alt_sha256",
        "definition_mid_alt_token_count",
        "definition_tail_alt_hash64",
        "definition_tail_alt_sha256",
        "definition_tail_alt_token_count",
        "context_char_start",
        "context_char_end",
        "context_match",
        "context_hash64",
        "context_sha256",
        "context_token_count",
        "context_head_hash64",
        "context_head_sha256",
        "context_head_token_count",
        "context_mid_hash64",
        "context_mid_sha256",
        "context_mid_token_count",
        "context_tail_hash64",
        "context_tail_sha256",
        "context_tail_token_count",
        "context_head_alt_hash64",
        "context_head_alt_sha256",
        "context_head_alt_token_count",
        "context_mid_alt_hash64",
        "context_mid_alt_sha256",
        "context_mid_alt_token_count",
        "context_tail_alt_hash64",
        "context_tail_alt_sha256",
        "context_tail_alt_token_count",
    ]

    args.output_csv.parent.mkdir(parents=True, exist_ok=True)
    with args.output_csv.open("w", encoding="utf-8", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=fieldnames)
        writer.writeheader()
        for row in output_rows:
            writer.writerow(row)

    total = len(output_rows)
    print(f"Wrote {total} rows to {args.output_csv}")
    print(
        "Exact TEI spans missing - "
        f"TEI: {missing_tei}, def spans: {missing_def}, ctx spans: {missing_ctx}",
    )
    print(
        "Hash/anchor markers available - "
        f"def hash: {def_hash_available}/{total}, "
        f"ctx hash: {ctx_hash_available}/{total}, "
        f"def anchors: {def_anchor_available}/{total}, "
        f"ctx anchors: {ctx_anchor_available}/{total}",
    )
    print(
        "Note: missing exact TEI spans do not block hydration; "
        "hash/anchor markers are used as the primary fallback.",
    )

    if args.report is not None:
        report_lines = []
        if missing_tei:
            report_lines.append(f"Missing TEI: {missing_tei}")
        report_lines.append(f"Missing definition spans: {missing_def}")
        report_lines.append(f"Missing context spans: {missing_ctx}")
        if missing_def_rows:
            report_lines.append("")
            report_lines.append("Missing definitions (paper_id | concept):")
            for item in missing_def_rows:
                report_lines.append(
                    f"- {item['paper_id']} | {item['concept']}",
                )
        if missing_ctx_rows:
            report_lines.append("")
            report_lines.append("Missing contexts (paper_id | concept):")
            for item in missing_ctx_rows:
                report_lines.append(
                    f"- {item['paper_id']} | {item['concept']}",
                )
        args.report.parent.mkdir(parents=True, exist_ok=True)
        args.report.write_text(
            "\n".join(report_lines) + "\n",
            encoding="utf-8",
        )
        print(f"Wrote report to {args.report}")


if __name__ == "__main__":
    main()
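Throughout `main()`, the mid anchor is taken from a window of `ANCHOR_WINDOW` tokens centred in the excerpt via `mid_start = max(0, (len(tokens) - window) // 2)`. A quick standalone sketch of that centring (window size here is an arbitrary illustration):

```python
def mid_window(tokens, window):
    """Return the centred slice of `window` tokens; shorter input returns what fits."""
    mid_start = max(0, (len(tokens) - window) // 2)
    return tokens[mid_start : mid_start + window]


print(mid_window(list(range(10)), 4))  # [3, 4, 5, 6]
print(mid_window([1, 2], 5))           # [1, 2] -- input shorter than the window
```

The `max(0, ...)` guard makes the slice safe when the excerpt is shorter than the window, which is why the same expression can be reused for both the primary and the `alt_window` anchors.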
scripts/report_defextra_status.py ADDED
@@ -0,0 +1,249 @@
```python
"""Report DefExtra hydration coverage and missing spans.

Compares the legal (marker-only) CSV against the hydrated CSV, re-reads the
report from prepare_defextra_legal.py, and flags which missing spans belong
to papers whose PDFs were downloaded recently.
"""

from __future__ import annotations

# ruff: noqa: E402

import argparse
import csv
import re
import time
from pathlib import Path
from typing import Dict, Tuple

try:
    from scripts.defextra_markers import normalize_paper_id
except ModuleNotFoundError as exc:
    if exc.name != "scripts":
        raise
    import sys

    # Allow running the script directly (python scripts/report_defextra_status.py)
    # by putting the project root on sys.path before retrying the import.
    PROJECT_ROOT = Path(__file__).resolve().parent.parent
    if str(PROJECT_ROOT) not in sys.path:
        sys.path.insert(0, str(PROJECT_ROOT))
    from scripts.defextra_markers import normalize_paper_id


def _normalize_title(title: str) -> str:
    return " ".join((title or "").lower().split())


def _load_csv(path: Path) -> list[dict]:
    with path.open(encoding="utf-8", newline="") as handle:
        return list(csv.DictReader(handle))


def _parse_missing_report(path: Path) -> Tuple[list[str], list[str]]:
    """Parse the 'Missing definitions' / 'Missing contexts' bullet lists."""
    missing_defs: list[str] = []
    missing_ctxs: list[str] = []
    if not path.exists():
        return missing_defs, missing_ctxs
    section = None
    for line in path.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if line.startswith("Missing definitions"):
            section = "def"
            continue
        if line.startswith("Missing contexts"):
            section = "ctx"
            continue
        if not line.startswith("-"):
            continue
        item = line[1:].strip()
        if section == "def":
            missing_defs.append(item)
        elif section == "ctx":
            missing_ctxs.append(item)
    return missing_defs, missing_ctxs


def _index_recent_pdfs(
    pdf_dir: Path,
    cutoff_ts: float,
) -> Dict[str, Path]:
    """Map every plausible paper-id alias to a PDF modified after cutoff_ts."""
    index: Dict[str, Path] = {}
    if not pdf_dir.exists():
        return index
    version_re = re.compile(r"^(?P<base>.+?)(v\d+)$", re.IGNORECASE)
    arxiv_re = re.compile(r"^(?P<base>\d{4}\.\d{4,5})v\d+$", re.IGNORECASE)
    pii_re = re.compile(r"(S\d{8,})", re.IGNORECASE)
    for suffix in ("*.pdf", "*.PDF"):
        for path in pdf_dir.rglob(suffix):
            try:
                if path.stat().st_mtime < cutoff_ts:
                    continue
            except OSError:
                continue
            stem = path.stem
            keys = {stem, stem.lower(), normalize_paper_id(stem)}
            if stem.startswith("paper_"):
                stripped = stem[len("paper_") :]
                keys.update({stripped, stripped.lower(), normalize_paper_id(stripped)})
            if stem.endswith("_fixed") or stem.endswith("-fixed"):
                base = stem[: -len("_fixed")] if stem.endswith("_fixed") else stem[: -len("-fixed")]
                if base:
                    keys.update({base, base.lower(), normalize_paper_id(base)})
            # Strip arXiv version suffixes (e.g. 2101.01234v2 -> 2101.01234).
            match = arxiv_re.match(stem)
            if match:
                base = match.group("base")
                keys.update({base, base.lower(), normalize_paper_id(base)})
            match = version_re.match(stem)
            if match:
                base = match.group("base")
                keys.update({base, base.lower(), normalize_paper_id(base)})
            # Elsevier PII identifiers embedded anywhere in the filename.
            pii_match = pii_re.search(stem)
            if pii_match:
                pii = pii_match.group(1)
                keys.update({pii, pii.lower(), normalize_paper_id(pii)})
            for key in keys:
                if key:
                    index.setdefault(key, path)
    return index


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Report DefExtra hydration coverage and missing spans.",
    )
    parser.add_argument(
        "--legal-csv",
        type=Path,
        default=Path("results/paper_results/defextra_legal_tablefix.csv"),
        help="Legal CSV used for hydration.",
    )
    parser.add_argument(
        "--legal-report",
        type=Path,
        default=Path("results/paper_results/defextra_legal_tablefix_report.txt"),
        help="Report generated by prepare_defextra_legal.py.",
    )
    parser.add_argument(
        "--hydrated-csv",
        type=Path,
        default=Path("results/paper_results/defextra_hydrated_tablefix_test.csv"),
        help="Hydrated CSV from hydrate_defextra.py.",
    )
    parser.add_argument(
        "--pdf-dir",
        type=Path,
        default=Path("ManualPDFsGROBID/manual_pdfs/manual_pdfs"),
        help="Directory with user PDFs (used to tag recent downloads).",
    )
    parser.add_argument(
        "--recent-days",
        type=int,
        default=7,
        help="How many days count as 'recent' for PDF downloads.",
    )
    parser.add_argument(
        "--output",
        type=Path,
        default=None,
        help="Optional report output path.",
    )
    args = parser.parse_args()

    legal_rows = _load_csv(args.legal_csv)
    hydrated_rows = _load_csv(args.hydrated_csv) if args.hydrated_csv.exists() else []

    # Papers present in the legal CSV but absent from the hydrated output.
    ref_ids = {row.get("paper_id", "") for row in legal_rows if row.get("paper_id")}
    hyd_ids = {row.get("paper_id", "") for row in hydrated_rows if row.get("paper_id")}
    missing_papers = sorted(ref_ids - hyd_ids)

    missing_defs, missing_ctxs = _parse_missing_report(args.legal_report)

    idx = {
        (row.get("paper_id", ""), row.get("concept", "")): row
        for row in legal_rows
    }

    # Spans whose definition_type is "implicit" are expected to lack markers.
    implicit_defs = []
    implicit_ctxs = []
    for item in missing_defs:
        try:
            pid, concept = [p.strip() for p in item.split("|", 1)]
        except ValueError:
            continue
        row = idx.get((pid, concept))
        if row and (row.get("definition_type") or "").strip().lower() == "implicit":
            implicit_defs.append(item)
    for item in missing_ctxs:
        try:
            pid, concept = [p.strip() for p in item.split("|", 1)]
        except ValueError:
            continue
        row = idx.get((pid, concept))
        if row and (row.get("definition_type") or "").strip().lower() == "implicit":
            implicit_ctxs.append(item)

    cutoff_ts = time.time() - (args.recent_days * 86400)
    recent_index = _index_recent_pdfs(args.pdf_dir, cutoff_ts)

    recent_missing_defs = []
    recent_missing_ctxs = []
    recent_missing_papers = []

    for pid in missing_papers:
        if pid in recent_index or normalize_paper_id(pid) in recent_index:
            recent_missing_papers.append(pid)

    for item in missing_defs:
        try:
            pid, concept = [p.strip() for p in item.split("|", 1)]
        except ValueError:
            continue
        if pid in recent_index or normalize_paper_id(pid) in recent_index:
            recent_missing_defs.append(item)

    for item in missing_ctxs:
        try:
            pid, concept = [p.strip() for p in item.split("|", 1)]
        except ValueError:
            continue
        if pid in recent_index or normalize_paper_id(pid) in recent_index:
            recent_missing_ctxs.append(item)

    lines = []
    lines.append(f"Missing papers (no hydrated rows): {len(missing_papers)}")
    for pid in missing_papers:
        lines.append(f"- {pid}")
    lines.append("")
    lines.append(f"Missing definition spans marked implicit: {len(implicit_defs)}")
    for item in implicit_defs:
        lines.append(f"- {item}")
    lines.append("")
    lines.append(f"Missing context spans marked implicit: {len(implicit_ctxs)}")
    for item in implicit_ctxs:
        lines.append(f"- {item}")
    lines.append("")
    lines.append(
        f"Missing papers with recent PDFs (<= {args.recent_days} days): "
        f"{len(recent_missing_papers)}"
    )
    for pid in recent_missing_papers:
        lines.append(f"- {pid}")
    lines.append("")
    lines.append(
        f"Missing definition spans with recent PDFs (<= {args.recent_days} days): "
        f"{len(recent_missing_defs)}"
    )
    for item in recent_missing_defs:
        lines.append(f"- {item}")
    lines.append("")
    lines.append(
        f"Missing context spans with recent PDFs (<= {args.recent_days} days): "
        f"{len(recent_missing_ctxs)}"
    )
    for item in recent_missing_ctxs:
        lines.append(f"- {item}")
    lines.append("")

    output = "\n".join(lines) + "\n"
    if args.output is not None:
        args.output.parent.mkdir(parents=True, exist_ok=True)
        args.output.write_text(output, encoding="utf-8")
        print(f"Wrote report to {args.output}")
    else:
        print(output)


if __name__ == "__main__":
    main()
```