Commit 5cbe47d (verified) · committed by bitwise31337 · 1 parent: c9624b2

Upload folder using huggingface_hub
README.md CHANGED
@@ -67,6 +67,7 @@ uv run python scripts/hydrate_defextra.py \
 
  - [`docs/defextra_hydration.md`](docs/defextra_hydration.md) (technical details, CLI flags, markers).
  - [`docs/get_pdfs.md`](docs/get_pdfs.md) (how to find PDFs).
+ - [`docs/mismatch_examples.md`](docs/mismatch_examples.md) (real examples of mismatch types and fixes).
  - See [`docs/defextra_hydration.md`](docs/defextra_hydration.md) for technical details and [`docs/get_pdfs.md`](docs/get_pdfs.md) for PDF sources.
 
  ## Expected minor mismatches
defextra_required_pdfs.md CHANGED
@@ -61,16 +61,16 @@
  - c1e92f1be2387d14dcfaa5e1640a9939724a312a — TITLE: An empirical examination of echo chambers in US climate policy networks (https://www.semanticscholar.org/paper/c1e92f1be2387d14dcfaa5e1640a9939724a312a)
  - c84a169e6df175c4662012d3ba7dbf8fa1b5abc9 — ‘Fake news’ is the invention of a liar: How false information circulates within the hybrid news system (https://www.semanticscholar.org/paper/c84a169e6df175c4662012d3ba7dbf8fa1b5abc9, https://doi.org/10.1177/0011392119837536)
  - daw084 — Just a subtle difference? Findings from a systematic review on definitions of nutrition literacy and food literacy (https://doi.org/10.1093/heapro/daw084)
- - doi:10.1145/3677092 —
+ - doi:10.1145/3677092 —
  - dx.doi.org/https://doi.org/10.1016/j.ipm.2021.102505 — (https://doi.org/dx.doi.org/https://doi.org/10.1016/j.ipm.2021.102505)
  - eb29476dd81aefedf2896db42f039f003a0ec5bf — Organic or Local? Investigating Consumer Preference for Fresh Produce Using a Choice Experiment with Real Economic Incentives (https://www.semanticscholar.org/paper/eb29476dd81aefedf2896db42f039f003a0ec5bf)
  - frai-06-1225093 — Rationalization for explainable NLP: a survey
  - https://aclanthology.org/2021.findings-emnlp.101 — (https://aclanthology.org/2021.findings-emnlp.101)
  - https://aclanthology.org/2024.lrec-main.952 — (https://aclanthology.org/2024.lrec-main.952)
- - https://arxiv.org/abs/2312.16148 —
- - https://link.springer.com/article/10.1007/s00799-018-0261-y —
- - https://media-bias-research.org/wp-content/uploads/2024/07/Preprint_ICWSM_25_NewsUnfold —
- - https://www.sciencedirect.com/science/article/pii/S0957417423021437 —
+ - https://arxiv.org/abs/2312.16148 —
+ - https://link.springer.com/article/10.1007/s00799-018-0261-y —
+ - https://media-bias-research.org/wp-content/uploads/2024/07/Preprint_ICWSM_25_NewsUnfold —
+ - https://www.sciencedirect.com/science/article/pii/S0957417423021437 —
  - icomputing.0124 — A Survey of Task Planning with Large Language Models (https://doi.org/10.34133/icomputing.0124)
  - s10462-022-10338-7 — A survey on narrative extraction from textual data (https://doi.org/10.1007/s10462-022-10338-7)
  - s10816-016-9274-2 — Quality Assurance in Archaeological Survey (https://doi.org/10.1007/s10816-016-9274-2)
docs/defextra_hydration.md CHANGED
@@ -127,3 +127,5 @@ uv run python scripts/hydrate_defextra.py \
 
  - Small mismatches are expected due to PDF/GROBID text normalization.
  - Missing exact TEI spans do **not** block hydration; hash/anchor markers are used as fallback.
+ - Exact TEI spans are validated against stored hashes; if they do not match, citation‑stripped hash/anchor matching is used instead.
+ - See [`docs/mismatch_examples.md`](mismatch_examples.md) for concrete examples and fixes.
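These two new bullets summarize behaviour added later in this same commit (`strip_citations` in `scripts/defextra_markers.py` and `_span_matches_hash` in `scripts/hydrate_defextra.py`). A minimal sketch of the validate-then-fall-back check, assuming those helpers are importable from the repo root and with the per-row strip flags hard-coded for brevity:

```python
# Sketch only: mirrors _span_matches_hash from scripts/hydrate_defextra.py.
# The *_hash64 / *_sha256 keys follow the column names used there.
from scripts.defextra_markers import hash_token_sequence, strip_citations, tokenize_text


def span_matches_stored_hash(row: dict, text: str, prefix: str) -> bool:
    """True when a candidate span reproduces the stored token hashes."""
    expected_hash = row.get(f"{prefix}_hash64") or ""
    expected_sha = row.get(f"{prefix}_sha256") or ""
    if not text or not expected_hash or not expected_sha:
        return False
    # Drop bracketed numeric citations before hashing, as the hydration script does.
    check_text = strip_citations(text, strip_brackets=True, strip_parens=False)
    tokens, _ = tokenize_text(check_text, return_spans=True)
    hash64, sha, _ = hash_token_sequence(tokens)
    return str(hash64) == str(expected_hash) and sha == expected_sha
```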
docs/mismatch_examples.md ADDED
@@ -0,0 +1,90 @@
+ # Mismatch classes (hydration vs reference)
+
+ This page summarizes common difference types seen in the latest hydrated run.
+ Examples are **short fragments** with surrounding text removed.
+
+ - Total differences: 144
+ - wording_change: 74
+ - punctuation_only: 50
+ - casing: 8
+ - hyphenation: 5
+ - digit_letter_spacing: 3
+ - truncation: 2
+ - citation_spacing: 1
+ - header_or_boilerplate: 1
+
+ ---
+ ## Wording / lexical differences
+
+ Small wording changes (e.g., singular/plural) or tokenization artifacts.
+
+ | Paper | Concept | Field | Reference (fragment) | Hydrated (fragment) |
+ | --- | --- | --- | --- | --- |
+ | 016f7a076ac272db106fbcea056752c7307f676a | specification error | context | …Conducted mainly in the late 1970s and 1980s, wave 3 of researchwitnessed yet another round o… | …Conducted mainly in the late 1970 s and 1980 s, wave 3 of research witnessed yet another round… |
+ | 0209e602acaeab882fee84e244caf574cf345ef9 | ratio bias/numerosity bias | context | …ault. In the classic ratio bias task derived from Piaget and Inhelder (1951/1975), participants are offered a prize if th… | …ault. In the classic ratio bias task derived from Piaget andInhelder (1951/1975), participants are offered a prize if th… |
+ | 033b21cf1c6d3bdae587e673452b994443bf3546 | narrative | context | …he bare minimum Simply put, narrative isthe representation of a… | …bare minimum Simply put, narrative is the representation of … |
+
+ ## Punctuation-only differences
+
+ Only punctuation differs (e.g., comma, quote style).
+
+ | Paper | Concept | Field | Reference (fragment) | Hydrated (fragment) |
+ | --- | --- | --- | --- | --- |
+ | 0f7eda998bbce003745ff2fdbcaa1d9a8119368b | echo chamber | context | …ces of news, then we impose on ourselves a narrowed and selfreinforcing epistemic filter, which leaves out contrary view… | …ces of news, then we impose on ourselves a narrowed and self-reinforcing epistemic filter, which leaves out contrary view… |
+ | 17734113f254a64b3bae312713edba3b1e34fb56 | Post-truth Era | context | …Indeed, today we live in what some have called a “post-truth” era, which is characterized by digital disinform… | …Indeed, today we live in what some have called a "post-truth" era, which is characterized by digital disinform… |
+ | 1901.00596v4 | network embedding | definition | …ional vector representation of a node which preserves a node’s topological information… | …ional vector representation of a node which preserves a node's topological information… |
+
+ ## Casing differences
+
+ Same text except for upper/lower case.
+
+ | Paper | Concept | Field | Reference (fragment) | Hydrated (fragment) |
+ | --- | --- | --- | --- | --- |
+ | 033b21cf1c6d3bdae587e673452b994443bf3546 | narrative | definition | …The representation of an event or a series of events.… | …the representation of an event or a series of events.… |
+ | 05bfced33d92944b7a0672490c371342d28ee076 | observational bias | definition | …Observed data differs systematically from the unobserved data… | …observed data differs systematically from the unobserved data… |
+ | 1538c4777271ae6abb542801dac01423f4d566ad | publication bias | definition | …Significant results are more likely to be published while non… | …significant results are more likely to be published while non… |
+
+ ## Hyphenation / line-break joins
+
+ Hyphenation caused by line breaks is joined differently.
+
+ | Paper | Concept | Field | Reference (fragment) | Hydrated (fragment) |
+ | --- | --- | --- | --- | --- |
+ | 3584741 | computational narrative representations | context | …cused review, we will focus exclusively on event-based narra- tive representations. Thus, we define computational narrativ… | …cused review, we will focus exclusively on event-based narrative representations. Thus, we define computational narrativ… |
+ | 3584741 | metro maps method | context | …ro maps [98, 99] are an extension of the Connect the Dots ap- proach that represents more than a single storyline using a … | …ro maps [98, 99] are an extension of the Connect the Dots approach that represents more than a single storyline using a … |
+ | 3584741 | open source intelligence (OSINT) | context | …]. Although OSINT data sources leverage more than just tradi- tional news articles [38], OSINT could still benefit from ne… | …]. Although OSINT data sources leverage more than just traditional news articles [38], OSINT could still benefit from ne… |
+
+ ## Letter–digit spacing
+
+ Spacing between digits/letters differs (`bias4` vs `bias 4`).
+
+ | Paper | Concept | Field | Reference (fragment) | Hydrated (fragment) |
+ | --- | --- | --- | --- | --- |
+ | 0090023afc66cd2741568599057f4e82b566137c | omitted variable bias | context | …Omitted Variable Bias. Omitted variable bias4 occurs when one or more important variables are left out o… | …Omitted Variable Bias. Omitted variable bias 4 occurs when one or more important variables are left out o… |
+ | 016f7a076ac272db106fbcea056752c7307f676a | selection bias | context | …cts of race on sentencing. Conducted mainly in the late 1970s and 1980s, wave 3 of research witnessed yet another round … | …cts of race on sentencing. Conducted mainly in the late 1970 s and 1980 s, wave 3 of research witnessed yet another round… |
+ | 235c4f33d5bfc81bfa09a2458fcc0e42ef4454dc | propaganda | context | …It published workbooks and held seminars in the early 1930s aimed at promoting the ideal of "self-determination," rega… | …It published workbooks and held seminars in the early 1930 s aimed at promoting the ideal of "self-determination," rega… |
+
+ ## Truncation (missing tail)
+
+ Hydrated text is missing the end of the reference span.
+
+ | Paper | Concept | Field | Reference (fragment) | Hydrated (fragment) |
+ | --- | --- | --- | --- | --- |
+ | 016f7a076ac272db106fbcea056752c7307f676a | specification error | definition | …the omission of explanatory variable… | …the omission of explanatory variables… |
+ | 0d23df558a30492946059c017343a431dc3dc172 | inter-media agenda-setting | definition | …ed theory to explain how content transfers between news medi… | …ed theory to explain how content transfers between news media… |
+
+ ## Citation spacing/formatting differences
+
+ Citation formatting differs (e.g., `[155, 164]` vs `[155,164]`).
+
+ | Paper | Concept | Field | Reference (fragment) | Hydrated (fragment) |
+ | --- | --- | --- | --- | --- |
+ | frai-06-1225093 | Explainability | context | …ndent, and AI embraces a wide variety of tasks (Miller, 2019a). We treat explainability as a specialization of interpret… | …ndent, and AI embraces a wide variety of tasks (Miller, 2019 a). We treat explainability as a specialization of interpret… |
+
+ ## Header/boilerplate inserted
+
+ Hydrated text includes header/boilerplate not in reference.
+
+ | Paper | Concept | Field | Reference (fragment) | Hydrated (fragment) |
+ | --- | --- | --- | --- | --- |
+ | https://arxiv.org/abs/2312.16148 | spin bias | context | …ndencies between words and phrases must be considered [110]. Spin Bias describes a form of bias introduced either by lea… | …ndencies between words and phrases must be considered [110]. Manuscript submitted to ACM The Media Bias Taxonomy Spin Bias describes a form of bias introduced either by lea… |
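Most of the classes above are normalization artifacts rather than content changes. As a rough illustration only (a hypothetical helper, not part of the dataset scripts), a loose normalizer along these lines collapses the casing, quote-style, digit/letter-spacing and line-break-hyphenation rows, leaving wording changes and truncation as genuine differences:

```python
# Hypothetical normalizer, for illustration only: apply it to both the
# reference and the hydrated fragment before comparing them.
import re
import unicodedata


def loose_normalize(text: str) -> str:
    value = unicodedata.normalize("NFKC", text)
    value = value.replace("“", '"').replace("”", '"')
    value = value.replace("’", "'").replace("‘", "'")
    value = value.replace("\u00ad", "")              # soft hyphens
    value = re.sub(r"(\w)-\s+(\w)", r"\1\2", value)  # line-break hyphenation joins
    value = re.sub(r"(\d)\s+([a-z])", r"\1\2", value, flags=re.IGNORECASE)  # "1970 s" -> "1970s"
    value = re.sub(r"\s+", " ", value).strip()
    return value.casefold()


# The casing and digit-spacing rows above compare equal after normalization.
assert loose_normalize("late 1970 s and 1980 s") == loose_normalize("late 1970s and 1980s")
assert loose_normalize("The representation of an event") == loose_normalize("the representation of an event")
```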
scripts/build_defextra_test_pdfs.py CHANGED
@@ -63,7 +63,11 @@ def _build_pdf_index(
          index.setdefault(stripped, path)
          index.setdefault(stripped.lower(), path)
          if stem.endswith("_fixed") or stem.endswith("-fixed"):
-             base = stem[: -len("_fixed")] if stem.endswith("_fixed") else stem[: -len("-fixed")]
+             base = (
+                 stem[: -len("_fixed")]
+                 if stem.endswith("_fixed")
+                 else stem[: -len("-fixed")]
+             )
              if base:
                  index[base] = path
                  index[base.lower()] = path
scripts/defextra_markers.py CHANGED
@@ -257,6 +257,62 @@ HYPHEN_CHARS = {
  }
  SOFT_HYPHEN = "\u00ad"
 
+ CITATION_BRACKET_RE = re.compile(r"\[[^\]]{0,120}\]")
+ CITATION_PAREN_RE = re.compile(r"\([^\)]{0,120}\)")
+
+
+ def _looks_like_bracket_citation(text: str) -> bool:
+     return any(ch.isdigit() for ch in text)
+
+
+ def _looks_like_paren_citation(text: str) -> bool:
+     if not any(ch.isdigit() for ch in text):
+         return False
+     lowered = text.lower()
+     if "et al" in lowered:
+         return True
+     if re.search(r"\b(19|20)\d{2}\b", text):
+         return True
+     return False
+
+
+ def strip_citations(
+     text: str,
+     *,
+     strip_brackets: bool = True,
+     strip_parens: bool = False,
+ ) -> str:
+     if not text:
+         return text
+     spans: list[tuple[int, int]] = []
+     if strip_brackets:
+         for match in CITATION_BRACKET_RE.finditer(text):
+             if _looks_like_bracket_citation(match.group(0)):
+                 spans.append((match.start(), match.end()))
+     if strip_parens:
+         for match in CITATION_PAREN_RE.finditer(text):
+             if _looks_like_paren_citation(match.group(0)):
+                 spans.append((match.start(), match.end()))
+     if not spans:
+         return text
+     spans.sort()
+     merged: list[tuple[int, int]] = []
+     for start, end in spans:
+         if not merged or start > merged[-1][1]:
+             merged.append((start, end))
+         else:
+             merged[-1] = (merged[-1][0], max(merged[-1][1], end))
+     parts = []
+     cursor = 0
+     for start, end in merged:
+         if cursor < start:
+             parts.append(text[cursor:start])
+         parts.append(" ")
+         cursor = end
+     if cursor < len(text):
+         parts.append(text[cursor:])
+     return "".join(parts)
+
 
  def tokenize_text(
      text: str,
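For reference, a quick usage sketch of the new `strip_citations` helper (illustrative strings; assumes the repo root is on `sys.path` so the `scripts` package is importable):

```python
from scripts.defextra_markers import strip_citations

text = "Spin bias is introduced by leaving out details [110, 112] (Recasens et al., 2013)."

# Default: only bracketed numeric citations are blanked out (replaced by a space).
print(strip_citations(text))

# Opt in to parenthetical author-year citations as well; the leftover
# whitespace is harmless because matching runs on tokenized text.
print(strip_citations(text, strip_parens=True))
```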
scripts/hydrate_defextra.py CHANGED
@@ -20,9 +20,11 @@ try:
          doi_suffix,
          extract_ids_from_tei,
          extract_text_from_pdf,
+         hash_token_sequence,
          normalize_arxiv,
          normalize_doi,
          normalize_paper_id,
+         strip_citations,
          tokenize_text,
      )
      from scripts.defextra_pdf_aliases import candidate_pdf_aliases
@@ -40,9 +42,11 @@ except ModuleNotFoundError as exc:
          doi_suffix,
          extract_ids_from_tei,
          extract_text_from_pdf,
+         hash_token_sequence,
          normalize_arxiv,
          normalize_doi,
          normalize_paper_id,
+         strip_citations,
          tokenize_text,
      )
      from scripts.defextra_pdf_aliases import candidate_pdf_aliases
@@ -152,8 +156,9 @@ def _cleanup_spacing(text: str) -> str:
      if not text:
          return text
      value = text
-     value = value.replace("“", "\"").replace("”", "\"")
+     value = value.replace("“", '"').replace("”", '"')
      value = value.replace("’", "'").replace("‘", "'")
+
      def _dash_repl(match: re.Match[str]) -> str:
          run = match.group(0)
          return "--" if len(run) >= 2 else "-"
@@ -287,6 +292,34 @@ def _find_pdf_hash_span(
      return None
 
 
+ def _bool_flag(value: str) -> bool:
+     return (value or "").strip().lower() == "true"
+
+
+ def _strip_flags(row: dict, prefix: str) -> tuple[bool, bool]:
+     keep_bracket = _bool_flag(row.get(f"{prefix}_has_bracket_citation", ""))
+     keep_paren = _bool_flag(row.get(f"{prefix}_has_paren_citation", ""))
+     return (not keep_bracket), (not keep_paren)
+
+
+ def _span_matches_hash(row: dict, text: str, prefix: str) -> bool:
+     if not text:
+         return False
+     expected_hash = row.get(f"{prefix}_hash64") or ""
+     expected_sha = row.get(f"{prefix}_sha256") or ""
+     if not expected_hash or not expected_sha:
+         return False
+     strip_brackets, strip_parens = _strip_flags(row, prefix)
+     check_text = strip_citations(
+         text,
+         strip_brackets=strip_brackets,
+         strip_parens=strip_parens,
+     )
+     tokens, _ = tokenize_text(check_text, return_spans=True)
+     hash64, sha, _ = hash_token_sequence(tokens)
+     return str(hash64) == str(expected_hash) and sha == expected_sha
+
+
  def _candidate_ids(paper_id: str, doi: str, arxiv: str) -> list[str]:
      candidates = [
          paper_id,
@@ -387,7 +420,11 @@ def _build_pdf_index(pdf_dir: Path) -> Dict[str, Path]:
          index.setdefault(stripped, path)
          index.setdefault(normalize_paper_id(stripped), path)
          if stem.endswith("_fixed") or stem.endswith("-fixed"):
-             base = stem[: -len("_fixed")] if stem.endswith("_fixed") else stem[: -len("-fixed")]
+             base = (
+                 stem[: -len("_fixed")]
+                 if stem.endswith("_fixed")
+                 else stem[: -len("-fixed")]
+             )
              if base:
                  index[base] = path
                  index[normalize_paper_id(base)] = path
@@ -780,8 +817,31 @@ def main() -> None:
      token_cache: Dict[str, Optional[TokenIndex]] = {}
      tei_path_cache: Dict[str, Optional[Path]] = {}
      pdf_token_cache: Dict[Path, TokenIndex] = {}
+     pdf_token_cache_stripped: Dict[tuple[Path, bool, bool], TokenIndex] = {}
+     tei_token_cache_stripped: Dict[tuple[Path, bool, bool], TokenIndex] = {}
      pdf_failed: set[Path] = set()
 
+     def _get_stripped_index(
+         cache: Dict[tuple[Path, bool, bool], TokenIndex],
+         source_path: Optional[Path],
+         source_text: str,
+         strip_brackets: bool,
+         strip_parens: bool,
+     ) -> Optional[TokenIndex]:
+         if source_path is None:
+             return None
+         if not strip_brackets and not strip_parens:
+             return None
+         key = (source_path, strip_brackets, strip_parens)
+         if key not in cache:
+             stripped = strip_citations(
+                 source_text,
+                 strip_brackets=strip_brackets,
+                 strip_parens=strip_parens,
+             )
+             cache[key] = TokenIndex.from_text(stripped)
+         return cache[key]
+
      with args.legal_csv.open("r", encoding="utf-8", newline="") as handle:
          reader = csv.DictReader(handle)
          legal_rows = list(reader)
@@ -981,6 +1041,11 @@
              def_end = row.get("definition_char_end") or ""
              ctx_start = row.get("context_char_start") or ""
              ctx_end = row.get("context_char_end") or ""
+             def_strip_brackets, def_strip_parens = _strip_flags(
+                 row,
+                 "definition",
+             )
+             ctx_strip_brackets, ctx_strip_parens = _strip_flags(row, "context")
 
              if not definition and pdf_token_index:
                  span = _find_pdf_hash_span(row, pdf_token_index, "definition")
@@ -991,6 +1056,27 @@
                          span[1],
                      )
                      hydrated_from_pdf += 1
+                 if not definition:
+                     stripped_index = _get_stripped_index(
+                         pdf_token_cache_stripped,
+                         pdf_path,
+                         pdf_token_index.doc_text,
+                         def_strip_brackets,
+                         def_strip_parens,
+                     )
+                     if stripped_index is not None:
+                         span = _find_pdf_hash_span(
+                             row,
+                             stripped_index,
+                             "definition",
+                         )
+                         if span:
+                             definition = _extract_with_trailing_punct(
+                                 stripped_index.doc_text,
+                                 span[0],
+                                 span[1],
+                             )
+                             hydrated_from_pdf += 1
              if not definition and tei_token_index:
                  spec = _select_hash_specs(row, "definition")
                  if spec:
@@ -1001,6 +1087,22 @@
                              span[0],
                              span[1],
                          )
+                     if not definition:
+                         stripped_index = _get_stripped_index(
+                             tei_token_cache_stripped,
+                             tei_path,
+                             doc_index.doc_text,
+                             def_strip_brackets,
+                             def_strip_parens,
+                         )
+                         if stripped_index is not None:
+                             span = stripped_index.find_span_by_hash(*spec)
+                             if span:
+                                 definition = _extract_with_trailing_punct(
+                                     stripped_index.doc_text,
+                                     span[0],
+                                     span[1],
+                                 )
              if not definition and not (def_start and def_end):
                  head_specs = _select_anchor_spec_list(
                      row,
@@ -1130,11 +1232,13 @@
                      span[1],
                  )
              if not definition and def_start and def_end:
-                 definition = _extract_with_trailing_punct(
+                 candidate = _extract_with_trailing_punct(
                      doc_index.doc_text,
                      int(def_start),
                      int(def_end),
                  )
+                 if _span_matches_hash(row, candidate, "definition"):
+                     definition = candidate
 
              if not context and pdf_token_index:
                  span = _find_pdf_hash_span(row, pdf_token_index, "context")
@@ -1145,6 +1249,27 @@
                          span[1],
                      )
                      hydrated_from_pdf += 1
+                 if not context:
+                     stripped_index = _get_stripped_index(
+                         pdf_token_cache_stripped,
+                         pdf_path,
+                         pdf_token_index.doc_text,
+                         ctx_strip_brackets,
+                         ctx_strip_parens,
+                     )
+                     if stripped_index is not None:
+                         span = _find_pdf_hash_span(
+                             row,
+                             stripped_index,
+                             "context",
+                         )
+                         if span:
+                             context = _extract_with_trailing_punct(
+                                 stripped_index.doc_text,
+                                 span[0],
+                                 span[1],
+                             )
+                             hydrated_from_pdf += 1
 
              if not context and tei_token_index:
                  spec = _select_hash_specs(row, "context")
@@ -1156,6 +1281,22 @@
                              span[0],
                              span[1],
                          )
+                     if not context:
+                         stripped_index = _get_stripped_index(
+                             tei_token_cache_stripped,
+                             tei_path,
+                             doc_index.doc_text,
+                             ctx_strip_brackets,
+                             ctx_strip_parens,
+                         )
+                         if stripped_index is not None:
+                             span = stripped_index.find_span_by_hash(*spec)
+                             if span:
+                                 context = _extract_with_trailing_punct(
+                                     stripped_index.doc_text,
+                                     span[0],
+                                     span[1],
+                                 )
              if not context and not (ctx_start and ctx_end):
                  head_specs = _select_anchor_spec_list(
                      row,
@@ -1285,11 +1426,13 @@
                      span[1],
                  )
              if not context and ctx_start and ctx_end:
-                 context = _extract_with_trailing_punct(
+                 candidate = _extract_with_trailing_punct(
                      doc_index.doc_text,
                      int(ctx_start),
                      int(ctx_end),
                  )
+                 if _span_matches_hash(row, candidate, "context"):
+                     context = candidate
 
              if not definition and pdf_path is not None and pdf_token_index:
                  spec = _select_hash_specs(row, "definition")
scripts/prepare_defextra_legal.py CHANGED
@@ -524,7 +524,12 @@ def main() -> None:
                  for spec in (def_tail_spec, def_tail_alt_spec)
                  if spec
              ]
-             if token_index and expected_len and head_specs and tail_specs:
+             if (
+                 token_index
+                 and expected_len
+                 and head_specs
+                 and tail_specs
+             ):
                  for head in head_specs:
                      for tail in tail_specs:
                          anchor_span = _find_span_by_anchors(
@@ -613,7 +618,12 @@
                  for spec in (ctx_tail_spec, ctx_tail_alt_spec)
                  if spec
              ]
-             if token_index and expected_len and head_specs and tail_specs:
+             if (
+                 token_index
+                 and expected_len
+                 and head_specs
+                 and tail_specs
+             ):
                  for head in head_specs:
                      for tail in tail_specs:
                          anchor_span = _find_span_by_anchors(
scripts/report_defextra_status.py CHANGED
@@ -76,9 +76,15 @@ def _index_recent_pdfs(
          keys = {stem, stem.lower(), normalize_paper_id(stem)}
          if stem.startswith("paper_"):
              stripped = stem[len("paper_") :]
-             keys.update({stripped, stripped.lower(), normalize_paper_id(stripped)})
+             keys.update(
+                 {stripped, stripped.lower(), normalize_paper_id(stripped)},
+             )
          if stem.endswith("_fixed") or stem.endswith("-fixed"):
-             base = stem[: -len("_fixed")] if stem.endswith("_fixed") else stem[: -len("-fixed")]
+             base = (
+                 stem[: -len("_fixed")]
+                 if stem.endswith("_fixed")
+                 else stem[: -len("-fixed")]
+             )
              if base:
                  keys.update({base, base.lower(), normalize_paper_id(base)})
          match = arxiv_re.match(stem)
@@ -112,13 +118,17 @@ def main() -> None:
      parser.add_argument(
          "--legal-report",
          type=Path,
-         default=Path("results/paper_results/defextra_legal_tablefix_report.txt"),
+         default=Path(
+             "results/paper_results/defextra_legal_tablefix_report.txt",
+         ),
          help="Report generated by prepare_defextra_legal.py.",
      )
      parser.add_argument(
          "--hydrated-csv",
          type=Path,
-         default=Path("results/paper_results/defextra_hydrated_tablefix_test.csv"),
+         default=Path(
+             "results/paper_results/defextra_hydrated_tablefix_test.csv",
+         ),
          help="Hydrated CSV from hydrate_defextra.py.",
      )
      parser.add_argument(
@@ -142,10 +152,16 @@
      args = parser.parse_args()
 
      legal_rows = _load_csv(args.legal_csv)
-     hydrated_rows = _load_csv(args.hydrated_csv) if args.hydrated_csv.exists() else []
+     hydrated_rows = (
+         _load_csv(args.hydrated_csv) if args.hydrated_csv.exists() else []
+     )
 
-     ref_ids = {row.get("paper_id", "") for row in legal_rows if row.get("paper_id")}
-     hyd_ids = {row.get("paper_id", "") for row in hydrated_rows if row.get("paper_id")}
+     ref_ids = {
+         row.get("paper_id", "") for row in legal_rows if row.get("paper_id")
+     }
+     hyd_ids = {
+         row.get("paper_id", "") for row in hydrated_rows if row.get("paper_id")
+     }
      missing_papers = sorted(ref_ids - hyd_ids)
 
      missing_defs, missing_ctxs = _parse_missing_report(args.legal_report)
@@ -163,7 +179,11 @@
          except ValueError:
              continue
          row = idx.get((pid, concept))
-         if row and (row.get("definition_type") or "").strip().lower() == "implicit":
+         if (
+             row
+             and (row.get("definition_type") or "").strip().lower()
+             == "implicit"
+         ):
              implicit_defs.append(item)
      for item in missing_ctxs:
          try:
@@ -171,7 +191,11 @@
          except ValueError:
              continue
          row = idx.get((pid, concept))
-         if row and (row.get("definition_type") or "").strip().lower() == "implicit":
+         if (
+             row
+             and (row.get("definition_type") or "").strip().lower()
+             == "implicit"
+         ):
              implicit_ctxs.append(item)
 
      cutoff_ts = time.time() - (args.recent_days * 86400)
@@ -206,31 +230,35 @@
      for pid in missing_papers:
          lines.append(f"- {pid}")
      lines.append("")
-     lines.append(f"Missing definition spans marked implicit: {len(implicit_defs)}")
+     lines.append(
+         f"Missing definition spans marked implicit: {len(implicit_defs)}",
+     )
      for item in implicit_defs:
          lines.append(f"- {item}")
      lines.append("")
-     lines.append(f"Missing context spans marked implicit: {len(implicit_ctxs)}")
+     lines.append(
+         f"Missing context spans marked implicit: {len(implicit_ctxs)}",
+     )
      for item in implicit_ctxs:
          lines.append(f"- {item}")
      lines.append("")
      lines.append(
          f"Missing papers with recent PDFs (<= {args.recent_days} days): "
-         f"{len(recent_missing_papers)}"
+         f"{len(recent_missing_papers)}",
      )
      for pid in recent_missing_papers:
          lines.append(f"- {pid}")
      lines.append("")
      lines.append(
          f"Missing definition spans with recent PDFs (<= {args.recent_days} days): "
-         f"{len(recent_missing_defs)}"
+         f"{len(recent_missing_defs)}",
      )
      for item in recent_missing_defs:
          lines.append(f"- {item}")
      lines.append("")
      lines.append(
          f"Missing context spans with recent PDFs (<= {args.recent_days} days): "
-         f"{len(recent_missing_ctxs)}"
+         f"{len(recent_missing_ctxs)}",
      )
      for item in recent_missing_ctxs:
          lines.append(f"- {item}")