Synonym databases for deterministic grading in [GenomicsBench](https://github.com/AfterQuery/GenomicsBench), a realism-first benchmark for AI coding agents in computational biology.
GenomicsBench ships with dozens of prebuilt graders for highly specific biological outputs. This dataset powers the synonym-aware matching behind those graders, so that *TP53* and *tumor protein p53* are recognized as the same gene, *hsa-miR-21-5p* and *MIMAT0000076* as the same miRNA, and *E. coli* and *Escherichia coli* as the same organism.
## Format
One read-only SQLite file, `grader_databases.sqlite` (~22 GB), with five tables -- one per source database. Lookup columns are pre-lowercased at build time (full Unicode `str.lower()`) and covered by composite indexes, so runtime queries are direct indexed seeks and process memory is O(query result) rather than O(database size).
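
As a toy illustration of the seek behavior (schema simplified to the lookup columns; the real table carries more fields), a composite index over the pre-lowercased columns lets SQLite answer the lookup with an indexed search rather than a full table scan:

```python
import sqlite3

# Miniature reconstruction of the ncbi_gene lookup path.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ncbi_gene (organism_lower TEXT, name_lower TEXT, gene_id TEXT)"
)
conn.execute("CREATE INDEX ix_gene ON ncbi_gene (organism_lower, name_lower)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT gene_id FROM ncbi_gene WHERE organism_lower = ? AND name_lower = ?",
    ("homo sapiens", "tp53"),
).fetchall()
# The plan's detail column reports a SEARCH ... USING INDEX, not a SCAN.
```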
| Table | Columns | Rows | Source | Description |
|---|---|---|---|---|
| `ncbi_gene` | `organism_lower`, `name_lower`, `gene_id` | ~219M | [NCBI Gene](https://www.ncbi.nlm.nih.gov/gene/) | Gene symbol, synonym, description, Entrez GeneID, and Ensembl ID resolution scoped by organism |
| `ncbi_taxonomy` | `name_lower`, `canonical_name` | ~3.2M | [NCBI Taxonomy](https://www.ncbi.nlm.nih.gov/taxonomy) | Scientific name, common name, abbreviation, and synonym resolution for all taxa |
| `hmdb` | `name_lower`, `accession` | ~1.5M | [HMDB](https://hmdb.ca/) | Metabolite name, synonym, IUPAC name, and HMDB accession resolution |
| `mirbase` | `name_lower`, `accession` | ~157K | [miRBase](https://mirbase.org/) | miRNA precursor and mature name/accession resolution, including deprecated entries folded into live replacements |
| `card` | `name_lower`, `aro_accession` | ~19K | [CARD](https://card.mcmaster.ca/) | Antimicrobial resistance gene name, synonym, and ARO accession resolution |
Three of the tables (`ncbi_taxonomy`, `hmdb`, `mirbase`) use `WITHOUT ROWID` with `PRIMARY KEY(name_lower)` and are built via `INSERT OR IGNORE` so the first occurrence of any lowercased name wins. The other two (`ncbi_gene`, `card`) are 1:N and allow multiple rows per lookup key -- `gene_match` and `amr_match` do set-intersection at query time.
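
A minimal sketch of the first-occurrence-wins rule (table contents illustrative):

```python
import sqlite3

# WITHOUT ROWID table keyed on name_lower; INSERT OR IGNORE silently drops
# any later row whose key collides, so the first occurrence wins.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE mirbase (name_lower TEXT PRIMARY KEY, accession TEXT) WITHOUT ROWID"
)
conn.execute("INSERT OR IGNORE INTO mirbase VALUES ('hsa-mir-21', 'MI0000077')")
conn.execute("INSERT OR IGNORE INTO mirbase VALUES ('hsa-mir-21', 'MI9999999')")  # ignored
row = conn.execute(
    "SELECT accession FROM mirbase WHERE name_lower = 'hsa-mir-21'"
).fetchone()
# row == ('MI0000077',): the first insert won
```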
## Usage
Install GenomicsBench and call the match functions directly; the database is downloaded and cached on first use.
```python
from genomicsbench.grading import (
    gene_match,
    taxonomy_match,
    mirna_match,
    metabolite_match,
    amr_match,
)

gene_match("TP53", "tumor protein p53", organism="Homo sapiens")  # True
taxonomy_match("E. coli", "Escherichia coli")  # True
mirna_match("hsa-miR-21-5p", "MIMAT0000076")  # True
metabolite_match("Dextrose", "D-Glucose")  # True
amr_match("mecA", "PBP2A")  # True
```
Each match function is a thin wrapper over one indexed SQL query plus a case-insensitive string-equality fallback (used when either side is absent from the database, so misses don't crash on out-of-distribution inputs). The public API signatures, return types, and fallback semantics are unchanged from the previous TSV-backed release -- consumers upgrade by bumping their `genomicsbench` package version.
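
Under those semantics, a match wrapper can be sketched roughly as follows (a simplified illustration, not the actual implementation; the hypothetical `lookup` callable stands in for the indexed SQL query and returns a set of IDs, empty when the name is absent):

```python
def match_with_fallback(lookup, a: str, b: str) -> bool:
    # Resolve both names through the lowercased lookup.
    ids_a, ids_b = lookup(a.lower()), lookup(b.lower())
    if not ids_a or not ids_b:
        # Either side is out-of-distribution: fall back to
        # case-insensitive string equality instead of crashing.
        return a.lower() == b.lower()
    # Both sides resolved: match iff they share at least one ID.
    return bool(ids_a & ids_b)
```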
Each lookup module opens a thread-local read-only connection on first use with read-tuning PRAGMAs (`query_only=1`, `cache_size=-20000`, `temp_store=MEMORY`, `mmap_size=268435456`), so the dataset works correctly under multi-threaded consumers like the GenomicsBench async orchestrator. SQLite supports unlimited concurrent readers via shared file locks.
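
A minimal sketch of that connection-caching pattern (names hypothetical; the actual module layout may differ):

```python
import sqlite3
import threading

_local = threading.local()

def get_connection(path: str) -> sqlite3.Connection:
    # One read-only connection per thread, created lazily on first use
    # and tuned with the read-only PRAGMAs described above.
    conn = getattr(_local, "conn", None)
    if conn is None:
        conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
        for pragma in ("query_only=1", "cache_size=-20000",
                       "temp_store=MEMORY", "mmap_size=268435456"):
            conn.execute(f"PRAGMA {pragma}")
        _local.conn = conn
    return conn
```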
## Direct access
If you want to query the tables yourself without installing GenomicsBench, open the file read-only:
```python
import sqlite3

from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="AfterQuery/GenomicsBench-grader-databases",
    filename="grader_databases.sqlite",
    repo_type="dataset",
)
conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)

rows = conn.execute(
    "SELECT gene_id FROM ncbi_gene "
    "WHERE organism_lower = ? AND name_lower = ?",
    ("homo sapiens", "tp53"),
).fetchall()
```
All lookup columns are pre-lowercased; lowercase your query inputs in Python before binding. The table schemas are stable across releases: any schema change is a breaking revision of this dataset.
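
The distinction matters for non-ASCII names: SQLite's built-in `lower()` folds only ASCII `A-Z`, while Python's `str.lower()` (used at build time) is full Unicode. For example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite's lower() leaves the Greek delta untouched and folds only 'F'.
sql_lower = conn.execute("SELECT lower('ΔF508')").fetchone()[0]  # 'Δf508'
# Python's str.lower() folds the full Unicode range, matching the build-time
# normalization of the lookup columns.
py_lower = "ΔF508".lower()  # 'δf508'
```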
## Provenance and licensing
The five source databases retain their upstream licenses. GenomicsBench packages them into a single SQLite file for efficient runtime access; see the [GenomicsBench GitHub repository](https://github.com/AfterQuery/GenomicsBench) for the full ETL pipeline under `scripts/databases/` and its step-by-step runbook in `scripts/databases/README.md`.
Rebuilds ship as new revisions on this dataset whenever an upstream source publishes a new release.