# SQLite Metadata System for Plant-mSyn
This directory contains the SQLite-based metadata system for efficient gene searches across multi-genome synteny comparisons.
## Overview

The metadata system consists of two types of databases:

1. **Central Metadata Database** (`plantmsyn_metadata.db`)
   - Stores genome registry, comparison runs, and file manifests
   - Enables quick lookup of which comparison files exist for any genome pair
   - Tracks custom genome uploads and their expiration dates

2. **Per-Genome Search Catalogs** (`search_catalogs/<genome>.catalog.sqlite`)
   - One catalog per query genome
   - Maps gene IDs to target genomes where matches exist
   - Enables O(1) lookup: "For gene X in genome A, which target genomes have hits?"
## Why This System?

Without an index, searching for a gene requires scanning ~200 comparison files to find matches. This is fast locally but slow in cloud environments with network-based storage.

With the catalog system:

1. Look up the gene in the query genome's catalog → get the list of target genomes with matches
2. Fetch only those specific comparison files
3. Extract the full match rows

This reduces file reads from ~200 to typically ~5-10 per search.
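Step 1 of that flow can be sketched as a small helper. The `gene_to_run` schema follows the catalog tables documented in this README, but the helper name and the seeded rows are illustrative only:

```python
import sqlite3

# Minimal sketch of step 1, assuming the gene_to_run catalog schema;
# the helper name and sample rows are illustrative, not real data.
def find_target_genomes(catalog, gene_id):
    """Return (target_genome_name, run_id) pairs for a query gene."""
    return catalog.execute(
        "SELECT target_genome_name, run_id FROM gene_to_run "
        "WHERE query_gene_id = ? ORDER BY target_genome_name",
        (gene_id,),
    ).fetchall()

# Demo against an in-memory catalog seeded with two fake hits.
catalog = sqlite3.connect(":memory:")
catalog.execute(
    "CREATE TABLE gene_to_run (query_gene_id TEXT, target_genome_name TEXT, "
    "run_id INTEGER, hit_count INTEGER, best_identity REAL)"
)
catalog.executemany(
    "INSERT INTO gene_to_run VALUES (?, ?, ?, ?, ?)",
    [("AT1G01010", "glycine_max", 1, 3, 92.5),
     ("AT1G01010", "oryza_sativa", 2, 1, 78.0)],
)

# Step 2 would then fetch only the comparison files for these run_ids.
hits = find_target_genomes(catalog, "AT1G01010")
```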
## Directory Structure

```
sql/
├── README.md                    # This file
├── plantmsyn_metadata.db        # Central metadata database
├── search_catalogs/             # Per-genome search catalogs
│   ├── arabidopsis_thaliana.catalog.sqlite
│   ├── glycine_max.catalog.sqlite
│   └── ...
└── test_sqlite_metadata.py      # Validation test script
```
## Building the Databases

### First-time build

```shell
cd /path/to/Multi-genomes\ synteny/Scripts
python build_sqlite_metadata.py
```

### Rebuild from scratch

```shell
python build_sqlite_metadata.py --rebuild
```

### Clean up expired custom genomes

```shell
python build_sqlite_metadata.py --cleanup-expired
```
### Options

| Option | Description |
|---|---|
| `--rebuild`, `-r` | Drop and rebuild all databases from scratch |
| `--cleanup-expired`, `-c` | Remove metadata for expired custom genomes (2-week expiry) |
| `--version VERSION` | Dataset version string (default: `v1`) |
| `--verbose`, `-v` | Enable debug logging |
## Testing

Run the validation tests to ensure the metadata system is working correctly:

```shell
cd /path/to/Multi-genomes\ synteny/sql
python test_sqlite_metadata.py
```

The tests verify:

- Database tables exist and have the correct schema
- Genome counts match the filesystem
- Comparison runs are properly linked
- File manifests point to existing files
- Search catalogs are queryable
- Gene lookups return valid results
- Cross-validation against actual `.last.filtered` files
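As an illustration of the cross-check style these tests use, the sketch below compares `search_catalog` rows against catalog files on disk. The function name and throwaway fixture are hypothetical; the actual checks in `test_sqlite_metadata.py` may differ:

```python
import sqlite3
import tempfile
from pathlib import Path

# Hypothetical example of one cross-check: the number of rows in
# search_catalog should equal the number of *.catalog.sqlite files on disk.
def catalog_files_match_db(metadata_db, catalogs_dir):
    conn = sqlite3.connect(metadata_db)
    db_count = conn.execute("SELECT COUNT(*) FROM search_catalog").fetchone()[0]
    conn.close()
    fs_count = len(list(Path(catalogs_dir).glob("*.catalog.sqlite")))
    return db_count == fs_count

# Demo against a throwaway fixture rather than the real databases.
tmp = Path(tempfile.mkdtemp())
conn = sqlite3.connect(tmp / "meta.db")
conn.execute("CREATE TABLE search_catalog (catalog_path TEXT)")
conn.execute("INSERT INTO search_catalog VALUES ('x.catalog.sqlite')")
conn.commit()
conn.close()
(tmp / "x.catalog.sqlite").touch()

ok = catalog_files_match_db(tmp / "meta.db", tmp)
```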
## Database Schemas

### Central Database Tables

#### `genome`

| Column | Type | Description |
|---|---|---|
| `genome_id` | INTEGER | Primary key |
| `genome_name` | TEXT | Unique identifier (e.g., `arabidopsis_thaliana`) |
| `display_name` | TEXT | Human-readable name |
| `is_custom` | INTEGER | 1 if custom upload, 0 if database genome |
| `created_at` | TEXT | ISO timestamp |
| `expires_at` | TEXT | Expiration date for custom genomes |
| `gene_count` | INTEGER | Number of genes |
| `protein_count` | INTEGER | Number of proteins |
#### `comparison_run`

| Column | Type | Description |
|---|---|---|
| `run_id` | INTEGER | Primary key |
| `query_genome_id` | INTEGER | FK → `genome` |
| `target_genome_id` | INTEGER | FK → `genome` |
| `dataset_version` | TEXT | Version string (e.g., `v1`) |
| `created_at` | TEXT | ISO timestamp |
| `status` | TEXT | `completed`, `failed`, or `pending` |
#### `run_file`

| Column | Type | Description |
|---|---|---|
| `run_id` | INTEGER | FK → `comparison_run` |
| `file_kind` | TEXT | `i1.blocks`, `last.filtered`, `lifted.anchors`, etc. |
| `file_path` | TEXT | Relative path from `MCSCAN_RESULTS_DIR` |
| `file_bytes` | INTEGER | File size |
| `file_checksum` | TEXT | MD5 hash |
| `created_at` | TEXT | ISO timestamp |
#### `search_catalog`

| Column | Type | Description |
|---|---|---|
| `dataset_version` | TEXT | Version string |
| `query_genome_id` | INTEGER | FK → `genome` |
| `catalog_path` | TEXT | Relative path to catalog file |
| `catalog_bytes` | INTEGER | Catalog file size |
| `catalog_checksum` | TEXT | MD5 hash |
| `created_at` | TEXT | ISO timestamp |
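For readers who prefer DDL, the central tables above can be expressed roughly as follows. Column names come from this README; the constraints (UNIQUE, foreign keys, defaults) are assumptions about what the builder script likely enforces:

```python
import sqlite3

# Illustrative DDL for the central database; constraints are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE genome (
    genome_id     INTEGER PRIMARY KEY,
    genome_name   TEXT UNIQUE NOT NULL,
    display_name  TEXT,
    is_custom     INTEGER DEFAULT 0,
    created_at    TEXT,
    expires_at    TEXT,
    gene_count    INTEGER,
    protein_count INTEGER
);
CREATE TABLE comparison_run (
    run_id           INTEGER PRIMARY KEY,
    query_genome_id  INTEGER REFERENCES genome(genome_id),
    target_genome_id INTEGER REFERENCES genome(genome_id),
    dataset_version  TEXT,
    created_at       TEXT,
    status           TEXT
);
CREATE TABLE run_file (
    run_id        INTEGER REFERENCES comparison_run(run_id),
    file_kind     TEXT,
    file_path     TEXT,
    file_bytes    INTEGER,
    file_checksum TEXT,
    created_at    TEXT
);
CREATE TABLE search_catalog (
    dataset_version  TEXT,
    query_genome_id  INTEGER REFERENCES genome(genome_id),
    catalog_path     TEXT,
    catalog_bytes    INTEGER,
    catalog_checksum TEXT,
    created_at       TEXT
);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
```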
### Per-Genome Catalog Tables

#### `gene_to_run`

| Column | Type | Description |
|---|---|---|
| `query_gene_id` | TEXT | Gene identifier |
| `target_genome_name` | TEXT | Target genome where matches exist |
| `run_id` | INTEGER | Reference to `comparison_run` |
| `hit_count` | INTEGER | Number of matches for this gene |
| `best_identity` | REAL | Highest identity score |
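Fast catalog lookups presumably rely on an index over `query_gene_id`. The index name below is hypothetical, but the query-plan check shows the kind of indexed SEARCH the catalog needs for constant-time gene lookups:

```python
import sqlite3

# Sketch: an index on query_gene_id (name hypothetical) turns the gene
# lookup into an indexed SEARCH rather than a full-table SCAN.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE gene_to_run (query_gene_id TEXT, target_genome_name TEXT, "
    "run_id INTEGER, hit_count INTEGER, best_identity REAL)")
conn.execute("CREATE INDEX idx_gene ON gene_to_run(query_gene_id)")

# The last column of EXPLAIN QUERY PLAN output describes the access path.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM gene_to_run WHERE query_gene_id = ?",
    ("AT1G01010",)).fetchone()
```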
## File Directionality

- `.i1.blocks` files: Directional. `A.B.i1.blocks` means query=A, target=B. Both A→B and B→A are stored as separate comparison runs.
- `.last.filtered` files: Contain matches in both directions. The same file is associated with both A→B and B→A runs.
- `.lifted.anchors` files: Similar to `.last.filtered`, contain both directions.
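The bidirectional case can be pictured as one physical file manifested under two runs. The schema here is simplified to genome names (the real tables use genome IDs), and the file path is illustrative:

```python
import sqlite3

# Simplified sketch: one .last.filtered file registered under both the
# A->B and B->A comparison runs. Real tables key genomes by ID, not name.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE comparison_run (run_id INTEGER PRIMARY KEY,
    query_genome TEXT, target_genome TEXT);
CREATE TABLE run_file (run_id INTEGER, file_kind TEXT, file_path TEXT);
""")
conn.execute("INSERT INTO comparison_run VALUES (1, 'A', 'B')")
conn.execute("INSERT INTO comparison_run VALUES (2, 'B', 'A')")

# The same physical file appears in both runs' manifests.
path = "last_filtered/A.B.last.filtered"  # illustrative path
for run_id in (1, 2):
    conn.execute("INSERT INTO run_file VALUES (?, 'last.filtered', ?)",
                 (run_id, path))

count = conn.execute(
    "SELECT COUNT(*) FROM run_file WHERE file_path = ?", (path,)).fetchone()[0]
```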
## Example Usage

### Query gene targets from catalog

```python
import sqlite3
from path_config import SEARCH_CATALOGS_DIR

genome = "arabidopsis_thaliana"
gene_id = "AT1G01010"

conn = sqlite3.connect(SEARCH_CATALOGS_DIR / f"{genome}.catalog.sqlite")
cursor = conn.execute("""
    SELECT target_genome_name, hit_count, best_identity
    FROM gene_to_run
    WHERE query_gene_id = ?
""", (gene_id,))

for row in cursor:
    print(f"  {row[0]}: {row[1]} hits, best identity {row[2]:.1f}%")
```
### Get comparison files for a genome pair

```python
import sqlite3
from path_config import METADATA_DB_PATH, MCSCAN_RESULTS_DIR

conn = sqlite3.connect(METADATA_DB_PATH)
cursor = conn.execute("""
    SELECT rf.file_kind, rf.file_path
    FROM run_file rf
    JOIN comparison_run cr ON rf.run_id = cr.run_id
    JOIN genome gq ON cr.query_genome_id = gq.genome_id
    JOIN genome gt ON cr.target_genome_id = gt.genome_id
    WHERE gq.genome_name = ? AND gt.genome_name = ?
""", ("arabidopsis_thaliana", "glycine_max"))

for row in cursor:
    full_path = MCSCAN_RESULTS_DIR / row[1]
    print(f"  {row[0]}: {full_path}")
```
## Custom Genome Lifecycle

Custom genomes uploaded by users expire after 2 weeks. The expiration date is stored in `genome.expires_at`.

To clean up expired genomes:

```shell
python build_sqlite_metadata.py --cleanup-expired
```

This removes:

- The genome entry from the `genome` table
- Associated comparison runs from the `comparison_run` table
- File manifest entries from the `run_file` table
- The search catalog entry from the `search_catalog` table
- The actual catalog file from `search_catalogs/`
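The cleanup can be pictured as a deletion cascade keyed on `expires_at`. This is a simplified sketch: the schema is trimmed, the `search_catalog` row and catalog-file removal are omitted, and the exact SQL in `build_sqlite_metadata.py` may differ:

```python
import sqlite3
from datetime import datetime, timezone

# Simplified sketch of the --cleanup-expired cascade (schema trimmed;
# search_catalog row and on-disk catalog file removal omitted).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE genome (genome_id INTEGER PRIMARY KEY, genome_name TEXT,
    is_custom INTEGER, expires_at TEXT);
CREATE TABLE comparison_run (run_id INTEGER PRIMARY KEY,
    query_genome_id INTEGER, target_genome_id INTEGER);
CREATE TABLE run_file (run_id INTEGER, file_path TEXT);
""")
# One long-expired custom genome with a run and a manifest entry.
conn.execute("INSERT INTO genome VALUES (1, 'my_upload', 1, '2000-01-01T00:00:00')")
conn.execute("INSERT INTO comparison_run VALUES (10, 1, 2)")
conn.execute("INSERT INTO run_file VALUES (10, 'some/file')")

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
expired = [r[0] for r in conn.execute(
    "SELECT genome_id FROM genome WHERE is_custom = 1 AND expires_at < ?",
    (now,))]
for gid in expired:
    conn.execute("DELETE FROM run_file WHERE run_id IN "
                 "(SELECT run_id FROM comparison_run "
                 "WHERE query_genome_id = ? OR target_genome_id = ?)", (gid, gid))
    conn.execute("DELETE FROM comparison_run "
                 "WHERE query_genome_id = ? OR target_genome_id = ?", (gid, gid))
    conn.execute("DELETE FROM genome WHERE genome_id = ?", (gid,))

remaining = conn.execute("SELECT COUNT(*) FROM genome").fetchone()[0]
```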
## Versioning

The system supports multiple dataset versions via the `dataset_version` column. This allows running the pipeline with different parameters and storing results side-by-side.

Default version: `v1`

To build with a different version:

```shell
python build_sqlite_metadata.py --version v2
```
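Side-by-side versions are then selected by filtering on `dataset_version`. A minimal sketch with a simplified table:

```python
import sqlite3

# Sketch: runs from different dataset versions live in the same table
# and are selected with a dataset_version filter (schema simplified).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comparison_run (run_id INTEGER, dataset_version TEXT)")
conn.executemany("INSERT INTO comparison_run VALUES (?, ?)",
                 [(1, "v1"), (2, "v2"), (3, "v1")])

v1_runs = [r[0] for r in conn.execute(
    "SELECT run_id FROM comparison_run WHERE dataset_version = ? ORDER BY run_id",
    ("v1",))]
```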
## Troubleshooting

### "No comparison runs found"

- Check that `Mcscan_results/protein_pairwise/i1_blocks/` contains `.i1.blocks` files
- Run with `--verbose` to see detailed discovery logs

### "Catalog not found for genome X"

- The genome may not have any outgoing comparisons
- Check if the BED file exists in `bed_files/`

### "Gene lookup returns no results"

- Verify the gene ID format matches what's in the BED file
- Check if the genome has been processed (has comparisons)
## Integration Notes

The metadata system is designed to work alongside existing scripts without modification:

- Existing scripts continue to work by scanning files directly
- New or updated scripts can optionally use the metadata system for faster lookups
- The `path_config.py` module provides the `SQL_DIR`, `SEARCH_CATALOGS_DIR`, and `METADATA_DB_PATH` constants

For cloud deployment, the SQLite files can be stored in S3 and downloaded/cached locally as needed.
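One possible shape for that download-and-cache step, with the S3 call abstracted behind a `fetch` callable; the helper and its cache-on-miss policy are assumptions for illustration, not part of this repo:

```python
import tempfile
from pathlib import Path

# Hypothetical cache-on-miss helper; `fetch` stands in for an S3 client
# call such as boto3's download_file.
def cached_db_path(name, cache_dir, fetch):
    """Return a local path for `name`, fetching it only on a cache miss."""
    local = Path(cache_dir) / name
    if not local.exists():
        local.parent.mkdir(parents=True, exist_ok=True)
        fetch(name, local)  # e.g. s3.download_file(bucket, name, str(local))
    return local

# Usage with a dummy fetcher that just writes an empty file.
cache = Path(tempfile.mkdtemp())
calls = []
def fake_fetch(key, dest):
    calls.append(key)
    Path(dest).write_bytes(b"")

p1 = cached_db_path("plantmsyn_metadata.db", cache, fake_fetch)
p2 = cached_db_path("plantmsyn_metadata.db", cache, fake_fetch)  # cache hit
```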