# SQLite Metadata System for Plant-mSyn
This directory contains the SQLite-based metadata system for efficient gene searches across multi-genome synteny comparisons.
## Overview
The metadata system consists of two types of databases:
1. **Central Metadata Database** (`plantmsyn_metadata.db`)
- Stores genome registry, comparison runs, and file manifests
- Enables quick lookup of which comparison files exist for any genome pair
- Tracks custom genome uploads and their expiration dates
2. **Per-Genome Search Catalogs** (`search_catalogs/<genome>.catalog.sqlite`)
- One catalog per query genome
- Maps gene IDs to target genomes where matches exist
- Enables a single indexed lookup: "For gene X in genome A, which target genomes have hits?"
## Why This System?
Without an index, searching for a gene requires scanning ~200 comparison files to find matches. This is fast locally but slow in cloud environments with network-based storage.
With the catalog system:
1. Look up the gene in the query genome's catalog → get list of target genomes with matches
2. Fetch only those specific comparison files
3. Extract the full match rows
This reduces file reads from ~200 to typically ~5-10 per search.
## Directory Structure
```
sql/
├── README.md # This file
├── plantmsyn_metadata.db # Central metadata database
├── search_catalogs/ # Per-genome search catalogs
│ ├── arabidopsis_thaliana.catalog.sqlite
│ ├── glycine_max.catalog.sqlite
│ └── ...
└── test_sqlite_metadata.py # Validation test script
```
## Building the Databases
### First-time build
```bash
cd /path/to/Multi-genomes\ synteny/Scripts
python build_sqlite_metadata.py
```
### Rebuild from scratch
```bash
python build_sqlite_metadata.py --rebuild
```
### Clean up expired custom genomes
```bash
python build_sqlite_metadata.py --cleanup-expired
```
### Options
| Option | Description |
|--------|-------------|
| `--rebuild`, `-r` | Drop and rebuild all databases from scratch |
| `--cleanup-expired`, `-c` | Remove metadata for expired custom genomes (2-week expiry) |
| `--version VERSION` | Dataset version string (default: `v1`) |
| `--verbose`, `-v` | Enable debug logging |
## Testing
Run the validation tests to ensure the metadata system is working correctly:
```bash
cd /path/to/Multi-genomes\ synteny/sql
python test_sqlite_metadata.py
```
The tests verify:
- Database tables exist and have correct schema
- Genome counts match filesystem
- Comparison runs are properly linked
- File manifests point to existing files
- Search catalogs are queryable
- Gene lookups return valid results
- Cross-validation against actual `.last.filtered` files
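One of the checks above, matching registered genomes against the filesystem, can be sketched as follows. The helper name and the assumption that each genome has a same-named BED file in `bed_files/` are illustrative; the actual test script may structure this differently.

```python
import sqlite3
from pathlib import Path

def genomes_missing_on_disk(db_path, bed_dir):
    """Return genome names registered in the DB that lack a BED file on disk."""
    conn = sqlite3.connect(db_path)
    try:
        names = {row[0] for row in conn.execute("SELECT genome_name FROM genome")}
    finally:
        conn.close()
    # Assumes BED files are named <genome_name>.bed
    on_disk = {p.stem for p in Path(bed_dir).glob("*.bed")}
    return sorted(names - on_disk)
```

A non-empty return value indicates a stale database entry (or a missing file) and should fail the test.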
## Database Schemas
### Central Database Tables
#### `genome`
| Column | Type | Description |
|--------|------|-------------|
| `genome_id` | INTEGER | Primary key |
| `genome_name` | TEXT | Unique identifier (e.g., `arabidopsis_thaliana`) |
| `display_name` | TEXT | Human-readable name |
| `is_custom` | INTEGER | 1 if custom upload, 0 if database genome |
| `created_at` | TEXT | ISO timestamp |
| `expires_at` | TEXT | Expiration date for custom genomes |
| `gene_count` | INTEGER | Number of genes |
| `protein_count` | INTEGER | Number of proteins |
#### `comparison_run`
| Column | Type | Description |
|--------|------|-------------|
| `run_id` | INTEGER | Primary key |
| `query_genome_id` | INTEGER | FK → genome |
| `target_genome_id` | INTEGER | FK → genome |
| `dataset_version` | TEXT | Version string (e.g., `v1`) |
| `created_at` | TEXT | ISO timestamp |
| `status` | TEXT | `completed`, `failed`, or `pending` |
#### `run_file`
| Column | Type | Description |
|--------|------|-------------|
| `run_id` | INTEGER | FK → comparison_run |
| `file_kind` | TEXT | `i1.blocks`, `last.filtered`, `lifted.anchors`, etc. |
| `file_path` | TEXT | Relative path from MCSCAN_RESULTS_DIR |
| `file_bytes` | INTEGER | File size |
| `file_checksum` | TEXT | MD5 hash |
| `created_at` | TEXT | ISO timestamp |
#### `search_catalog`
| Column | Type | Description |
|--------|------|-------------|
| `dataset_version` | TEXT | Version string |
| `query_genome_id` | INTEGER | FK → genome |
| `catalog_path` | TEXT | Relative path to catalog file |
| `catalog_bytes` | INTEGER | Catalog file size |
| `catalog_checksum` | TEXT | MD5 hash |
| `created_at` | TEXT | ISO timestamp |
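The four central tables can be approximated with the DDL below. This is a sketch inferred from the column listings above; the build script's actual constraints, defaults, and indexes may differ.

```python
import sqlite3

# Approximate schema inferred from the column tables in this README.
CENTRAL_SCHEMA = """
CREATE TABLE genome (
    genome_id     INTEGER PRIMARY KEY,
    genome_name   TEXT UNIQUE NOT NULL,
    display_name  TEXT,
    is_custom     INTEGER DEFAULT 0,
    created_at    TEXT,
    expires_at    TEXT,
    gene_count    INTEGER,
    protein_count INTEGER
);
CREATE TABLE comparison_run (
    run_id           INTEGER PRIMARY KEY,
    query_genome_id  INTEGER REFERENCES genome(genome_id),
    target_genome_id INTEGER REFERENCES genome(genome_id),
    dataset_version  TEXT NOT NULL,
    created_at       TEXT,
    status           TEXT CHECK (status IN ('completed', 'failed', 'pending'))
);
CREATE TABLE run_file (
    run_id        INTEGER REFERENCES comparison_run(run_id),
    file_kind     TEXT,
    file_path     TEXT,
    file_bytes    INTEGER,
    file_checksum TEXT,
    created_at    TEXT
);
CREATE TABLE search_catalog (
    dataset_version  TEXT,
    query_genome_id  INTEGER REFERENCES genome(genome_id),
    catalog_path     TEXT,
    catalog_bytes    INTEGER,
    catalog_checksum TEXT,
    created_at       TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(CENTRAL_SCHEMA)
```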
### Per-Genome Catalog Tables
#### `gene_to_run`
| Column | Type | Description |
|--------|------|-------------|
| `query_gene_id` | TEXT | Gene identifier |
| `target_genome_name` | TEXT | Target genome where matches exist |
| `run_id` | INTEGER | Reference to comparison_run |
| `hit_count` | INTEGER | Number of matches for this gene |
| `best_identity` | REAL | Highest identity score |
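A minimal sketch of the catalog table, with an index on `query_gene_id` (the index is an assumption, but it is what makes the "which targets have hits for gene X?" question a single index probe rather than a table scan):

```python
import sqlite3

# Illustrative schema for a per-genome catalog; the real build script may
# add constraints or a composite index.
CATALOG_SCHEMA = """
CREATE TABLE gene_to_run (
    query_gene_id      TEXT NOT NULL,
    target_genome_name TEXT NOT NULL,
    run_id             INTEGER,
    hit_count          INTEGER,
    best_identity      REAL
);
CREATE INDEX idx_gene_to_run_gene ON gene_to_run (query_gene_id);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(CATALOG_SCHEMA)
conn.execute(
    "INSERT INTO gene_to_run VALUES ('AT1G01010', 'glycine_max', 1, 12, 87.5)"
)
rows = conn.execute(
    "SELECT target_genome_name, hit_count FROM gene_to_run WHERE query_gene_id = ?",
    ("AT1G01010",),
).fetchall()
```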
## File Directionality
- **`.i1.blocks` files**: Directional. `A.B.i1.blocks` means query=A, target=B. Both A→B and B→A are stored as separate comparison runs.
- **`.last.filtered` files**: Contain matches in both directions. The same file is associated with both A→B and B→A runs.
- **`.lifted.anchors` files**: Similar to last.filtered, contain both directions.
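This directionality could be recorded as sketched below, using simplified stand-ins for the real tables: each direction gets its own `comparison_run` row, a directional `.i1.blocks` file attaches to one run, and a bidirectional `.last.filtered` file attaches to both.

```python
import sqlite3

# Simplified stand-ins for comparison_run and run_file to illustrate
# how directional vs. bidirectional files map onto runs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE comparison_run (run_id INTEGER PRIMARY KEY, query TEXT, target TEXT);
CREATE TABLE run_file (run_id INTEGER, file_kind TEXT, file_path TEXT);
""")
conn.execute("INSERT INTO comparison_run VALUES (1, 'A', 'B')")
conn.execute("INSERT INTO comparison_run VALUES (2, 'B', 'A')")

# Directional: each run references its own .i1.blocks file
conn.execute("INSERT INTO run_file VALUES (1, 'i1.blocks', 'A.B.i1.blocks')")
conn.execute("INSERT INTO run_file VALUES (2, 'i1.blocks', 'B.A.i1.blocks')")

# Bidirectional: the same .last.filtered file is linked to both runs
for run_id in (1, 2):
    conn.execute(
        "INSERT INTO run_file VALUES (?, 'last.filtered', 'A.B.last.filtered')",
        (run_id,),
    )
```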
## Example Usage
### Query gene targets from catalog
```python
import sqlite3
from path_config import SEARCH_CATALOGS_DIR

genome = "arabidopsis_thaliana"
gene_id = "AT1G01010"

conn = sqlite3.connect(SEARCH_CATALOGS_DIR / f"{genome}.catalog.sqlite")
cursor = conn.execute("""
    SELECT target_genome_name, hit_count, best_identity
    FROM gene_to_run
    WHERE query_gene_id = ?
""", (gene_id,))
for row in cursor:
    print(f"  {row[0]}: {row[1]} hits, best identity {row[2]:.1f}%")
conn.close()
```
### Get comparison files for a genome pair
```python
import sqlite3
from path_config import METADATA_DB_PATH, MCSCAN_RESULTS_DIR

conn = sqlite3.connect(METADATA_DB_PATH)
cursor = conn.execute("""
    SELECT rf.file_kind, rf.file_path
    FROM run_file rf
    JOIN comparison_run cr ON rf.run_id = cr.run_id
    JOIN genome gq ON cr.query_genome_id = gq.genome_id
    JOIN genome gt ON cr.target_genome_id = gt.genome_id
    WHERE gq.genome_name = ? AND gt.genome_name = ?
""", ("arabidopsis_thaliana", "glycine_max"))
for row in cursor:
    full_path = MCSCAN_RESULTS_DIR / row[1]
    print(f"  {row[0]}: {full_path}")
conn.close()
```
## Custom Genome Lifecycle
Custom genomes uploaded by users expire after 2 weeks. The expiration date is stored in `genome.expires_at`.
To clean up expired genomes:
```bash
python build_sqlite_metadata.py --cleanup-expired
```
This removes:
- Genome entry from `genome` table
- Associated comparison runs from `comparison_run` table
- File manifest entries from `run_file` table
- Search catalog entry from `search_catalog` table
- The actual catalog file from `search_catalogs/`
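The cleanup pass could look roughly like the sketch below. The function name and deletion order are assumptions; the real script also unlinks the catalog file from `search_catalogs/`, which this database-only sketch omits.

```python
import sqlite3
from datetime import datetime, timezone

def cleanup_expired(conn, now=None):
    """Remove all metadata for custom genomes whose expires_at has passed.

    Illustrative sketch; the real build_sqlite_metadata.py may differ.
    """
    now = now or datetime.now(timezone.utc).isoformat()
    expired = [r[0] for r in conn.execute(
        "SELECT genome_id FROM genome WHERE is_custom = 1 AND expires_at < ?",
        (now,))]
    for gid in expired:
        # run_file rows first, since they hang off comparison_run
        conn.execute("""DELETE FROM run_file WHERE run_id IN (
            SELECT run_id FROM comparison_run
            WHERE query_genome_id = ? OR target_genome_id = ?)""", (gid, gid))
        conn.execute(
            "DELETE FROM comparison_run WHERE query_genome_id = ? OR target_genome_id = ?",
            (gid, gid))
        conn.execute("DELETE FROM search_catalog WHERE query_genome_id = ?", (gid,))
        conn.execute("DELETE FROM genome WHERE genome_id = ?", (gid,))
    conn.commit()
    return expired
```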
## Versioning
The system supports multiple dataset versions via the `dataset_version` column. This allows running the pipeline with different parameters and storing results side-by-side.
Default version: `v1`
To build with a different version:
```bash
python build_sqlite_metadata.py --version v2
```
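Side-by-side versions work because every query filters on `dataset_version`, as in this simplified sketch (the table here is a stand-in for the real `comparison_run`):

```python
import sqlite3

# Runs for the same genome pair coexist under different dataset_version values.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE comparison_run (
    run_id INTEGER PRIMARY KEY, query TEXT, target TEXT, dataset_version TEXT)""")
conn.executemany(
    "INSERT INTO comparison_run (query, target, dataset_version) VALUES (?, ?, ?)",
    [("A", "B", "v1"), ("A", "B", "v2")],
)
# Selecting a version never sees the other version's runs
v2_runs = conn.execute(
    "SELECT run_id FROM comparison_run WHERE dataset_version = ?", ("v2",)
).fetchall()
```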
## Troubleshooting
### "No comparison runs found"
- Check that `Mcscan_results/protein_pairwise/i1_blocks/` contains `.i1.blocks` files
- Run with `--verbose` to see detailed discovery logs
### "Catalog not found for genome X"
- The genome may not have any outgoing comparisons
- Check if BED file exists in `bed_files/`
### "Gene lookup returns no results"
- Verify the gene ID format matches what's in the BED file
- Check if the genome has been processed (has comparisons)
## Integration Notes
The metadata system is designed to work alongside existing scripts without modification:
- Existing scripts continue to work by scanning files directly
- New/updated scripts can optionally use the metadata system for faster lookups
- The `path_config.py` module provides `SQL_DIR`, `SEARCH_CATALOGS_DIR`, and `METADATA_DB_PATH` constants
For cloud deployment, the SQLite files can be stored in S3 and downloaded/cached locally as needed.
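The download-and-cache pattern can be sketched storage-agnostically by injecting the fetch step; the function name is hypothetical, and in an S3 deployment `fetch` would wrap something like a boto3 client's `download_file`.

```python
from pathlib import Path

def cached_db(key, cache_dir, fetch):
    """Return a local path for `key`, downloading it only on first use.

    `fetch(key, dest_path)` is caller-supplied, e.g. an S3 download.
    """
    local = Path(cache_dir) / key
    if not local.exists():
        local.parent.mkdir(parents=True, exist_ok=True)
        fetch(key, local)  # e.g. s3.download_file(bucket, key, str(local))
    return local
```

Repeated calls for the same key hit the local copy without touching remote storage; a production version would also want checksum or mtime validation against the `file_checksum` values stored in the metadata database.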