Hack90 committed on
Commit aee08b4 · verified · 1 Parent(s): 660896e

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -67,3 +67,5 @@ proteins_below_4096bp.fasta filter=lfs diff=lfs merge=lfs -text
 repeats_results_full.csv filter=lfs diff=lfs merge=lfs -text
 bpe_plus_special_tokens_tokenizer.json filter=lfs diff=lfs merge=lfs -text
 faiss.index filter=lfs diff=lfs merge=lfs -text
+data/data/bpe_plus_special_tokens_tokenizer.json filter=lfs diff=lfs merge=lfs -text
+data/data/faiss.index filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,177 @@
# Semantic Search API

A production-ready semantic search service built with FastAPI. Upload your data (sequences + metadata), create embeddings automatically, and search using natural-language queries.

## Features

- **Semantic/Latent Search**: Find similar sequences based on meaning, not just keywords
- **FastAPI Backend**: Modern, fast, async Python web framework
- **FAISS Index**: Efficient similarity search at scale
- **Sentence Transformers**: State-of-the-art embedding models
- **Beautiful UI**: Dark-themed, responsive search interface
- **CSV Upload**: Easy data import via web interface or API
- **Persistent Storage**: Index persists across restarts

## Quick Start

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```

### 2. Run the Server

```bash
python app.py  # listens on port 8080
# or
uvicorn app:app --reload --host 0.0.0.0 --port 8000
```

### 3. Open the UI

Navigate to `http://localhost:8000` (or `http://localhost:8080` if you launched with `python app.py`) in your browser.

### 4. Upload Your Data

- Drag & drop a CSV file or click to browse
- Select the column containing your sequences
- Click "Create Index"
- Start searching!

## Data Format

Your CSV needs at least one column containing the text sequences you want to search. All other columns become searchable metadata.

Example:
```csv
sequence,category,source,date
"Machine learning is transforming industries",tech,blog,2024-01-15
"The quick brown fox jumps over the lazy dog",example,pangram,2024-01-10
"Embeddings capture semantic meaning",ml,paper,2024-01-20
```
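To sanity-check a CSV against this format before uploading, a short pandas sketch (using the example column names above, which are illustrative) can validate the sequence column and list the metadata columns:

```python
import io
import pandas as pd

# Example CSV matching the format above (in practice, read your own file).
csv_text = '''sequence,category,source,date
"Machine learning is transforming industries",tech,blog,2024-01-15
"The quick brown fox jumps over the lazy dog",example,pangram,2024-01-10
"Embeddings capture semantic meaning",ml,paper,2024-01-20
'''

df = pd.read_csv(io.StringIO(csv_text))

# The sequence column holds the searchable text; everything else is metadata.
sequence_column = "sequence"
assert sequence_column in df.columns, f"missing column: {sequence_column}"
metadata_columns = [c for c in df.columns if c != sequence_column]
print(metadata_columns)  # ['category', 'source', 'date']
```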
## API Endpoints

### Search
```bash
POST /api/search
Content-Type: application/json

{
  "query": "artificial intelligence",
  "top_k": 10
}
```

### Upload CSV
```bash
POST /api/upload-csv?sequence_column=text
Content-Type: multipart/form-data

file: your_data.csv
```

### Create Index (JSON)
```bash
POST /api/index
Content-Type: application/json

{
  "sequence_column": "text",
  "data": [
    {"text": "Hello world", "category": "greeting"},
    {"text": "Machine learning", "category": "tech"}
  ]
}
```

### Get Stats
```bash
GET /api/stats
```

### Get Sample
```bash
GET /api/sample?n=5
```

### Delete Index
```bash
DELETE /api/index
```
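Any HTTP client works against these endpoints. As an illustrative sketch (the host, port, and request shape follow the examples above), here is how the `/api/search` call could be assembled with Python's standard library:

```python
import json
import urllib.request

# Build the /api/search request body shown above.
payload = {"query": "artificial intelligence", "top_k": 10}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:8000/api/search",  # adjust host/port to your deployment
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending it requires a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["results"])
```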
## Programmatic Usage

You can also create indexes directly from Python:

```python
from create_index import create_index_from_dataframe, search_index
import pandas as pd

# Create your dataframe
df = pd.DataFrame({
    'sequence': [
        'The mitochondria is the powerhouse of the cell',
        'DNA stores genetic information',
        'Proteins are made of amino acids'
    ],
    'category': ['biology', 'genetics', 'biochemistry'],
    'difficulty': ['easy', 'medium', 'medium']
})

# Create the index
create_index_from_dataframe(df, sequence_column='sequence')

# Search
results = search_index("cellular energy production", top_k=3)
for r in results:
    print(f"Score: {r['score']:.3f} | {r['sequence'][:50]}...")
```

## Configuration

Edit these values in `app.py` to customize:

```python
# Embedding model (from sentence-transformers)
EMBEDDING_MODEL = "all-MiniLM-L6-v2"  # Fast, 384 dimensions

# Alternatives:
# "all-mpnet-base-v2"                      # Higher quality, 768 dimensions
# "paraphrase-multilingual-MiniLM-L12-v2"  # Multilingual support
# "all-MiniLM-L12-v2"                      # Balanced quality/speed
```

## Project Structure

```
semantic_search/
├── app.py              # FastAPI application
├── create_index.py     # Programmatic index creation
├── requirements.txt    # Python dependencies
├── static/
│   └── index.html      # Search UI
├── data/               # Created at runtime
│   ├── faiss.index     # FAISS index file
│   ├── metadata.pkl    # DataFrame with metadata
│   └── embeddings.npy  # Raw embeddings (optional)
└── README.md
```

## How It Works

1. **Embedding Creation**: When you upload data, each sequence is converted to a dense vector (embedding) by a sentence-transformer model.
2. **FAISS Indexing**: The embeddings are stored in a FAISS index optimized for similarity search.
3. **Search**: Your query is embedded with the same model, then FAISS finds the most similar vectors using cosine similarity.
4. **Results**: The original sequences and metadata are returned, ranked by similarity.
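The search step above can be sketched with plain NumPy (illustrative only: FAISS computes the same inner products at scale, and inner product equals cosine similarity only after L2-normalizing the vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for document embeddings (5 docs, dim 8) and a query embedding
# that is a lightly perturbed copy of document 2.
doc_embeddings = rng.normal(size=(5, 8)).astype(np.float32)
query = doc_embeddings[2] + 0.01 * rng.normal(size=8).astype(np.float32)

def normalize(x):
    """L2-normalize along the last axis so inner product = cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

docs_n = normalize(doc_embeddings)
query_n = normalize(query)

scores = docs_n @ query_n         # cosine similarity of query vs. every doc
top_k = np.argsort(-scores)[:3]   # indices of the 3 most similar documents
print(top_k[0])                   # expected: document 2 ranks first
```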
## Performance Tips

- **Model Choice**: `all-MiniLM-L6-v2` is fast and good for most use cases. Use `all-mpnet-base-v2` for higher quality at the cost of speed.
- **Batch Size**: For large datasets, the model processes sequences in batches automatically.
- **GPU**: If you have a CUDA-capable GPU, install `faiss-gpu` instead of `faiss-cpu` for faster indexing.

## License

MIT
app.py ADDED
@@ -0,0 +1,267 @@
"""
Genomic Semantic Search API with FastAPI
=========================================
Search genomic sequences using your pre-trained transformer embeddings.
"""

import pickle
from pathlib import Path
from typing import Optional

import numpy as np
import pandas as pd
from fastapi import FastAPI, HTTPException
from fastapi.staticfiles import StaticFiles
from fastapi.responses import HTMLResponse, FileResponse
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
import faiss
import torch
import torch.nn as nn
from x_transformers import TransformerWrapper, Encoder
import tiktoken

# ============================================================================
# Configuration
# ============================================================================

DATA_DIR = Path("data")
INDEX_PATH = DATA_DIR / "data/faiss.index"
METADATA_PATH = DATA_DIR / "data/metadata.pkl"
EMBEDDINGS_PATH = DATA_DIR / "data/embeddings.npy"

# Model paths - update these to your actual paths
MODEL_WEIGHTS_PATH = DATA_DIR / "data/bpe_plus_special_tokens_model.pt"
TOKENIZER_PATH = DATA_DIR / "data/bpe_plus_special_tokens_tokenizer.json"

# ============================================================================
# Model Definition
# ============================================================================

class GenomicTransformer(nn.Module):
    def __init__(self, vocab_size=40000, hidden_dim=512, layers=12, heads=8, max_length=6000):
        super().__init__()
        self.model = TransformerWrapper(
            num_tokens=vocab_size,
            max_seq_len=max_length,
            attn_layers=Encoder(
                dim=hidden_dim,
                depth=layers,
                heads=heads,
                rotary_pos_emb=True,
                attn_orthog_projected_values=True,
                attn_orthog_projected_values_per_head=True,
                attn_flash=True
            )
        )

    def forward(self, input_ids, return_embeddings=False):
        return self.model(input_ids, return_embeddings=return_embeddings)

# ============================================================================
# App Setup
# ============================================================================

app = FastAPI(
    title="Genomic Semantic Search",
    description="Search genomic sequences using transformer embeddings",
    version="1.0.0"
)

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Global state
device: Optional[torch.device] = None
model: Optional[GenomicTransformer] = None
encoder: Optional[tiktoken.Encoding] = None
index: Optional[faiss.IndexFlatIP] = None
metadata: Optional[pd.DataFrame] = None

# ============================================================================
# Models
# ============================================================================

class SearchRequest(BaseModel):
    query: str  # The genomic sequence to search for
    top_k: int = 10

class SearchResult(BaseModel):
    rank: int
    score: float
    sequence: str
    metadata: dict

class SearchResponse(BaseModel):
    query: str
    results: list[SearchResult]
    total_indexed: int

class IndexStats(BaseModel):
    total_documents: int
    embedding_dimension: int
    model_name: str
    device: str

# ============================================================================
# Startup
# ============================================================================

@app.on_event("startup")
async def startup():
    """Load the model, tokenizer, and FAISS index on startup."""
    global device, model, encoder, index, metadata

    # Setup device
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Using device: {device}")

    # Load tokenizer
    print("Loading tokenizer...")
    if TOKENIZER_PATH.exists():
        with open(TOKENIZER_PATH, "rb") as f:
            tokenizer_data = pickle.load(f)
        encoder = tiktoken.Encoding(
            name="genomic_bpe",
            pat_str=tokenizer_data['pattern'],
            mergeable_ranks=tokenizer_data['mergable_ranks'],
            special_tokens={}
        )
        print("Tokenizer loaded successfully")
    else:
        print(f"WARNING: Tokenizer not found at {TOKENIZER_PATH}")

    # Load model
    print("Loading model...")
    if MODEL_WEIGHTS_PATH.exists():
        model = GenomicTransformer(
            vocab_size=40_000, hidden_dim=512, layers=12, heads=8
        )
        weights = torch.load(MODEL_WEIGHTS_PATH, map_location=device)
        model.load_state_dict(weights)
        model = model.to(device)
        model.eval()
        print("Model loaded successfully")
    else:
        print(f"WARNING: Model weights not found at {MODEL_WEIGHTS_PATH}")

    # Load FAISS index
    if INDEX_PATH.exists() and METADATA_PATH.exists():
        print("Loading FAISS index...")
        index = faiss.read_index(str(INDEX_PATH))
        with open(METADATA_PATH, "rb") as f:
            metadata = pickle.load(f)
        print(f"Index loaded with {index.ntotal} documents")
    else:
        print(f"WARNING: Index not found at {INDEX_PATH}")

# ============================================================================
# API Endpoints
# ============================================================================

@app.get("/", response_class=HTMLResponse)
async def root():
    """Serve the search frontend."""
    return FileResponse("index.html")

@app.get("/api/health")
async def health():
    """Health check endpoint."""
    return {
        "status": "healthy",
        "model_loaded": model is not None,
        "index_loaded": index is not None,
        "tokenizer_loaded": encoder is not None,
        "device": str(device)
    }

@app.get("/api/stats", response_model=IndexStats)
async def get_stats():
    """Get statistics about the current index."""
    if index is None:
        raise HTTPException(status_code=404, detail="No index loaded")

    return IndexStats(
        total_documents=index.ntotal,
        embedding_dimension=index.d,
        model_name="GenomicTransformer (512d, 12 layers)",
        device=str(device)
    )

@app.post("/api/search", response_model=SearchResponse)
async def search(request: SearchRequest):
    """
    Perform semantic search over genomic sequences.

    - **query**: The genomic sequence to search for (e.g., "ATCGATCG...")
    - **top_k**: Number of results to return (default: 10)
    """
    if index is None or metadata is None:
        raise HTTPException(status_code=404, detail="No index loaded")
    if model is None or encoder is None:
        raise HTTPException(status_code=503, detail="Model or tokenizer not loaded")
    if index.ntotal == 0:
        raise HTTPException(status_code=404, detail="Index is empty")

    # Encode the query sequence
    try:
        encodings = encoder.encode_ordinary(request.query)
        query_tensor = torch.tensor([encodings]).long().to(device)

        with torch.no_grad():
            query_embedding = model(query_tensor, return_embeddings=True)
            query_embedding = query_embedding.mean(dim=1).cpu().numpy()

        query_embedding = query_embedding.astype(np.float32)
    except Exception as e:
        raise HTTPException(status_code=400, detail=f"Failed to encode query: {str(e)}")

    # Search
    k = min(request.top_k, index.ntotal)
    scores, indices = index.search(query_embedding, k)

    # Build results
    results = []
    for rank, (score, idx) in enumerate(zip(scores[0], indices[0]), 1):
        if idx == -1:
            continue

        row = metadata.iloc[idx]
        meta_dict = row.to_dict()
        sequence = meta_dict.pop("__sequence__", "")

        results.append(SearchResult(
            rank=rank,
            score=float(score),
            sequence=sequence,
            metadata=meta_dict
        ))

    return SearchResponse(
        query=request.query[:100] + "..." if len(request.query) > 100 else request.query,
        results=results,
        total_indexed=index.ntotal
    )

@app.get("/api/sample")
async def get_sample(n: int = 5):
    """Get a sample of indexed documents."""
    if metadata is None:
        raise HTTPException(status_code=404, detail="No index loaded")

    sample = metadata.head(n)
    return {
        "total": len(metadata),
        "sample": sample.to_dict(orient="records")
    }

# Mount static files
# app.mount("/static", StaticFiles(directory="static"), name="static")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8080)
create_index.py ADDED
@@ -0,0 +1,254 @@
"""
Programmatic Index Creation
===========================
Use this script to create an index from a DataFrame without the web interface.

Example usage:
    python create_index.py --sequence-column text --batch-size 8

Or use programmatically:
    from create_index import create_index_from_dataframe
    import pandas as pd

    df = pd.DataFrame({
        'sequence': ['Hello world', 'Machine learning is great', ...],
        'category': ['greeting', 'tech', ...],
        'id': [1, 2, ...]
    })

    create_index_from_dataframe(df, sequence_column='sequence')
"""

import argparse
import pickle
from pathlib import Path
from pickle import load

import numpy as np
import pandas as pd
import tiktoken
import faiss
import torch
import torch.nn as nn
from x_transformers import TransformerWrapper, Encoder
from tqdm import tqdm

# Set device and check available GPUs
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpus = torch.cuda.device_count()
print(f"Using device: {device}")
print(f"Available GPUs: {n_gpus}")
for i in range(n_gpus):
    print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")


class GenomicTransformer(nn.Module):
    def __init__(self, vocab_size=40000, hidden_dim=32, layers=2, heads=3, max_length=6000):
        super().__init__()
        self.model = TransformerWrapper(
            num_tokens=vocab_size,
            max_seq_len=max_length,
            attn_layers=Encoder(
                dim=hidden_dim,
                depth=layers,
                heads=heads,
                rotary_pos_emb=True,
                attn_orthog_projected_values=True,
                attn_orthog_projected_values_per_head=True,
                attn_flash=True
            )
        )

    def forward(self, input_ids, return_embeddings=False):
        return self.model(input_ids, return_embeddings=return_embeddings)


# Configuration - must match app.py (note: app.py currently reads from data/data/)
DATA_DIR = Path("data")
DATA_DIR.mkdir(exist_ok=True)
INDEX_PATH = DATA_DIR / "faiss.index"
METADATA_PATH = DATA_DIR / "metadata.pkl"
EMBEDDINGS_PATH = DATA_DIR / "embeddings.npy"
EMBEDDING_MODEL = "all-MiniLM-L6-v2"

# Load the tokenizer components once (pattern + merge ranks)
TOKENIZER_COMPONENTS_PATH = "/user/hassanahmed.hassan/u21055/.project/dir.project/towards_better_genomic_models/data/tokenizer_components_bpe_with_repeats.pkl"
with open(TOKENIZER_COMPONENTS_PATH, "rb") as f:
    tokenizer_components = load(f)
pattern = tokenizer_components['pattern']
mergable_ranks = tokenizer_components['mergable_ranks']

recreated_enc = tiktoken.Encoding(
    name="genomic_bpe_recreated",
    pat_str=pattern,
    mergeable_ranks=mergable_ranks,
    special_tokens={}
)

# Initialize model
MODEL = GenomicTransformer(
    vocab_size=40_000, hidden_dim=512, layers=12, heads=8
)

# Wrap with DataParallel if multiple GPUs available
if n_gpus > 1:
    print(f"Using DataParallel across {n_gpus} GPUs")
    MODEL = nn.DataParallel(MODEL)

MODEL = MODEL.to(device)
MODEL.eval()


def create_index_from_dataframe(
    df: pd.DataFrame,
    sequence_column: str = "sequence",
    model=MODEL,
    encoder=recreated_enc,
    batch_size: int = 8  # per-GPU batch size; scaled up below for multi-GPU
) -> dict:
    """
    Create a FAISS index from a pandas DataFrame.

    Args:
        df: DataFrame containing sequences and metadata
        sequence_column: Name of the column containing text sequences
        model: The transformer model to use
        encoder: The tokenizer/encoder
        batch_size: Batch size for encoding (increase for multi-GPU)

    Returns:
        dict with index statistics
    """
    if sequence_column not in df.columns:
        raise ValueError(f"Column '{sequence_column}' not found. Available: {list(df.columns)}")

    # Get sequences (NOTE: only the first 10 rows are indexed - debug limit)
    sequences = df[sequence_column].astype(str).tolist()[:10]
    df = df.iloc[:10].copy()
    df["__sequence__"] = sequences

    # Create embeddings
    print(f"Creating embeddings for {len(sequences)} sequences...")
    encodings = encoder.encode_batch(sequences)
    embeddings = []
    print(f"Total encodings: {len(encodings)}")

    # Scale the batch size with the number of GPUs for efficiency
    effective_batch_size = batch_size * n_gpus if n_gpus > 1 else batch_size
    print(f"Using effective batch size: {effective_batch_size}")

    for i in tqdm(range(0, len(encodings), effective_batch_size)):
        batch_encodings = encodings[i:i + effective_batch_size]
        # Pad to the max length in the batch
        max_len = max(len(enc) for enc in batch_encodings)
        batch_encodings = [enc + [0] * (max_len - len(enc)) for enc in batch_encodings]

        # Move tensor to GPU
        batch_tensor = torch.tensor(batch_encodings).long().to(device)

        with torch.no_grad():
            # Mean-pool token embeddings into one vector per sequence,
            # then move back to CPU for numpy conversion
            batch_embeddings = model(batch_tensor, return_embeddings=True)
            batch_embeddings = batch_embeddings.mean(dim=1).cpu().numpy().tolist()

        embeddings.extend(batch_embeddings)

    embeddings = np.array(embeddings).astype(np.float32)

    # Create FAISS index. Inner product equals cosine similarity only for
    # L2-normalized vectors; the embeddings here are NOT normalized, so
    # search scores are raw inner products.
    dimension = embeddings.shape[1]
    index = faiss.IndexFlatIP(dimension)
    index.add(embeddings)

    # Save everything
    print("Saving index to disk...")
    faiss.write_index(index, str(INDEX_PATH))
    with open(METADATA_PATH, "wb") as f:
        pickle.dump(df, f)
    np.save(EMBEDDINGS_PATH, embeddings)

    stats = {
        "documents_indexed": index.ntotal,
        "embedding_dimension": dimension,
        "model": "GenomicTransformer",
        "index_path": str(INDEX_PATH),
        "metadata_path": str(METADATA_PATH),
        "gpus_used": n_gpus
    }

    print("Index created successfully!")
    print(f"  - Documents: {stats['documents_indexed']}")
    print(f"  - Dimensions: {stats['embedding_dimension']}")
    print(f"  - GPUs used: {stats['gpus_used']}")

    return stats


def search_index(
    query: str,
    top_k: int = 10,
    model=MODEL,
    encoder=recreated_enc
) -> list[dict]:
    """Search the index for similar sequences."""
    if not INDEX_PATH.exists():
        raise FileNotFoundError("No index found. Create one first with create_index_from_dataframe()")

    # Load resources
    index = faiss.read_index(str(INDEX_PATH))
    with open(METADATA_PATH, "rb") as f:
        metadata = pickle.load(f)

    # Encode query
    encodings = encoder.encode_ordinary(query)
    query_tensor = torch.tensor([encodings]).long().to(device)

    with torch.no_grad():
        query_embedding = model(query_tensor, return_embeddings=True).mean(dim=1).cpu().numpy()

    query_embedding = query_embedding.astype(np.float32)

    # Search
    k = min(top_k, index.ntotal)
    scores, indices = index.search(query_embedding, k)

    # Build results
    results = []
    for score, idx in zip(scores[0], indices[0]):
        if idx == -1:
            continue

        row = metadata.iloc[idx].to_dict()
        sequence = row.pop("__sequence__", "")

        results.append({
            "score": float(score),
            "sequence": sequence,
            "metadata": row
        })

    return results


def main():
    parser = argparse.ArgumentParser(description="Create semantic search index from CSV")
    parser.add_argument("--sequence-column", "-c", default="seq_with_repeat_tokens", help="Column containing sequences")
    parser.add_argument("--batch-size", "-b", type=int, default=8, help="Batch size per GPU")

    args = parser.parse_args()

    df = pd.read_parquet("/user/hassanahmed.hassan/u21055/.project/dir.project/towards_better_genomic_models/data/sample.parquet")
    print(f"Loaded {len(df)} rows with columns: {list(df.columns)}")

    create_index_from_dataframe(df, args.sequence_column, MODEL, recreated_enc, batch_size=args.batch_size)


if __name__ == "__main__":
    main()
data/data/bpe_plus_special_tokens_model.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2811ae2bb5890b033f326622ded51fca54461016389ea06578760418b3df14de
size 327656691
data/data/bpe_plus_special_tokens_tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7b01b9201adfc0892e867bb5ece10d7c83bc1c740f595d0452008288905fcb4d
size 842903
data/data/embeddings.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:134f630a923a588597432a3297ad2bb099becb7ab631b6ab30de9dc120389a79
size 204800128
data/data/faiss.index ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3e7aa815134b8aa8715c63505d3d2e34681aadd748e81d764939d61c3589625f
size 204800045
data/data/metadata.pkl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:09e1530cca7fd6c444fa8486a0671656cf9dd37408a10fd7dc6cf8d86dcf97cf
size 1069945904
data/data/repeats_results_small.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b5c6cfa93298110b4d72bedbca1fce0848a0937fde8ed72acef3d6ad8abbb58b
size 155282
data/data/sample_small.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cc0d45175022214d02fe8d671a8cdf70f72d7bf45b5d5993af2b8696cb0a97d2
size 4869635
index.html ADDED
@@ -0,0 +1,704 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Genomic Sequence Search</title>
    <link rel="preconnect" href="https://fonts.googleapis.com">
    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
    <link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;500;600&family=Outfit:wght@300;400;500;600;700&display=swap" rel="stylesheet">
    <style>
        :root {
            --bg-primary: #0a0a0b;
            --bg-secondary: #111113;
            --bg-tertiary: #18181b;
            --bg-hover: #1f1f23;
            --border: #27272a;
            --border-focus: #3f3f46;
            --text-primary: #fafafa;
            --text-secondary: #a1a1aa;
            --text-muted: #71717a;
            --accent: #10b981;
            --accent-dim: #059669;
            --accent-glow: rgba(16, 185, 129, 0.15);
            --dna-a: #22d3ee;
            --dna-t: #f472b6;
            --dna-c: #a78bfa;
            --dna-g: #fbbf24;
            --success: #4ade80;
            --error: #f87171;
            --gradient-1: linear-gradient(135deg, #10b981 0%, #22d3ee 100%);
            --radius-sm: 6px;
            --radius-md: 10px;
            --radius-lg: 16px;
        }

        * {
            margin: 0;
            padding: 0;
            box-sizing: border-box;
        }

        body {
            font-family: 'Outfit', -apple-system, BlinkMacSystemFont, sans-serif;
            background: var(--bg-primary);
            color: var(--text-primary);
            min-height: 100vh;
            line-height: 1.6;
        }

        body::before {
            content: '';
            position: fixed;
            top: 0;
            left: 0;
            right: 0;
            bottom: 0;
            background-image:
                linear-gradient(rgba(16, 185, 129, 0.03) 1px, transparent 1px),
                linear-gradient(90deg, rgba(16, 185, 129, 0.03) 1px, transparent 1px);
            background-size: 40px 40px;
            pointer-events: none;
            z-index: 0;
        }

        .container {
            max-width: 1000px;
            margin: 0 auto;
            padding: 2rem;
            position: relative;
            z-index: 1;
        }

        header {
            text-align: center;
            padding: 2rem 0 1.5rem;
        }

        .logo {
            display: inline-flex;
            align-items: center;
            gap: 0.75rem;
            margin-bottom: 0.75rem;
        }

        .logo-icon {
            width: 52px;
            height: 52px;
            background: var(--gradient-1);
            border-radius: var(--radius-md);
            display: flex;
            align-items: center;
            justify-content: center;
            font-size: 1.75rem;
        }

        h1 {
            font-size: 2.25rem;
            font-weight: 600;
            letter-spacing: -0.03em;
            background: var(--gradient-1);
            -webkit-background-clip: text;
            -webkit-text-fill-color: transparent;
            background-clip: text;
        }

        .subtitle {
            color: var(--text-secondary);
            font-size: 1rem;
            font-weight: 300;
            margin-top: 0.25rem;
        }

        .stats-bar {
            display: flex;
            justify-content: center;
            gap: 2.5rem;
            padding: 1rem 0;
            margin-bottom: 1.5rem;
            border-bottom: 1px solid var(--border);
        }

        .stat {
            text-align: center;
        }

        .stat-value {
            font-family: 'JetBrains Mono', monospace;
            font-size: 1.25rem;
            font-weight: 500;
            color: var(--accent);
        }

        .stat-label {
            font-size: 0.7rem;
            color: var(--text-muted);
            text-transform: uppercase;
            letter-spacing: 0.08em;
            margin-top: 0.2rem;
        }

        .search-section {
            background: var(--bg-secondary);
            border: 1px solid var(--border);
            border-radius: var(--radius-lg);
            padding: 1.5rem;
            margin-bottom: 1.5rem;
        }

        .search-label {
            display: block;
            font-size: 0.85rem;
            font-weight: 500;
            color: var(--text-secondary);
            margin-bottom: 0.75rem;
        }

        .search-textarea {
            width: 100%;
            min-height: 120px;
            padding: 1rem;
            font-family: 'JetBrains Mono', monospace;
            font-size: 0.9rem;
            background: var(--bg-primary);
            border: 1px solid var(--border);
            border-radius: var(--radius-md);
            color: var(--text-primary);
            resize: vertical;
            transition: all 0.2s ease;
            line-height: 1.6;
        }

        .search-textarea:focus {
            outline: none;
            border-color: var(--accent);
            box-shadow: 0 0 0 3px var(--accent-glow);
        }

        .search-textarea::placeholder {
            color: var(--text-muted);
        }

        .search-controls {
            display: flex;
            justify-content: space-between;
            align-items: center;
            margin-top: 1rem;
            gap: 1rem;
        }

        .char-count {
            font-family: 'JetBrains Mono', monospace;
            font-size: 0.8rem;
            color: var(--text-muted);
        }

        .search-actions {
            display: flex;
            gap: 0.75rem;
            align-items: center;
        }

        .top-k-select {
            padding: 0.6rem 0.75rem;
            background: var(--bg-tertiary);
            border: 1px solid var(--border);
            border-radius: var(--radius-sm);
            color: var(--text-primary);
            font-family: inherit;
            font-size: 0.85rem;
            cursor: pointer;
        }

        .search-btn {
            padding: 0.75rem 2rem;
            background: var(--gradient-1);
            border: none;
            border-radius: var(--radius-md);
            color: var(--bg-primary);
            font-family: inherit;
            font-size: 0.95rem;
            font-weight: 600;
            cursor: pointer;
            transition: all 0.2s ease;
        }

        .search-btn:hover {
            opacity: 0.9;
            transform: scale(1.02);
        }

        .search-btn:disabled {
            opacity: 0.5;
            cursor: not-allowed;
            transform: none;
        }

        .clear-btn {
            padding: 0.6rem 1rem;
            background: transparent;
            border: 1px solid var(--border);
            border-radius: var(--radius-sm);
242
+ color: var(--text-secondary);
243
+ font-family: inherit;
244
+ font-size: 0.85rem;
245
+ cursor: pointer;
246
+ transition: all 0.2s ease;
247
+ }
248
+
249
+ .clear-btn:hover {
250
+ border-color: var(--border-focus);
251
+ color: var(--text-primary);
252
+ }
253
+
254
+ .results-container {
255
+ margin-top: 1rem;
256
+ }
257
+
258
+ .results-header {
259
+ display: flex;
260
+ justify-content: space-between;
261
+ align-items: center;
262
+ margin-bottom: 1rem;
263
+ padding-bottom: 0.75rem;
264
+ border-bottom: 1px solid var(--border);
265
+ }
266
+
267
+ .results-count {
268
+ color: var(--text-secondary);
269
+ font-size: 0.9rem;
270
+ }
271
+
272
+ .results-count strong {
273
+ color: var(--text-primary);
274
+ }
275
+
276
+ .result-card {
277
+ background: var(--bg-secondary);
278
+ border: 1px solid var(--border);
279
+ border-radius: var(--radius-md);
280
+ padding: 1.25rem;
281
+ margin-bottom: 0.75rem;
282
+ transition: all 0.2s ease;
283
+ animation: slideIn 0.3s ease forwards;
284
+ opacity: 0;
285
+ transform: translateY(10px);
286
+ }
287
+
288
+ .result-card:hover {
289
+ border-color: var(--border-focus);
290
+ background: var(--bg-tertiary);
291
+ }
292
+
293
+ @keyframes slideIn {
294
+ to {
295
+ opacity: 1;
296
+ transform: translateY(0);
297
+ }
298
+ }
299
+
300
+ .result-header {
301
+ display: flex;
302
+ justify-content: space-between;
303
+ align-items: center;
304
+ margin-bottom: 0.75rem;
305
+ }
306
+
307
+ .result-rank {
308
+ display: inline-flex;
309
+ align-items: center;
310
+ justify-content: center;
311
+ width: 32px;
312
+ height: 32px;
313
+ background: var(--bg-primary);
314
+ border-radius: var(--radius-sm);
315
+ font-family: 'JetBrains Mono', monospace;
316
+ font-size: 0.85rem;
317
+ font-weight: 600;
318
+ color: var(--text-secondary);
319
+ }
320
+
321
+ .result-rank.top-3 {
322
+ background: var(--accent-glow);
323
+ color: var(--accent);
324
+ }
325
+
326
+ .result-score {
327
+ font-family: 'JetBrains Mono', monospace;
328
+ font-size: 0.85rem;
329
+ color: var(--accent);
330
+ background: var(--accent-glow);
331
+ padding: 0.3rem 0.85rem;
332
+ border-radius: var(--radius-sm);
333
+ }
334
+
335
+ .result-sequence {
336
+ font-family: 'JetBrains Mono', monospace;
337
+ font-size: 0.85rem;
338
+ color: var(--text-primary);
339
+ background: var(--bg-primary);
340
+ padding: 0.85rem 1rem;
341
+ border-radius: var(--radius-sm);
342
+ margin-bottom: 0.75rem;
343
+ word-break: break-all;
344
+ line-height: 1.7;
345
+ max-height: 120px;
346
+ overflow-y: auto;
347
+ letter-spacing: 0.5px;
348
+ }
349
+
350
+ .result-metadata {
351
+ display: flex;
352
+ flex-wrap: wrap;
353
+ gap: 0.5rem;
354
+ }
355
+
356
+ .metadata-tag {
357
+ display: inline-flex;
358
+ align-items: center;
359
+ gap: 0.4rem;
360
+ padding: 0.35rem 0.75rem;
361
+ background: var(--bg-primary);
362
+ border-radius: var(--radius-sm);
363
+ font-size: 0.8rem;
364
+ }
365
+
366
+ .metadata-key {
367
+ color: var(--text-muted);
368
+ }
369
+
370
+ .metadata-value {
371
+ color: var(--text-secondary);
372
+ font-family: 'JetBrains Mono', monospace;
373
+ max-width: 200px;
374
+ overflow: hidden;
375
+ text-overflow: ellipsis;
376
+ white-space: nowrap;
377
+ }
378
+
379
+ .loading {
380
+ display: flex;
381
+ flex-direction: column;
382
+ align-items: center;
383
+ justify-content: center;
384
+ gap: 1rem;
385
+ padding: 3rem;
386
+ color: var(--text-secondary);
387
+ }
388
+
389
+ .spinner {
390
+ width: 32px;
391
+ height: 32px;
392
+ border: 3px solid var(--border);
393
+ border-top-color: var(--accent);
394
+ border-radius: 50%;
395
+ animation: spin 0.8s linear infinite;
396
+ }
397
+
398
+ @keyframes spin {
399
+ to { transform: rotate(360deg); }
400
+ }
401
+
402
+ .message {
403
+ padding: 1rem 1.25rem;
404
+ border-radius: var(--radius-md);
405
+ margin-bottom: 1rem;
406
+ font-size: 0.9rem;
407
+ }
408
+
409
+ .message.error {
410
+ background: rgba(248, 113, 113, 0.1);
411
+ border: 1px solid rgba(248, 113, 113, 0.2);
412
+ color: var(--error);
413
+ }
414
+
415
+ .message.info {
416
+ background: rgba(16, 185, 129, 0.1);
417
+ border: 1px solid rgba(16, 185, 129, 0.2);
418
+ color: var(--accent);
419
+ }
420
+
421
+ .empty-state {
422
+ text-align: center;
423
+ padding: 4rem 2rem;
424
+ color: var(--text-muted);
425
+ }
426
+
427
+ .empty-state-icon {
428
+ font-size: 3.5rem;
429
+ margin-bottom: 1rem;
430
+ opacity: 0.4;
431
+ }
432
+
433
+ .empty-state p {
434
+ font-size: 1rem;
435
+ }
436
+
437
+ .example-queries {
438
+ margin-top: 1.5rem;
439
+ }
440
+
441
+ .example-queries h4 {
442
+ font-size: 0.8rem;
443
+ color: var(--text-muted);
444
+ text-transform: uppercase;
445
+ letter-spacing: 0.05em;
446
+ margin-bottom: 0.75rem;
447
+ }
448
+
449
+ .example-btn {
450
+ display: inline-block;
451
+ padding: 0.5rem 1rem;
452
+ margin: 0.25rem;
453
+ background: var(--bg-tertiary);
454
+ border: 1px solid var(--border);
455
+ border-radius: var(--radius-sm);
456
+ color: var(--text-secondary);
457
+ font-family: 'JetBrains Mono', monospace;
458
+ font-size: 0.75rem;
459
+ cursor: pointer;
460
+ transition: all 0.2s ease;
461
+ }
462
+
463
+ .example-btn:hover {
464
+ border-color: var(--accent);
465
+ color: var(--accent);
466
+ }
467
+
468
+ @media (max-width: 640px) {
469
+ .container {
470
+ padding: 1rem;
471
+ }
472
+
473
+ h1 {
474
+ font-size: 1.5rem;
475
+ }
476
+
477
+ .stats-bar {
478
+ gap: 1.5rem;
479
+ }
480
+
481
+ .search-controls {
482
+ flex-direction: column;
483
+ align-items: stretch;
484
+ }
485
+
486
+ .search-actions {
487
+ justify-content: space-between;
488
+ }
489
+ }
490
+ </style>
491
+ </head>
492
+ <body>
493
+ <div class="container">
494
+ <header>
495
+ <div class="logo">
496
+ <div class="logo-icon">🧬</div>
497
+ </div>
498
+ <h1>Genomic Sequence Search</h1>
499
+ <p class="subtitle">Find similar sequences using transformer embeddings</p>
500
+ </header>
501
+
502
+ <div class="stats-bar">
503
+ <div class="stat">
504
+ <div class="stat-value" id="doc-count">—</div>
505
+ <div class="stat-label">Sequences</div>
506
+ </div>
507
+ <div class="stat">
508
+ <div class="stat-value" id="dim-count">—</div>
509
+ <div class="stat-label">Dimensions</div>
510
+ </div>
511
+ <div class="stat">
512
+ <div class="stat-value" id="device-info">—</div>
513
+ <div class="stat-label">Device</div>
514
+ </div>
515
+ </div>
516
+
517
+ <div id="message-container"></div>
518
+
519
+ <div class="search-section">
520
+ <label class="search-label">Enter a genomic sequence to search:</label>
521
+ <textarea
522
+ class="search-textarea"
523
+ id="search-input"
524
+ placeholder="Paste your genomic sequence here (e.g., ATCGATCGATCG...)"
525
+ spellcheck="false"
526
+ ></textarea>
527
+ <div class="search-controls">
528
+ <span class="char-count"><span id="char-count">0</span> characters</span>
529
+ <div class="search-actions">
530
+ <button class="clear-btn" onclick="clearSearch()">Clear</button>
531
+ <select class="top-k-select" id="top-k">
532
+ <option value="5">Top 5</option>
533
+ <option value="10" selected>Top 10</option>
534
+ <option value="20">Top 20</option>
535
+ <option value="50">Top 50</option>
536
+ </select>
537
+ <button class="search-btn" id="search-btn" onclick="search()">
538
+ Search
539
+ </button>
540
+ </div>
541
+ </div>
542
+ </div>
543
+
544
+ <div class="results-container" id="results-container">
545
+ <div class="empty-state">
546
+ <div class="empty-state-icon">🔬</div>
547
+ <p>Enter a sequence above to find similar matches</p>
548
+ <div class="example-queries">
549
+ <h4>Try an example</h4>
550
+ <button class="example-btn" onclick="loadExample('ATCGATCGATCGATCGATCG')">ATCGATCG...</button>
551
+ <button class="example-btn" onclick="loadExample('GCTAGCTAGCTAGCTAGCTA')">GCTAGCTA...</button>
552
+ <button class="example-btn" onclick="loadExample('AAAATTTTCCCCGGGGAAAA')">AAAATTTT...</button>
553
+ </div>
554
+ </div>
555
+ </div>
556
+ </div>
557
+
558
+ <script>
559
+ const API_BASE = '';
560
+
561
+ document.addEventListener('DOMContentLoaded', () => {
562
+ loadStats();
563
+
564
+ const input = document.getElementById('search-input');
565
+ input.addEventListener('input', updateCharCount);
566
+ input.addEventListener('keydown', (e) => {
567
+ if (e.key === 'Enter' && e.ctrlKey) {
568
+ search();
569
+ }
570
+ });
571
+ });
572
+
573
+ async function loadStats() {
574
+ try {
575
+ const res = await fetch(`${API_BASE}/api/stats`);
576
+ if (res.ok) {
577
+ const data = await res.json();
578
+ document.getElementById('doc-count').textContent = data.total_documents.toLocaleString();
579
+ document.getElementById('dim-count').textContent = data.embedding_dimension;
580
+ document.getElementById('device-info').textContent = data.device.toUpperCase();
581
+ }
582
+ } catch (e) {
583
+ console.log('Could not load stats:', e);
584
+ }
585
+ }
586
+
587
+ function updateCharCount() {
588
+ const count = document.getElementById('search-input').value.length;
589
+ document.getElementById('char-count').textContent = count.toLocaleString();
590
+ }
591
+
592
+ function clearSearch() {
593
+ document.getElementById('search-input').value = '';
594
+ updateCharCount();
595
+ document.getElementById('results-container').innerHTML = `
596
+ <div class="empty-state">
597
+ <div class="empty-state-icon">🔬</div>
598
+ <p>Enter a sequence above to find similar matches</p>
599
+ <div class="example-queries">
600
+ <h4>Try an example</h4>
601
+ <button class="example-btn" onclick="loadExample('ATCGATCGATCGATCGATCG')">ATCGATCG...</button>
602
+ <button class="example-btn" onclick="loadExample('GCTAGCTAGCTAGCTAGCTA')">GCTAGCTA...</button>
603
+ <button class="example-btn" onclick="loadExample('AAAATTTTCCCCGGGGAAAA')">AAAATTTT...</button>
604
+ </div>
605
+ </div>
606
+ `;
607
+ }
608
+
609
+ function loadExample(seq) {
610
+ document.getElementById('search-input').value = seq;
611
+ updateCharCount();
612
+ search();
613
+ }
614
+
615
+ async function search() {
616
+ const query = document.getElementById('search-input').value.trim();
617
+ if (!query) {
618
+ showMessage('Please enter a sequence to search', 'error');
619
+ return;
620
+ }
621
+
622
+ const topK = parseInt(document.getElementById('top-k').value);
623
+ const container = document.getElementById('results-container');
624
+ const searchBtn = document.getElementById('search-btn');
625
+
626
+ container.innerHTML = `
627
+ <div class="loading">
628
+ <div class="spinner"></div>
629
+ <span>Encoding sequence and searching...</span>
630
+ </div>
631
+ `;
632
+ searchBtn.disabled = true;
633
+
634
+ try {
635
+ const res = await fetch(`${API_BASE}/api/search`, {
636
+ method: 'POST',
637
+ headers: { 'Content-Type': 'application/json' },
638
+ body: JSON.stringify({ query, top_k: topK })
639
+ });
640
+
641
+ const data = await res.json();
642
+
643
+ if (!res.ok) {
644
+ container.innerHTML = `<div class="message error">${data.detail}</div>`;
645
+ return;
646
+ }
647
+
648
+ if (data.results.length === 0) {
649
+ container.innerHTML = `
650
+ <div class="empty-state">
651
+ <div class="empty-state-icon">🤷</div>
652
+ <p>No similar sequences found</p>
653
+ </div>
654
+ `;
655
+ return;
656
+ }
657
+
658
+ container.innerHTML = `
659
+ <div class="results-header">
660
+ <span class="results-count">Found <strong>${data.results.length}</strong> similar sequences</span>
661
+ <span class="results-count">from ${data.total_indexed.toLocaleString()} indexed</span>
662
+ </div>
663
+ ${data.results.map((r, i) => `
664
+ <div class="result-card" style="animation-delay: ${i * 0.04}s">
665
+ <div class="result-header">
666
+ <span class="result-rank ${r.rank <= 3 ? 'top-3' : ''}">${r.rank}</span>
667
+ <span class="result-score">${(r.score * 100).toFixed(2)}%</span>
668
+ </div>
669
+ <div class="result-sequence">${escapeHtml(r.sequence)}</div>
670
+ <div class="result-metadata">
671
+ ${Object.entries(r.metadata)
672
+ .filter(([k]) => !k.startsWith('__'))
673
+ .slice(0, 8)
674
+ .map(([k, v]) => `
675
+ <span class="metadata-tag">
676
+ <span class="metadata-key">${escapeHtml(k)}:</span>
677
+ <span class="metadata-value" title="${escapeHtml(String(v))}">${escapeHtml(String(v).slice(0, 50))}</span>
678
+ </span>
679
+ `).join('')}
680
+ </div>
681
+ </div>
682
+ `).join('')}
683
+ `;
684
+ } catch (e) {
685
+ container.innerHTML = `<div class="message error">Search failed: ${e.message}</div>`;
686
+ } finally {
687
+ searchBtn.disabled = false;
688
+ }
689
+ }
690
+
691
+ function showMessage(text, type = 'info') {
692
+ const container = document.getElementById('message-container');
693
+ container.innerHTML = `<div class="message ${type}">${text}</div>`;
694
+ setTimeout(() => container.innerHTML = '', 4000);
695
+ }
696
+
697
+ function escapeHtml(text) {
698
+ const div = document.createElement('div');
699
+ div.textContent = text;
700
+ return div.innerHTML;
701
+ }
702
+ </script>
703
+ </body>
704
+ </html>
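For reference, the request contract the frontend's `search()` function assumes (POST `/api/search` with a JSON body of `query` and `top_k`) can be exercised from outside the browser. A minimal Python sketch using only the standard library; the `http://localhost:8000` base URL is an assumption (uvicorn's default), not something this diff specifies:

```python
import json
from urllib import request

def make_search_request(query: str, top_k: int = 10,
                        base_url: str = "http://localhost:8000"):
    """Build the same POST /api/search request the UI's search() issues."""
    payload = json.dumps({"query": query, "top_k": top_k}).encode()
    return request.Request(
        f"{base_url}/api/search",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = make_search_request("ATCGATCGATCGATCGATCG", top_k=5)
# Sending it requires the server to be running, e.g.:
#   with request.urlopen(req) as resp:
#       data = json.load(resp)
#       # per the frontend, data has "results" (rank, score, sequence,
#       # metadata) and "total_indexed"
```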
main.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
main.py ADDED
@@ -0,0 +1,6 @@
+ def main():
+     print("Hello from index-search!")
+
+
+ if __name__ == "__main__":
+     main()
pyproject.toml ADDED
@@ -0,0 +1,9 @@
+ [project]
+ name = "index-search"
+ version = "0.1.0"
+ description = "Add your description here"
+ readme = "README.md"
+ requires-python = ">=3.12"
+ dependencies = [
+     "fastapi>=0.104.0", "uvicorn[standard]>=0.24.0", "pandas>=2.0.0", "numpy>=1.24.0", "sentence-transformers>=2.2.0", "faiss-cpu>=1.7.4", "python-multipart>=0.0.6",
+ ]
requirements.txt ADDED
@@ -0,0 +1,7 @@
+ fastapi>=0.104.0
+ uvicorn[standard]>=0.24.0
+ pandas>=2.0.0
+ numpy>=1.24.0
+ sentence-transformers>=2.2.0
+ faiss-cpu>=1.7.4
+ python-multipart>=0.0.6
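Of these dependencies, `faiss-cpu` supplies the similarity lookup. For intuition only, the core operation (an inner-product search over L2-normalized embeddings, as a flat FAISS index would perform) behaves like the brute-force NumPy stand-in below — shown with made-up toy vectors, not real sentence-transformer embeddings:

```python
import numpy as np

def normalize(vecs):
    # L2-normalize rows so that inner product == cosine similarity
    return vecs / np.linalg.norm(vecs, axis=-1, keepdims=True)

def search(index_vecs, query_vec, top_k=10):
    # Brute-force equivalent of a flat inner-product index search
    scores = normalize(index_vecs) @ normalize(query_vec)
    order = np.argsort(-scores)[:top_k]
    return [(int(i), float(scores[i])) for i in order]

# Toy "corpus" of 4 fake embeddings with dimension 3
corpus = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.9, 0.1, 0.0],
                   [0.0, 0.0, 1.0]])
hits = search(corpus, np.array([1.0, 0.0, 0.0]), top_k=2)
# hits[0] is the exact match (index 0, score 1.0); hits[1] is index 2
```

A real FAISS index replaces the matrix product with an optimized (and optionally approximate) structure, but the ranking semantics are the same.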
uv.lock ADDED
The diff for this file is too large to render. See raw diff