# Semantic Search API
A production-ready semantic search service built with FastAPI. Upload your data (sequences + metadata), create embeddings automatically, and search using natural language queries.
## Features
- Semantic/Latent Search: Find similar sequences based on meaning, not just keywords
- FastAPI Backend: Modern, fast, async Python web framework
- FAISS Index: Efficient similarity search at scale
- Sentence Transformers: State-of-the-art embedding models
- Beautiful UI: Dark-themed, responsive search interface
- CSV Upload: Easy data import via web interface or API
- Persistent Storage: Index persists across restarts
## Quick Start

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```

### 2. Run the Server

```bash
python app.py
# or
uvicorn app:app --reload --host 0.0.0.0 --port 8000
```
### 3. Open the UI
Navigate to http://localhost:8000 in your browser.
### 4. Upload Your Data
- Drag & drop a CSV file or click to browse
- Select the column containing your sequences
- Click "Create Index"
- Start searching!
## Data Format
Your CSV should have at least one column containing the text sequences you want to search. All other columns become searchable metadata.
Example:

```csv
sequence,category,source,date
"Machine learning is transforming industries",tech,blog,2024-01-15
"The quick brown fox jumps over the lazy dog",example,pangram,2024-01-10
"Embeddings capture semantic meaning",ml,paper,2024-01-20
```
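As a quick sanity check, the example above parses cleanly with Python's standard `csv` module; everything outside the sequence column becomes metadata (the column names here match the example, yours will differ):

```python
import csv
import io

# The example CSV from above, inlined for illustration
raw = """sequence,category,source,date
"Machine learning is transforming industries",tech,blog,2024-01-15
"The quick brown fox jumps over the lazy dog",example,pangram,2024-01-10
"Embeddings capture semantic meaning",ml,paper,2024-01-20
"""

rows = list(csv.DictReader(io.StringIO(raw)))
print(rows[0]["sequence"])  # the searchable text
print({k: v for k, v in rows[0].items() if k != "sequence"})  # the metadata
```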
## API Endpoints

### Search

```http
POST /api/search
Content-Type: application/json

{
  "query": "artificial intelligence",
  "top_k": 10
}
```
### Upload CSV

```http
POST /api/upload-csv?sequence_column=text
Content-Type: multipart/form-data

file: your_data.csv
```
### Create Index (JSON)

```http
POST /api/index
Content-Type: application/json

{
  "sequence_column": "text",
  "data": [
    {"text": "Hello world", "category": "greeting"},
    {"text": "Machine learning", "category": "tech"}
  ]
}
```
### Get Stats

```http
GET /api/stats
```
### Get Sample

```http
GET /api/sample?n=5
```
### Delete Index

```http
DELETE /api/index
```
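The endpoints above can be driven from any HTTP client. Here is a minimal Python sketch, assuming the server runs at `http://localhost:8000`; the helper names are illustrative, not part of the project, and the actual network calls (which need `requests` and a running server) are shown commented out:

```python
import json

BASE_URL = "http://localhost:8000"  # adjust for your deployment

def search_payload(query: str, top_k: int = 10) -> str:
    """Build the JSON body for POST /api/search."""
    return json.dumps({"query": query, "top_k": top_k})

def index_payload(rows: list, sequence_column: str = "text") -> str:
    """Build the JSON body for POST /api/index."""
    return json.dumps({"sequence_column": sequence_column, "data": rows})

# Sending the requests (requires `pip install requests` and a running server):
# import requests
# requests.post(f"{BASE_URL}/api/index",
#               data=index_payload([{"text": "Hello world"}]),
#               headers={"Content-Type": "application/json"})
# requests.post(f"{BASE_URL}/api/search",
#               data=search_payload("hello", top_k=3),
#               headers={"Content-Type": "application/json"}).json()
```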
## Programmatic Usage

You can also create indexes directly from Python:
```python
from create_index import create_index_from_dataframe, search_index
import pandas as pd

# Create your dataframe
df = pd.DataFrame({
    'sequence': [
        'The mitochondria is the powerhouse of the cell',
        'DNA stores genetic information',
        'Proteins are made of amino acids'
    ],
    'category': ['biology', 'genetics', 'biochemistry'],
    'difficulty': ['easy', 'medium', 'medium']
})

# Create the index
create_index_from_dataframe(df, sequence_column='sequence')

# Search
results = search_index("cellular energy production", top_k=3)
for r in results:
    print(f"Score: {r['score']:.3f} | {r['sequence'][:50]}...")
```
## Configuration

Edit these values in app.py to customize:

```python
# Embedding model (from sentence-transformers)
EMBEDDING_MODEL = "all-MiniLM-L6-v2"  # Fast, 384 dimensions

# Alternatives:
# "all-mpnet-base-v2"                      # Higher quality, 768 dimensions
# "paraphrase-multilingual-MiniLM-L12-v2"  # Multilingual support
# "all-MiniLM-L12-v2"                      # Balanced quality/speed
```
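One caveat worth calling out: the models above produce embeddings of different sizes, and a FAISS index is fixed to a single dimensionality, so after switching `EMBEDDING_MODEL` you must delete and rebuild the index. A small sketch of the check (dimensions per the sentence-transformers model cards; the helper is illustrative, not part of app.py):

```python
# Embedding size per model; a FAISS index built with one
# dimensionality cannot store vectors of another.
MODEL_DIMS = {
    "all-MiniLM-L6-v2": 384,
    "all-mpnet-base-v2": 768,
    "paraphrase-multilingual-MiniLM-L12-v2": 384,
    "all-MiniLM-L12-v2": 384,
}

def same_dims(old_model: str, new_model: str) -> bool:
    """True if swapping models keeps the same vector size. Even then,
    mixing embeddings from different models in one index produces
    meaningless similarity scores -- always rebuild after a swap."""
    return MODEL_DIMS[old_model] == MODEL_DIMS[new_model]
```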
## Project Structure

```text
semantic_search/
├── app.py               # FastAPI application
├── create_index.py      # Programmatic index creation
├── requirements.txt     # Python dependencies
├── static/
│   └── index.html       # Search UI
├── data/                # Created at runtime
│   ├── faiss.index      # FAISS index file
│   ├── metadata.pkl     # DataFrame with metadata
│   └── embeddings.npy   # Raw embeddings (optional)
└── README.md
```
## How It Works
1. Embedding Creation: When you upload data, each sequence is converted to a dense vector (embedding) using a sentence transformer model.
2. FAISS Indexing: Embeddings are stored in a FAISS index optimized for similarity search.
3. Search: Your query is embedded using the same model, then FAISS finds the most similar vectors using cosine similarity.
4. Results: The original sequences and metadata are returned, ranked by similarity.
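The search step can be sketched in a few lines of NumPy: once vectors are unit-normalized, the inner product FAISS computes is exactly cosine similarity. Toy 2-D vectors stand in here for real sentence embeddings:

```python
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    """Scale each row to unit length so inner product == cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy 2-D "embeddings" standing in for sentence-transformer output
corpus = normalize(np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]))
query = normalize(np.array([0.9, 0.1]))

scores = corpus @ query        # one cosine score per indexed sequence
ranking = np.argsort(-scores)  # best match first, like a FAISS top-k query
print(ranking)                 # the closest corpus row comes first
```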
## Performance Tips

- Model Choice: `all-MiniLM-L6-v2` is fast and good for most use cases. Use `all-mpnet-base-v2` for higher quality at the cost of speed.
- Batch Size: For large datasets, the model processes embeddings in batches automatically.
- GPU: If you have a CUDA-capable GPU, install `faiss-gpu` instead of `faiss-cpu` for faster indexing.
## License
MIT