---
title: Biblos Semantic Search API
emoji: 📖
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
---

# Biblos Semantic Search API

Semantic search over the entire Bible using BGE-large embeddings. This API keeps the model and embeddings in memory for fast responses (~50-100ms after the initial load).

## Features

- ✅ Fast semantic search with BGE-large-en-v1.5 embeddings
- ✅ Model stays loaded in memory (no cold starts)
- ✅ Searches the entire Bible (Old and New Testaments)
- ✅ CORS enabled for easy integration
- ✅ RESTful JSON API built with FastAPI
- ✅ Automatic API documentation at `/docs`

## API Endpoints

### `GET /`

Health check and API information.

### `GET /health`

Detailed health status and available books.

### `POST /search`

Perform a semantic search.

**Request Body:**

```json
{
  "query": "What did Jesus say about love?",
  "limit": 10
}
```

`limit` is optional: the number of results to return (1-100, default 10).

**Response:**

```json
{
  "query": "What did Jesus say about love?",
  "results": [
    {
      "book": "jhn",
      "chapter": 13,
      "testament": "new",
      "content": "A new commandment I give to you, that you love one another...",
      "similarity": 0.892
    }
  ],
  "total_searched": 7957,
  "execution_time_ms": 87.3
}
```

## Quick Start

### Using cURL

```bash
curl -X POST https://dssjon-biblos-api.hf.space/search \
  -H "Content-Type: application/json" \
  -d '{"query": "faith and works", "limit": 5}'
```

### Using JavaScript

```javascript
const response = await fetch('https://dssjon-biblos-api.hf.space/search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: 'faith and works', limit: 5 })
})
const data = await response.json()
console.log(data.results)
```

### Using Python

```python
import requests

response = requests.post(
    'https://dssjon-biblos-api.hf.space/search',
    json={'query': 'faith and works', 'limit': 5}
)
data = response.json()
print(data['results'])
```

## Interactive Documentation

Visit `/docs` on your deployed Space for interactive Swagger UI
documentation where you can test the API directly.

## Performance

- First request: ~2-3 seconds (model loading)
- Subsequent requests: **50-100ms** (model already in memory)
- No cold starts after the initial load
- Supports concurrent requests

## Model Information

- **Model:** BAAI/bge-large-en-v1.5
- **Embedding dimensions:** 1024
- **Total Bible passages:** ~31,000
- **Total books:** 66

## Deployment

This Space uses the Docker SDK with FastAPI. The model and embeddings are loaded once at startup and kept in memory for fast responses.

## Data

The Bible embeddings are pre-computed and stored in the `data/` directory. See `prepare_data.py` for how to generate embeddings from your own Bible XML source.

## License

MIT License - free to use for any purpose.
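## Appendix: How the In-Memory Search Works

The search described under Deployment reduces to a cosine-similarity lookup over the pre-computed embedding matrix held in memory. Below is a minimal sketch of that core step; the function name and array shapes are illustrative only, not the Space's actual code. With BGE-large-en-v1.5, `embeddings` would be an `(n_passages, 1024)` matrix.

```python
import numpy as np

def top_k_search(query_vec: np.ndarray, embeddings: np.ndarray, k: int = 10):
    """Return (indices, scores) of the k most similar passages.

    Assumes `embeddings` is an (n_passages, dim) matrix of pre-computed,
    L2-normalized passage vectors, and `query_vec` is the (dim,) embedding
    of the search query.
    """
    # Normalize the query so the dot product equals cosine similarity
    query_vec = query_vec / np.linalg.norm(query_vec)
    scores = embeddings @ query_vec            # (n_passages,) similarities
    top = np.argsort(scores)[::-1][:k]         # indices of the best matches
    return top, scores[top]
```

Because the embeddings never leave memory, each request costs one matrix-vector product plus a partial sort, which is consistent with the 50-100ms latency quoted above once the model is loaded.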