# 🧠 Unified Embedding API
> 🧩 Unified API for all your Embedding & Reranking needs: plug and play with any model from Hugging Face or your own fine-tuned versions. This is the official repository for the Hugging Face Space.
---
## 🚀 Overview
**Unified Embedding API** is a modular, open-source **RAG-ready API** built for developers who want a simple, unified way to access **dense**, **sparse**, and **reranking** models.
It's designed for **vector search**, **semantic retrieval**, and **AI-powered pipelines**, all controlled from a single `config.yaml` file.
⚠️ **Note:** This is a development API.
For production deployment, host it on cloud platforms such as **Hugging Face TGI**, **AWS**, or **GCP**.
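To make the vector-search use case concrete, here is a minimal, dependency-free sketch of cosine-similarity retrieval over toy vectors. The document names and vectors are made up for illustration; in a real pipeline the embeddings would come from this API.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- a real deployment would obtain these from the API.
docs = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.1, 0.9, 0.2],
}
query = [0.85, 0.15, 0.05]

# Rank documents by similarity to the query vector.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
```

Semantic retrieval in a RAG system is this same ranking step, just over thousands of vectors stored in a vector database instead of a Python dict.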
---
## 🧩 Features
- 🧠 **Unified Interface** – One API to handle dense, sparse, and reranking models.
- ⚙️ **Configurable** – Switch models instantly via `config.yaml`.
- 🔗 **Vector DB Ready** – Easily integrates with FAISS, Chroma, Qdrant, Milvus, etc.
- 🔁 **RAG Support** – Perfect base for Retrieval-Augmented Generation systems.
- ⚡ **Fast & Lightweight** – Powered by FastAPI and optimized with async processing.
- 🧰 **Extendable** – Add your own models or pipelines effortlessly.
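The "unified interface" idea can be sketched as a small registry that routes every request through one method, whatever kind of encoder sits behind it. The names here (`ModelManager`, `register`, `embed`, the demo encoders) are illustrative, not the repository's actual API; see `core/model_manager.py` for the real implementation.

```python
class ModelManager:
    """Hypothetical sketch: one entry point for dense and sparse encoders."""

    def __init__(self):
        self._encoders = {}

    def register(self, name, encode_fn):
        """Map a model name to a callable that embeds a single text."""
        self._encoders[name] = encode_fn

    def embed(self, name, texts):
        """Embed a batch of texts with the named model."""
        if name not in self._encoders:
            raise KeyError(f"Unknown model: {name}")
        return [self._encoders[name](t) for t in texts]

# Stand-in encoders; a real setup would load models named in config.yaml.
manager = ModelManager()
manager.register("dense-demo", lambda t: [float(len(t)), 0.0])
manager.register("sparse-demo", lambda t: {w: 1.0 for w in t.split()})

dense = manager.embed("dense-demo", ["hello world"])    # list of vectors
sparse = manager.embed("sparse-demo", ["hello world"])  # list of term->weight maps
```

Swapping models then means changing a name in `config.yaml`, not touching client code.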
---
## 📂 Project Structure
```
unified-embedding-api/
│
├── core/
│   ├── embedding.py
│   └── model_manager.py
├── models/
│   └── model.py
├── app.py              # Entry point (FastAPI server)
├── config.yaml         # Model + system configuration
├── Dockerfile
├── requirements.txt
└── README.md
```
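As an illustration of the single-file configuration idea, a `config.yaml` might look like the fragment below. The field names and model choices are hypothetical; check the template's actual file before editing.

```yaml
models:
  dense:
    name: sentence-transformers/all-MiniLM-L6-v2
  sparse:
    name: naver/splade-v3

server:
  host: 0.0.0.0
  port: 7860   # default port for Hugging Face Spaces
```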
---
## ☁️ How to Deploy (Free 🆓)
Deploy your **custom Embedding API** on **Hugging Face Spaces** – free, fast, and serverless.
### 🧭 Steps:
1. **Clone this Space Template:**
👉 [Hugging Face Space – fahmiaziz/api-embedding](https://huggingface.co/spaces/fahmiaziz/api-embedding)
2. **Edit `config.yaml`** to set your own model names and backend preferences.
3. **Push your code** – Spaces will automatically rebuild and host your API.
That's it! You now have a live embedding API endpoint powered by your models.
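A client might then call the deployed Space along these lines. The endpoint path and payload fields below are assumptions for illustration; the real ones depend on `app.py` and `config.yaml` in your Space.

```python
import json

def build_embed_request(texts, model):
    """Build a hypothetical embedding request payload (field names assumed)."""
    return {"model": model, "input": list(texts)}

payload = build_embed_request(["what is RAG?"], model="my-dense-model")
body = json.dumps(payload)

# To call a live Space you might then do (not executed here; URL is a placeholder):
#   import httpx
#   r = httpx.post("https://<user>-<space>.hf.space/embed", content=body,
#                  headers={"Content-Type": "application/json"})
```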
📘 **Tutorial Reference:**
[Deploy Applications on Hugging Face Spaces (Official Guide)](https://huggingface.co/blog/HemanthSai7/deploy-applications-on-huggingface-spaces)
---
## 🧑‍💻 Contributing
Contributions are welcome!
Please open an issue or submit a pull request to discuss changes.
---
## ⚠️ License
MIT License © 2025
Developed with ❤️ by the Open-Source Community.
---
> ✨ "Unify your embeddings. Simplify your AI stack."