---
title: Sheikh LLM Studio
emoji: π
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
---
# Sheikh LLM Studio
Full-featured FastAPI application deployed on Hugging Face Spaces with multi-model chat, tooling, and model-creation workflows.
## Features
- Web UI with real-time chat experience and adjustable generation settings
- Multi-model support backed by Hugging Face gated models via `InferenceClient`
- Tool integration endpoints for search and code execution prototypes
- Model Studio workflow to queue fine-tuning jobs and monitor status
- WebSocket endpoint for streaming-style interactions
## Configuration
1. Add an `HF_TOKEN` repository secret in your Space with access to the desired gated models.
2. Optional: adjust available models in `app.py` under `Config.AVAILABLE_MODELS`.
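The relevant section of `app.py` is not reproduced here, but a configuration along these lines is assumed (the model IDs and the `resolve_model` helper are illustrative, not taken from the source):

```python
import os

class Config:
    # Read the Space secret at startup; gated models cannot be
    # called without it (assumed to match the secret name in step 1).
    HF_TOKEN = os.environ.get("HF_TOKEN", "")

    # Illustrative model list -- edit to match the gated models
    # your token can access (step 2 above).
    AVAILABLE_MODELS = [
        "meta-llama/Llama-3.1-8B-Instruct",
        "mistralai/Mistral-7B-Instruct-v0.3",
    ]

def resolve_model(name: str) -> str:
    """Fall back to the first configured model for unknown names."""
    if name in Config.AVAILABLE_MODELS:
        return name
    return Config.AVAILABLE_MODELS[0]
```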
## Development
```bash
git clone git@hf.co:spaces/RecentCoders/sheikh-llm
cd sheikh-llm
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
uvicorn app:app --reload --port 7860
```
## Deployment
```bash
./deploy.sh
```
After pushing, monitor the build logs on your Space and test the endpoints:
- `https://recentcoders-sheikh-llm.hf.space/`
- `https://recentcoders-sheikh-llm.hf.space/chat`
- `https://recentcoders-sheikh-llm.hf.space/docs`
- `https://recentcoders-sheikh-llm.hf.space/health`
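A quick smoke test of the deployed Space can be scripted with the standard library (the `/health` response is assumed to be a small JSON status document; adjust the parsing if the endpoint returns plain text):

```python
import json
import urllib.request

BASE = "https://recentcoders-sheikh-llm.hf.space"

def endpoint(path: str) -> str:
    """Join a route onto the Space's base URL."""
    return BASE + path

def check_health() -> dict:
    # Assumed: /health returns JSON; a non-200 raises HTTPError.
    with urllib.request.urlopen(endpoint("/health"), timeout=10) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(check_health())
```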