---
title: Axon Llama GUI
emoji: 🧠
colorFrom: blue
colorTo: purple
sdk: docker
pinned: true
license: mit
short_description: Multimodal AI - Text, Images & Audio powered by Qwen2.5-Omni
---
# 🧠 Axon - Multimodal AI Assistant

A powerful multimodal AI assistant powered by **Qwen2.5-Omni-7B** running on llama.cpp. Handles text, images, and audio all in one model!
## ✨ Features

| Modality | Supported | Examples |
|----------|-----------|----------|
| 💬 Text | ✅ | Chat, Q&A, writing, reasoning, math |
| 🖼️ Images | ✅ | Describe images, read text, analyze charts |
| 🎵 Audio | ✅ | Transcribe speech, analyze audio content |
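Multimodal requests follow the content-parts message format used by OpenAI-compatible vision APIs, where an image is embedded as a base64 data URL alongside the text prompt. The sketch below shows one way to build such a message; `image_message` is an illustrative helper, not part of this project, and the exact content-parts shape accepted by a given llama.cpp build may vary by version.

```python
import base64

def image_message(path: str, question: str) -> dict:
    """Build a user message pairing a text question with a local image,
    encoded as a base64 data URL in the OpenAI-style content-parts format."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }
```

The resulting dict can be dropped into the `messages` array of a `/v1/chat/completions` request in place of a plain text message.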
## 🔧 Technical Details

- **Model:** [Qwen2.5-Omni-7B](https://huggingface.co/Qwen/Qwen2.5-Omni-7B)
- **Quantization:** Q8_0 (near-lossless quality)
- **Backend:** [llama.cpp](https://github.com/ggml-org/llama.cpp)
- **API:** OpenAI-compatible endpoints
## 🚀 API Usage

The server exposes OpenAI-compatible endpoints:

```bash
# Text chat
curl http://localhost:7860/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-omni-7b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
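The same call can be made from Python with only the standard library. This is a minimal sketch assuming the server runs at the default local address; `build_chat_request` and `BASE_URL` are illustrative names, and the payload mirrors the curl example above.

```python
import json
import urllib.request

# Assumed local address of the running Space; adjust as needed.
BASE_URL = "http://localhost:7860"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build a POST request for the OpenAI-compatible chat endpoint."""
    payload = {
        "model": "qwen2.5-omni-7b",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Hello!")
# Sending requires the server to be up:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```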
## 📡 Endpoints

| Endpoint | Description |
|----------|-------------|
| `/` | Web UI (llama.cpp chat interface) |
| `/v1/chat/completions` | OpenAI-compatible chat API |
| `/v1/models` | List available models |
| `/health` | Health check |
## 🙏 Credits

- [Qwen Team](https://github.com/QwenLM) for Qwen2.5-Omni
- [ggml-org](https://github.com/ggml-org) for llama.cpp
- [Hugging Face](https://huggingface.co) for hosting
## 📄 License

MIT License - See [LICENSE](LICENSE) for details.
---

<p align="center">
  Made with ❤️ by <a href="https://huggingface.co/Alencoder">Alencoder</a>
</p>