Instructions for using aelgendy/QModel with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use aelgendy/QModel with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="aelgendy/QModel",
    filename="models/Qwen3-32B-Q4_K_M.gguf",
)

llm.create_chat_completion(
    messages=[
        # Illustrative prompt; no input example is defined for this model task
        {"role": "user", "content": "Hello, what can you do?"}
    ]
)
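The non-streaming call returns an OpenAI-style completion dict; a minimal sketch of reading the generated text back out (the prompt below is illustrative, not part of the model card):
# Assumes `llm` was created as above
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a short summary of what you can do."}]
)
# llama-cpp-python returns an OpenAI-style dict; the text is under choices[0]
print(response["choices"][0]["message"]["content"])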
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use aelgendy/QModel with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf aelgendy/QModel:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf aelgendy/QModel:Q4_K_M
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf aelgendy/QModel:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf aelgendy/QModel:Q4_K_M
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf aelgendy/QModel:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf aelgendy/QModel:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf aelgendy/QModel:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf aelgendy/QModel:Q4_K_M
Use Docker
docker model run hf.co/aelgendy/QModel:Q4_K_M
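Once llama-server is running, it serves an OpenAI-compatible API (on port 8080 by default). A minimal sketch of calling it from Python with the openai client; the port and model id are assumptions based on the commands above, and the server ignores the API key:
# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default; any API key works
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="aelgendy/QModel:Q4_K_M",  # model loaded via llama-server -hf
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)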
- LM Studio
- Jan
- Ollama
How to use aelgendy/QModel with Ollama:
ollama run hf.co/aelgendy/QModel:Q4_K_M
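Ollama also exposes a local REST API (http://localhost:11434 by default). A minimal sketch of calling the pulled model from Python with requests; the prompt is illustrative:
# pip install requests
import requests

# Ollama's local API listens on port 11434 by default
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/aelgendy/QModel:Q4_K_M",
        "prompt": "Hello!",
        "stream": False,  # return one JSON object instead of a stream
    },
)
print(resp.json()["response"])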
- Unsloth Studio
How to use aelgendy/QModel with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for aelgendy/QModel to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for aelgendy/QModel to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for aelgendy/QModel to start chatting
- Pi
How to use aelgendy/QModel with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf aelgendy/QModel:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "aelgendy/QModel:Q4_K_M" }
      ]
    }
  }
}

Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use aelgendy/QModel with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf aelgendy/QModel:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default aelgendy/QModel:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use aelgendy/QModel with Docker Model Runner:
docker model run hf.co/aelgendy/QModel:Q4_K_M
- Lemonade
How to use aelgendy/QModel with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull aelgendy/QModel:Q4_K_M
Run and chat with the model
lemonade run user.QModel-Q4_K_M
List all available models
lemonade list
# QModel 6 - Islamic RAG API
# =============================
# Dockerfile for QModel API
# Supports both Ollama and HuggingFace backends via .env configuration
#
# Build: docker build -t qmodel .
# Run:   docker run -p 8000:8000 --env-file .env qmodel

FROM python:3.11-slim

# Metadata
LABEL maintainer="QModel Team"
LABEL description="QModel v6 - Quran & Hadith RAG API"
LABEL version="4.1"

# Environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=1

# Set working directory
WORKDIR /app

# Install system dependencies
# - build-essential: For compiling Python packages
# - libopenblas-dev: For numerical operations (FAISS, numpy)
# - libomp-dev: For OpenMP (FAISS parallelization)
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    libopenblas-dev \
    libomp-dev \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Expose port for API
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Start application
# Configure via .env: LLM_BACKEND=ollama or LLM_BACKEND=hf
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]