Instructions for using xlelords/omriX with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use xlelords/omriX with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="xlelords/omriX",
    filename="OmniCode-Pro-v1-f16.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "I like you. I love you"}
    ]
)
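create_chat_completion also supports streaming, which is useful for interactive use. A minimal sketch, reusing the llm object from above (the prompt and max_tokens value are illustrative, not tuned for this model):

```python
# Stream the reply token-by-token instead of waiting for the full response.
stream = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    max_tokens=256,  # illustrative cap, not tuned for this model
    stream=True,
)
for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
```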
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use xlelords/omriX with llama.cpp:
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf xlelords/omriX:F16
# Run inference directly in the terminal:
llama-cli -hf xlelords/omriX:F16
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf xlelords/omriX:F16
# Run inference directly in the terminal:
llama-cli -hf xlelords/omriX:F16
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf xlelords/omriX:F16
# Run inference directly in the terminal:
./llama-cli -hf xlelords/omriX:F16
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf xlelords/omriX:F16
# Run inference directly in the terminal:
./build/bin/llama-cli -hf xlelords/omriX:F16
Use Docker
docker model run hf.co/xlelords/omriX:F16
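However you install it, llama-server exposes an OpenAI-compatible API (the Pi section below points at the same endpoint). A minimal sketch, assuming the server is running on its default port 8080 and the openai package is installed (pip install openai):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local llama-server endpoint.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    # llama-server serves a single model, so the name is mostly informational
    model="xlelords/omriX:F16",
    messages=[
        {"role": "user", "content": "Explain what this function does: def add(a, b): return a + b"}
    ],
)
print(response.choices[0].message.content)
```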
- LM Studio
- Jan
- Ollama
How to use xlelords/omriX with Ollama:
ollama run hf.co/xlelords/omriX:F16
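Once pulled, the model can also be called from Python through Ollama's official client (pip install ollama). A minimal sketch, assuming the Ollama daemon is running locally:

```python
import ollama

# Chat with the model pulled above via the local Ollama daemon.
response = ollama.chat(
    model="hf.co/xlelords/omriX:F16",
    messages=[
        {"role": "user", "content": "Explain what this function does: def add(a, b): return a + b"}
    ],
)
print(response["message"]["content"])
```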
- Unsloth Studio
How to use xlelords/omriX with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for xlelords/omriX to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for xlelords/omriX to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for xlelords/omriX to start chatting
- Pi
How to use xlelords/omriX with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf xlelords/omriX:F16
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "xlelords/omriX:F16" }
      ]
    }
  }
}

Run Pi

# Start Pi in your project directory:
pi
- Hermes Agent
How to use xlelords/omriX with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf xlelords/omriX:F16
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup
# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default xlelords/omriX:F16
Run Hermes
hermes
- Docker Model Runner
How to use xlelords/omriX with Docker Model Runner:
docker model run hf.co/xlelords/omriX:F16
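Docker Model Runner also exposes an OpenAI-compatible API. A hedged sketch, assuming host-side TCP access is enabled and that the default port 12434 and /engines/v1 path apply (these defaults may differ by Docker version, so check the Docker documentation):

```python
from openai import OpenAI

# Assumes TCP host access is enabled, e.g.:
#   docker desktop enable model-runner --tcp 12434
# Port and path are assumptions based on Docker Model Runner defaults.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="none")

response = client.chat.completions.create(
    model="hf.co/xlelords/omriX:F16",
    messages=[{"role": "user", "content": "What does git rebase do?"}],
)
print(response.choices[0].message.content)
```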
- Lemonade
How to use xlelords/omriX with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull xlelords/omriX:F16
Run and chat with the model
lemonade run user.omriX-F16
List all available models
lemonade list
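Lemonade also serves models over an OpenAI-compatible API. A sketch under stated assumptions: the port (8000) and path (/api/v1) below are assumed defaults, so verify them against your Lemonade installation:

```python
from openai import OpenAI

# The base URL assumes Lemonade's default server port (8000) and API
# path (/api/v1); adjust if your installation differs.
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="none")

response = client.chat.completions.create(
    model="user.omriX-F16",  # the local model name used by `lemonade run`
    messages=[{"role": "user", "content": "Summarize what a Python decorator does."}],
)
print(response.choices[0].message.content)
```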
🚀 omriX
A well-trained small coding agent built for speed.
omriX is a lightweight, open-source coding-focused model designed for speed, efficiency, and practical developer workflows. With a compact disk size of roughly 3 GB, it's ideal for local inference, low-cost deployments, and experimentation without heavy hardware requirements.
✨ Key Features
- ⚡ Fast inference: optimized for quick responses
- 🧠 Coding-focused: tuned for programming and code-related tasks
- 📦 Lightweight: ~3 GB disk size
- 🔓 Open source: Apache-2.0 license
- 💸 Cheap to run: suitable for low-resource environments
- 🤖 Agent-friendly: works well as a small coding agent component
🧩 Use Cases
- Code understanding & classification
- Lightweight coding assistants
- Student and final-year projects (FYPs)
- Local developer tools
- Agent pipelines requiring fast, small models
- Prototyping and experimentation
📥 Model Details
- Model ID: xlelords/omriX
- License: Apache-2.0
- Language: English
- Pipeline Tag: Text Classification
- Disk Size: ~3 GB
- Status: Fully open-source
🚀 Quick Start
Example usage with 🤗 Transformers:
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "xlelords/omriX"

# Load the tokenizer and classification model from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Classify a code-related prompt
text = "Explain what this function does: def add(a, b): return a + b"
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
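To turn the logits into a human-readable prediction, apply softmax and look up the label names stored in the model config. A minimal sketch, assuming the checkpoint defines id2label (not verified against this model):

```python
import torch

# Convert logits to a probability distribution over classes.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))

# id2label is read from the model's config; checkpoints without
# human-readable labels fall back to generic LABEL_<n> names.
label = model.config.id2label.get(pred_id, f"LABEL_{pred_id}")
print(f"Predicted class: {label} (p={probs[0, pred_id]:.3f})")
```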
⚠️ Note: The pipeline interface depends on your downstream setup.
omriX is commonly used as a component model inside agents or custom pipelines.
🛠️ Agent Integration
omriX is well-suited for use in lightweight agent frameworks such as:
- Custom Python agents
- Tool-calling pipelines
- smolagents-style workflows
- Local or edge deployments
Its small size and fast responses make it ideal for chaining with tools or running alongside other models.
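As a purely illustrative sketch (the intent labels and tools below are hypothetical, not labels shipped with omriX), a custom Python agent might use the model as a cheap intent router in front of heavier tools:

```python
from transformers import pipeline

# Hypothetical routing agent: omriX classifies the request, and the
# predicted label picks a tool. The label names here are invented for
# illustration; real labels come from the checkpoint's config.
classifier = pipeline("text-classification", model="xlelords/omriX")

def run_explain_tool(request: str) -> str:   # hypothetical tool
    return f"[explain] {request}"

def run_refactor_tool(request: str) -> str:  # hypothetical tool
    return f"[refactor] {request}"

TOOLS = {"explain": run_explain_tool, "refactor": run_refactor_tool}

def handle(request: str) -> str:
    label = classifier(request)[0]["label"]
    tool = TOOLS.get(label, run_explain_tool)  # fall back to a default tool
    return tool(request)

print(handle("def add(a, b): return a + b"))
```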
📄 License
This model is released under the Apache-2.0 License.
You are free to use, modify, distribute, and build upon it, even commercially.
🤝 Contributing
Contributions are welcome!
Feel free to open issues or pull requests for:
- Documentation improvements
- Benchmarks
- Agent examples
- Optimized inference setups
✅ Final Notes
If you're looking for a fast, cheap, and capable small coding agent, omriX is built to get out of your way and let you ship.
Enjoy 🚀