Instructions for using almax000/cellsentry-model with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use almax000/cellsentry-model with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("almax000/cellsentry-model")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
- llama-cpp-python
How to use almax000/cellsentry-model with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="almax000/cellsentry-model",
    filename="cellsentry-1.5b-v3-q4km.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use almax000/cellsentry-model with llama.cpp:
Install with Homebrew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf almax000/cellsentry-model

# Run inference directly in the terminal:
llama-cli -hf almax000/cellsentry-model
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf almax000/cellsentry-model

# Run inference directly in the terminal:
llama-cli -hf almax000/cellsentry-model
```
Use pre-built binary
```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf almax000/cellsentry-model

# Run inference directly in the terminal:
./llama-cli -hf almax000/cellsentry-model
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf almax000/cellsentry-model

# Run inference directly in the terminal:
./build/bin/llama-cli -hf almax000/cellsentry-model
```
Use Docker
```sh
docker model run hf.co/almax000/cellsentry-model
```
- LM Studio
- Jan
- vLLM
How to use almax000/cellsentry-model with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "almax000/cellsentry-model"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "almax000/cellsentry-model",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
Use Docker
```sh
docker model run hf.co/almax000/cellsentry-model
```
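Because the vLLM server is OpenAI-compatible, it can also be called from Python. A minimal sketch using the official `openai` client against the server started above (install it with `pip install openai`; the base URL and port assume the default `vllm serve` setup):

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; the API key is required by the
# client but not checked by the local server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

response = client.chat.completions.create(
    model="almax000/cellsentry-model",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```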
- Ollama
How to use almax000/cellsentry-model with Ollama:
```sh
ollama run hf.co/almax000/cellsentry-model
```
- Unsloth Studio
How to use almax000/cellsentry-model with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for almax000/cellsentry-model to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for almax000/cellsentry-model to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for almax000/cellsentry-model to start chatting
```
- Pi
How to use almax000/cellsentry-model with Pi:
Start the MLX server
```sh
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "almax000/cellsentry-model"
```
Configure the model in Pi
```
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "almax000/cellsentry-model" }
      ]
    }
  }
}
```
Run Pi
```sh
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use almax000/cellsentry-model with Hermes Agent:
Start the MLX server
```sh
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "almax000/cellsentry-model"
```
Configure Hermes
```sh
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default almax000/cellsentry-model
```
Run Hermes
```sh
hermes
```
- MLX LM
How to use almax000/cellsentry-model with MLX LM:
Generate or start a chat session
```sh
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "almax000/cellsentry-model"
```
Run an OpenAI-compatible server
```sh
# Install MLX LM
uv tool install mlx-lm

# Start the server (listens on port 8080 by default)
mlx_lm.server --model "almax000/cellsentry-model"

# Calling the OpenAI-compatible server with curl
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "almax000/cellsentry-model",
    "messages": [
      { "role": "user", "content": "Hello" }
    ]
  }'
```
- Docker Model Runner
How to use almax000/cellsentry-model with Docker Model Runner:
```sh
docker model run hf.co/almax000/cellsentry-model
```
- Lemonade
How to use almax000/cellsentry-model with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull almax000/cellsentry-model
```
Run and chat with the model
```sh
lemonade run user.cellsentry-model-{{QUANT_TAG}}
```
List all available models
```sh
lemonade list
```
CellSentry Model – Multi-Task Spreadsheet AI
A fine-tuned 1.5B-parameter model for spreadsheet intelligence tasks. Built on Qwen2.5-1.5B with LoRA, this model handles three distinct tasks through prompt routing:
- Formula Audit – Verify or dismiss rule engine findings in Excel formulas
- PII Detection – Identify sensitive data (SSN, phone, email, national IDs) in cell values
- Data Extraction – Extract structured fields (invoice number, date, vendor, totals) from spreadsheets
Model Details
| Property | Value |
|---|---|
| Base model | Qwen/Qwen2.5-1.5B |
| Fine-tuning | LoRA (rank 16, alpha 32) |
| Training | 4000 iterations, batch_size=2, lr=3e-5, AdamW |
| Quantization | 4-bit, group_size=32 (Q4_K_M for GGUF) |
| Context length | 1024 tokens |
| License | MIT |
Available Formats
| Format | File | Size | Platform |
|---|---|---|---|
| GGUF (Q4_K_M) | `cellsentry-1.5b-v3-q4km.gguf` | ~940 MB | Windows (llama.cpp) |
| MLX (4-bit g32) | `cellsentry-1.5b-v3-4bit-g32/` | ~920 MB | macOS (MLX) |
Currently only the GGUF format is uploaded; the MLX format is coming soon.
Usage
This model is designed to be used with CellSentry, an open-source desktop app for spreadsheet auditing. The app downloads the model automatically on first launch.
Manual Download
```sh
# Install Hugging Face CLI
pip install huggingface-hub

# Download GGUF model
huggingface-cli download almax000/cellsentry-model cellsentry-1.5b-v3-q4km.gguf --local-dir ./models
```
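The same file can also be fetched from Python; a minimal sketch using `hf_hub_download` from the `huggingface_hub` library installed above:

```python
from huggingface_hub import hf_hub_download

# Downloads the GGUF file into ./models and returns its local path
path = hf_hub_download(
    repo_id="almax000/cellsentry-model",
    filename="cellsentry-1.5b-v3-q4km.gguf",
    local_dir="./models",
)
print(path)
```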
Prompt Format
The model uses the Qwen2.5 chat template with task-specific system prompts:
Formula Audit:
```
<|im_start|>system
You are a spreadsheet formula auditor...<|im_end|>
<|im_start|>user
{rule engine finding + cell context}<|im_end|>
<|im_start|>assistant
```
PII Detection:
```
<|im_start|>system
You are a PII detection specialist...<|im_end|>
<|im_start|>user
{cell values to scan}<|im_end|>
<|im_start|>assistant
```
Data Extraction:
```
<|im_start|>system
You are a document data extractor...<|im_end|>
<|im_start|>user
{spreadsheet content + template}<|im_end|>
<|im_start|>assistant
```
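To make the routing concrete, here is a minimal sketch that selects a task by system prompt and runs the GGUF build with llama-cpp-python. The system prompt strings mirror the abbreviated templates above; the full wording is elided in this card, so treat them (and the `run_task` helper) as illustrative placeholders:

```python
from llama_cpp import Llama

# Abbreviated, hypothetical system prompts; the exact training prompts
# are not published in this card.
SYSTEM_PROMPTS = {
    "formula_audit": "You are a spreadsheet formula auditor...",
    "pii_detection": "You are a PII detection specialist...",
    "data_extraction": "You are a document data extractor...",
}

llm = Llama.from_pretrained(
    repo_id="almax000/cellsentry-model",
    filename="cellsentry-1.5b-v3-q4km.gguf",
    n_ctx=1024,  # matches the model's trained context length
)

def run_task(task: str, user_content: str) -> str:
    # Route by swapping in the task-specific system prompt.
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": SYSTEM_PROMPTS[task]},
            {"role": "user", "content": user_content},
        ]
    )
    return out["choices"][0]["message"]["content"]

print(run_task("pii_detection", "A1: 123-45-6789\nB1: john@example.com"))
```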
Training
- Method: LoRA fine-tuning with multi-task data
- Data: Synthetic + real-world spreadsheet samples across all three tasks
- Fusion: LoRA weights fused into base model, then quantized (dequantize → fuse → re-quantize with group_size=32)
- Key lesson: group_size=64 loses fine-tuning quality; group_size=32 is the minimum viable floor for 1.5B models
Limitations
- Optimized for structured spreadsheet content, not general text
- 1024-token context – large spreadsheets need chunking (see the sketch after this list)
- PII patterns trained primarily on US and Chinese formats
- Extraction templates cover 5 document types (invoice, receipt, PO, expense, payroll)
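For the context limit, a simple chunking approach works: split the sheet into row blocks that fit the 1024-token window (leaving room for the system prompt and the response) and scan each block as a separate request. A rough sketch, using a character budget as a stand-in for exact token counts:

```python
def chunk_rows(rows: list[str], max_chars: int = 2000) -> list[str]:
    """Group spreadsheet rows into chunks that should fit the context window.

    max_chars is a rough proxy for the token budget; tune it for your tokenizer.
    """
    chunks, current, size = [], [], 0
    for row in rows:
        if current and size + len(row) > max_chars:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(row)
        size += len(row) + 1  # +1 for the newline joiner
    if current:
        chunks.append("\n".join(current))
    return chunks

# Each chunk is then sent to the model as its own request.
```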
Related
- CellSentry App – Desktop app that uses this model
- CellSentry Website – Project homepage