Instructions to use cernis-intelligence/precis-gguf with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use cernis-intelligence/precis-gguf with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="cernis-intelligence/precis-gguf")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("cernis-intelligence/precis-gguf")
model = AutoModelForCausalLM.from_pretrained("cernis-intelligence/precis-gguf")
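Note that this repository ships GGUF weights, so a plain from_pretrained call may not find standard safetensors files. A minimal sketch of Transformers' GGUF loading path, assuming a recent Transformers release with the gguf package installed (the filename is taken from the llama-cpp-python example below; the weights are dequantized on load, so memory use is higher than with llama.cpp):
# pip install gguf  (assumption: Transformers' GGUF loading needs the gguf package)
from transformers import AutoTokenizer, AutoModelForCausalLM

gguf_file = "granite-4.0-h-micro.Q4_K_M.gguf"
tokenizer = AutoTokenizer.from_pretrained("cernis-intelligence/precis-gguf", gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained("cernis-intelligence/precis-gguf", gguf_file=gguf_file)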
- llama-cpp-python
How to use cernis-intelligence/precis-gguf with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="cernis-intelligence/precis-gguf",
    filename="granite-4.0-h-micro.Q4_K_M.gguf",
)
llm.create_chat_completion(
    messages = [
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
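create_chat_completion returns an OpenAI-style response dictionary; a short sketch of extracting the reply text:
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
# The generated reply sits in the first choice's message content
print(response["choices"][0]["message"]["content"])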
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use cernis-intelligence/precis-gguf with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cernis-intelligence/precis-gguf:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf cernis-intelligence/precis-gguf:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cernis-intelligence/precis-gguf:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf cernis-intelligence/precis-gguf:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cernis-intelligence/precis-gguf:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf cernis-intelligence/precis-gguf:Q4_K_M
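If your pre-built binary predates the -hf download flag, you can fetch the GGUF file yourself and pass it with -m instead. A minimal sketch using huggingface_hub (the filename is the one referenced in the llama-cpp-python example above):
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Downloads into the local Hugging Face cache and returns the file path
path = hf_hub_download(
    repo_id="cernis-intelligence/precis-gguf",
    filename="granite-4.0-h-micro.Q4_K_M.gguf",
)
print(path)  # use as: ./llama-server -m <path>  or  ./llama-cli -m <path>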
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cernis-intelligence/precis-gguf:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf cernis-intelligence/precis-gguf:Q4_K_M
Use Docker
docker model run hf.co/cernis-intelligence/precis-gguf:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use cernis-intelligence/precis-gguf with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "cernis-intelligence/precis-gguf"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "cernis-intelligence/precis-gguf",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
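The same endpoint can be called from Python with the openai client package, since vLLM speaks the OpenAI API. A minimal sketch, assuming the server above is running on localhost:8000 (the api_key is a placeholder, as vLLM does not check it by default; note also that vLLM's GGUF support may require pointing serve at a locally downloaded .gguf file rather than the repo id):
# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="cernis-intelligence/precis-gguf",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)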
Use Docker
docker model run hf.co/cernis-intelligence/precis-gguf:Q4_K_M
- SGLang
How to use cernis-intelligence/precis-gguf with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "cernis-intelligence/precis-gguf" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "cernis-intelligence/precis-gguf",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "cernis-intelligence/precis-gguf" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "cernis-intelligence/precis-gguf",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
- Ollama
How to use cernis-intelligence/precis-gguf with Ollama:
ollama run hf.co/cernis-intelligence/precis-gguf:Q4_K_M
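The pulled model can also be driven from Python; a minimal sketch using the ollama client package, assuming the Ollama daemon is running locally:
# pip install ollama
import ollama

response = ollama.chat(
    model="hf.co/cernis-intelligence/precis-gguf:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])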
- Unsloth Studio
How to use cernis-intelligence/precis-gguf with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for cernis-intelligence/precis-gguf to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for cernis-intelligence/precis-gguf to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for cernis-intelligence/precis-gguf to start chatting
- Pi
How to use cernis-intelligence/precis-gguf with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf cernis-intelligence/precis-gguf:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
    "providers": {
        "llama-cpp": {
            "baseUrl": "http://localhost:8080/v1",
            "api": "openai-completions",
            "apiKey": "none",
            "models": [
                { "id": "cernis-intelligence/precis-gguf:Q4_K_M" }
            ]
        }
    }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use cernis-intelligence/precis-gguf with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf cernis-intelligence/precis-gguf:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default cernis-intelligence/precis-gguf:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use cernis-intelligence/precis-gguf with Docker Model Runner:
docker model run hf.co/cernis-intelligence/precis-gguf:Q4_K_M
- Lemonade
How to use cernis-intelligence/precis-gguf with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull cernis-intelligence/precis-gguf:Q4_K_M
Run and chat with the model
lemonade run user.precis-gguf-Q4_K_M
List all available models
lemonade list
Precis: Document Summarization
Model Overview
Precis is a specialized document summarization model fine-tuned from IBM's Granite 4.0-H-Micro (3.2B parameters) using efficient LoRA adapters. It generates comprehensive ~300-word summaries optimized for downstream question answering, and it runs entirely on local, on-premise infrastructure for complete privacy.
Key Features:
- 🔒 Privacy-First: Process sensitive documents entirely on your infrastructure
- ⚡ Fast: 0.5s inference time (5-10x faster than cloud APIs)
- 💰 Cost-Effective: Zero per-document API fees
- 📚 Long Context: 128K tokens ≈ 320-380 book pages (assuming roughly 0.75 words per token and 250-300 words per page)
- 🎯 Specialized: Trained on 5,500+ document-summary pairs (millions of tokens processed during training)
🚀 Quick Start
Using with Transformers + PEFT
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
"unsloth/granite-4.0-h-micro",
torch_dtype=torch.float16,
device_map="auto"
)
# Load LoRA adapters
model = PeftModel.from_pretrained(base_model, "cernis-intelligence/precis")
tokenizer = AutoTokenizer.from_pretrained("cernis-intelligence/precis")
# Generate summary
document = """Your long document here..."""
messages = [
{"role": "user", "content": f"Summarize the following document in around 300 words:\n\n{document}"}
]
inputs = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
outputs = model.generate(
inputs,
max_new_tokens=512,
temperature=0.3,
top_p=0.9,
do_sample=True
)
# Decode only the newly generated tokens, skipping the prompt
summary = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(summary)
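If you want a standalone checkpoint for deployment, the LoRA weights can be merged into the base model so inference no longer depends on peft. A minimal sketch continuing from the code above (the output directory name is illustrative):
# Fold the adapter weights into the base model and save a standalone copy
merged = model.merge_and_unload()
merged.save_pretrained("precis-merged")      # hypothetical output directory
tokenizer.save_pretrained("precis-merged")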
Using with Unsloth (Recommended)
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="cernis-intelligence/precis",
max_seq_length=2048,
load_in_4bit=True, # For lower memory usage
)
FastLanguageModel.for_inference(model)
messages = [
{"role": "user", "content": f"Summarize the following document in around 300 words:\n\n{document}"}
]
inputs = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
).to("cuda")
outputs = model.generate(inputs, max_new_tokens=512, temperature=0.3, do_sample=True)
# Decode only the newly generated tokens, skipping the prompt
summary = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
Using with vLLM (Production)
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest
# Initialize vLLM with base model
llm = LLM(
model="unsloth/granite-4.0-h-micro",
enable_lora=True,
max_lora_rank=32,
gpu_memory_utilization=0.9
)
# Create LoRA request
lora_request = LoRARequest(
"precis-granite",
1,
"cernis-intelligence/precis"
)
# Sampling parameters
sampling_params = SamplingParams(
temperature=0.3,
top_p=0.9,
max_tokens=512
)
# Generate
prompts = ["Summarize the following document in around 300 words:\n\n" + document]
outputs = llm.generate(prompts, sampling_params, lora_request=lora_request)
print(outputs[0].outputs[0].text)
📊 Training Details
Base Model
- Architecture: IBM Granite 4.0-H-Micro
- Parameters: 3.2B (38.4M trainable via LoRA)
- Context Length: 128K tokens
- License: Apache 2.0
🎯 Use Cases
✅ Perfect For:
- 📄 Legal Document Review: Summarize contracts while maintaining confidentiality
- 🏥 Medical Records: HIPAA-compliant summarization of patient notes
- 💼 Financial Reports: Analyze earnings reports without exposing sensitive data
- 📚 Research Papers: Quick digests of academic literature
- 📧 Email Threads: Comprehensive summaries of long conversations
⚠️ Considerations:
- Works best with documents under 380 pages (128K token limit)
- Optimized for English text (multilingual support coming)
- May miss some deeply nested structured data (tables, forms)
- For specialized needs, consider fine-tuning on domain-specific data
📄 License
This model is released under the Apache 2.0 License, same as the base IBM Granite 4.0 model.
Copyright 2025
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0