Streamlined Inter-agent Protocol (Slipstream)
Semantic Quantization for Efficient Multi-Agent Coordination
A fine-tuned version of GLM-Z1-9B-0414 trained on the Slipstream protocol, a semantic quantization system that achieves 82% token reduction in multi-agent AI communication.
This model has learned the Think-Quantize-Transmit (TQT) cognitive pattern:
Input:

```
Tell bob to review my authentication code
```

Output:

```
THOUGHT: I need bob to do a code review on the auth module
QUANTIZE: [ACTION=request | DOMAIN=task | URGENCY=normal | POLARITY=neutral] -> RequestReview
SLIP: SLIP v1 alice bob RequestReview auth_module
```
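The three TQT lines are plain `TAG: value` text, so a downstream agent can split a response apart with a few lines of string handling. A minimal sketch (the `parse_tqt` helper is illustrative, not part of any shipped library):

```python
def parse_tqt(text: str) -> dict:
    """Split a Think-Quantize-Transmit response into its tagged lines."""
    out = {}
    for line in text.splitlines():
        tag, sep, rest = line.partition(":")
        if sep:  # keep only lines of the form "TAG: value"
            out[tag.strip()] = rest.strip()
    return out

response = (
    "THOUGHT: I need bob to do a code review on the auth module\n"
    "QUANTIZE: [ACTION=request | DOMAIN=task | URGENCY=normal | POLARITY=neutral] -> RequestReview\n"
    "SLIP: SLIP v1 alice bob RequestReview auth_module"
)
parsed = parse_tqt(response)
print(parsed["SLIP"])  # the compact message actually sent over the wire
```

Only the `SLIP` line crosses the wire; `THOUGHT` and `QUANTIZE` stay local to the sending agent.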
| Parameter | Value |
|---|---|
| Base Model | zai-org/GLM-Z1-9B-0414 |
| Method | LoRA (rank=16, alpha=16) |
| Epochs | 2 |
| Learning Rate | 2e-4 |
| Batch Size | 16 (4 × 4 grad accum) |
| Sequence Length | 2048 |
| Training Examples | 2,283 |
| Hardware | Google Colab (A100/V100) |
| Framework | Unsloth + TRL |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |

| Format | Repository | Use Case |
|---|---|---|
| LoRA Adapter | slipstream-glm-z1-9b | Merge with base model |
| Merged 16-bit | slipstream-glm-z1-9b-merged | Direct loading |
| GGUF Q4_K_M | slipstream-glm-z1-9b-gguf | Ollama / llama.cpp |
| GGUF Q8_0 | slipstream-glm-z1-9b-gguf | Higher quality local |
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("zai-org/GLM-Z1-9B-0414")
model = PeftModel.from_pretrained(base_model, "anthonym21/slipstream-glm-z1-9b")
tokenizer = AutoTokenizer.from_pretrained("anthonym21/slipstream-glm-z1-9b")
```
```bash
# Download GGUF
wget https://huggingface.co/anthonym21/slipstream-glm-z1-9b-gguf/resolve/main/slipstream-q4_k_m.gguf

# Create Modelfile
cat > Modelfile <<EOF
FROM ./slipstream-q4_k_m.gguf
SYSTEM "You are an AI agent using the Slipstream protocol for efficient multi-agent communication."
EOF

# Run
ollama create slipstream -f Modelfile
ollama run slipstream "Tell bob to review my code"
```
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "anthonym21/slipstream-glm-z1-9b",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)
```
The model understands 21 core anchors:
| Category | Anchors |
|---|---|
| Requests | RequestTask, RequestReview, RequestHelp, RequestPlan |
| Inform | InformComplete, InformProgress, InformBlocked, InformStatus |
| Propose | ProposePlan, ProposeChange, ProposeAlternative |
| Evaluate | EvalApprove, EvalReject, EvalNeedsWork |
| Meta | Accept, Reject, MetaAck, MetaHandoff, Fallback |
```
SLIP v1 <src> <dst> <anchor> [payload...]
```

Example: `SLIP v1 alice bob RequestReview auth_module`
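Because the wire format is a single whitespace-delimited line, round-tripping it needs no dependencies. A minimal sketch, assuming whitespace-free fields (`SlipMessage`, `parse_slip`, and `format_slip` are illustrative names, not the `slipcore` API):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SlipMessage:
    src: str
    dst: str
    anchor: str
    payload: List[str] = field(default_factory=list)

def parse_slip(line: str) -> SlipMessage:
    """Parse a 'SLIP v1 <src> <dst> <anchor> [payload...]' line."""
    parts = line.split()
    if parts[:2] != ["SLIP", "v1"] or len(parts) < 5:
        raise ValueError(f"not a SLIP v1 message: {line!r}")
    src, dst, anchor, *payload = parts[2:]
    return SlipMessage(src, dst, anchor, payload)

def format_slip(msg: SlipMessage) -> str:
    """Serialize a SlipMessage back to the wire format."""
    return " ".join(["SLIP", "v1", msg.src, msg.dst, msg.anchor, *msg.payload])
```

Usage: `parse_slip("SLIP v1 alice bob RequestReview auth_module")` yields a `SlipMessage` whose `anchor` is `RequestReview`, and `format_slip` reproduces the original line exactly.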
```bash
pip install slipcore
```

```bibtex
@misc{maio2025slipstream,
  title={Slipstream: Semantic Quantization for Efficient Multi-Agent Coordination},
  author={Maio, Anthony},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/anthonym21/slipstream-glm-z1-9b}
}
```
Apache 2.0