anthonym21 committed
Commit 5e39571 · verified · 1 Parent(s): c762ef9

Update README.md

Files changed (1): README.md (+147, −7)
 
---
license: apache-2.0
base_model: zai-org/GLM-Z1-9B-0414
tags:
- slipstream
- multi-agent
- semantic-quantization
- agent-communication
- think-quantize-transmit
- lora
- unsloth
datasets:
- anthonym21/slipstream-tqt
language:
- en
pipeline_tag: text-generation
library_name: peft
---

# Slipstream GLM-Z1-9B

A LoRA adapter for [GLM-Z1-9B-0414](https://huggingface.co/zai-org/GLM-Z1-9B-0414) trained on the **Slipstream protocol**, a semantic quantization system that achieves **82% token reduction** in multi-agent AI communication.

## Model Description

This model has learned the **Think-Quantize-Transmit (TQT)** cognitive pattern:

1. **THINK**: Reason about the communication intent
2. **QUANTIZE**: Map the intent to a semantic anchor in the UCR manifold
3. **TRANSMIT**: Output a compact SLIP wire-format message

### Example

**Input:**
```
Tell bob to review my authentication code
```

**Output:**
```
THOUGHT: I need bob to do a code review on the auth module
QUANTIZE: [ACTION=request | DOMAIN=task | URGENCY=normal | POLARITY=neutral] -> RequestReview
SLIP: SLIP v1 alice bob RequestReview auth_module
```
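
The three-stage output is line-oriented, so a receiving agent can split it mechanically and transmit only the final SLIP line. A minimal sketch in Python (the `parse_tqt` helper is illustrative, not part of slipcore):

```python
def parse_tqt(output: str) -> dict:
    """Split a Think-Quantize-Transmit response into its three stages.

    Assumes the line-oriented format shown above: one THOUGHT line,
    one QUANTIZE line, and one SLIP line, in that order.
    """
    stages = {}
    for line in output.strip().splitlines():
        key, _, value = line.partition(":")
        stages[key.strip()] = value.strip()
    return stages

example = """\
THOUGHT: I need bob to do a code review on the auth module
QUANTIZE: [ACTION=request | DOMAIN=task | URGENCY=normal | POLARITY=neutral] -> RequestReview
SLIP: SLIP v1 alice bob RequestReview auth_module
"""
stages = parse_tqt(example)
# stages["SLIP"] holds the compact wire message to send onward
```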

## Training Details

| Parameter | Value |
|-----------|-------|
| Base Model | zai-org/GLM-Z1-9B-0414 |
| Method | LoRA (rank=16, alpha=16) |
| Epochs | 2 |
| Learning Rate | 2e-4 |
| Batch Size | 16 (4 × 4 gradient accumulation) |
| Sequence Length | 2048 |
| Training Examples | 2,283 |
| Hardware | Google Colab (A100/V100) |
| Framework | Unsloth + TRL |

### LoRA Target Modules

- Attention: `q_proj`, `k_proj`, `v_proj`, `o_proj`
- MLP: `gate_proj`, `up_proj`, `down_proj`

## Available Formats

| Format | Repository | Use Case |
|--------|------------|----------|
| LoRA Adapter | [slipstream-glm-z1-9b](https://huggingface.co/anthonym21/slipstream-glm-z1-9b) | Merge with base model |
| Merged 16-bit | [slipstream-glm-z1-9b-merged](https://huggingface.co/anthonym21/slipstream-glm-z1-9b-merged) | Direct loading |
| GGUF Q4_K_M | [slipstream-glm-z1-9b-gguf](https://huggingface.co/anthonym21/slipstream-glm-z1-9b-gguf) | Ollama / llama.cpp |
| GGUF Q8_0 | [slipstream-glm-z1-9b-gguf](https://huggingface.co/anthonym21/slipstream-glm-z1-9b-gguf) | Higher-quality local inference |
 
## Usage

### With Transformers + PEFT

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("zai-org/GLM-Z1-9B-0414")
model = PeftModel.from_pretrained(base_model, "anthonym21/slipstream-glm-z1-9b")
tokenizer = AutoTokenizer.from_pretrained("anthonym21/slipstream-glm-z1-9b")
```

### With Ollama

```bash
# Download GGUF
wget https://huggingface.co/anthonym21/slipstream-glm-z1-9b-gguf/resolve/main/slipstream-q4_k_m.gguf

# Create Modelfile
cat > Modelfile <<EOF
FROM ./slipstream-q4_k_m.gguf
SYSTEM "You are an AI agent using the Slipstream protocol for efficient multi-agent communication."
EOF

# Run
ollama create slipstream -f Modelfile
ollama run slipstream "Tell bob to review my code"
```

### With Unsloth (for inference)

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "anthonym21/slipstream-glm-z1-9b",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)
```

## UCR Anchors

The model understands 21 core anchors, including:

| Category | Anchors |
|----------|---------|
| Requests | `RequestTask`, `RequestReview`, `RequestHelp`, `RequestPlan` |
| Inform | `InformComplete`, `InformProgress`, `InformBlocked`, `InformStatus` |
| Propose | `ProposePlan`, `ProposeChange`, `ProposeAlternative` |
| Evaluate | `EvalApprove`, `EvalReject`, `EvalNeedsWork` |
| Meta | `Accept`, `Reject`, `MetaAck`, `MetaHandoff`, `Fallback` |
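
Since the QUANTIZE step must resolve to one of these names, it can be useful to keep them as a flat allow-list for validating model output. A sketch that simply mirrors the table above (the grouping and `is_valid_anchor` helper are illustrative, not part of slipcore):

```python
# Anchor names copied from the table above, grouped by category.
UCR_ANCHORS = {
    "Requests": ["RequestTask", "RequestReview", "RequestHelp", "RequestPlan"],
    "Inform": ["InformComplete", "InformProgress", "InformBlocked", "InformStatus"],
    "Propose": ["ProposePlan", "ProposeChange", "ProposeAlternative"],
    "Evaluate": ["EvalApprove", "EvalReject", "EvalNeedsWork"],
    "Meta": ["Accept", "Reject", "MetaAck", "MetaHandoff", "Fallback"],
}

# Flat set for O(1) membership checks when validating QUANTIZE output.
ALL_ANCHORS = {anchor for group in UCR_ANCHORS.values() for anchor in group}

def is_valid_anchor(name: str) -> bool:
    """Check whether a model-emitted anchor is in the known vocabulary."""
    return name in ALL_ANCHORS
```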

## Wire Format

```
SLIP v1 <src> <dst> <anchor> [payload...]
```

Example: `SLIP v1 alice bob RequestReview auth_module`
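
Because the format is whitespace-delimited, a receiver can unpack a message with a simple split. This `parse_slip` sketch follows the grammar above and is illustrative rather than the slipcore implementation:

```python
def parse_slip(message: str) -> dict:
    """Parse a SLIP v1 wire message into its fields.

    Grammar (from the spec above): SLIP v1 <src> <dst> <anchor> [payload...]
    The payload is optional and may span multiple tokens.
    """
    parts = message.split()
    if len(parts) < 5 or parts[0] != "SLIP" or parts[1] != "v1":
        raise ValueError(f"not a SLIP v1 message: {message!r}")
    return {
        "src": parts[2],
        "dst": parts[3],
        "anchor": parts[4],
        "payload": parts[5:],
    }

msg = parse_slip("SLIP v1 alice bob RequestReview auth_module")
# msg["anchor"] identifies the UCR anchor; msg["payload"] carries the arguments
```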

## Related Resources

- **Project Repo**: [github.com/anthony-maio/slipcore](https://github.com/anthony-maio/slipcore)
- **Training Dataset**: [hf.co/anthonym21/slipstream-tqt](https://huggingface.co/datasets/anthonym21/slipstream-tqt)
- **Paper**: [Slipstream: Semantic Quantization for Efficient Multi-Agent Coordination](https://doi.org/10.5281/zenodo.18063451)
- **PyPI**: `pip install slipcore`

## Citation

```bibtex
@misc{maio2025slipstream,
  title={Slipstream: Semantic Quantization for Efficient Multi-Agent Coordination},
  author={Maio, Anthony},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/anthonym21/slipstream-glm-z1-9b}
}
```

## License

Apache 2.0