reaperdoesntknow committed (verified) · Commit df37c1d · 1 parent: 17f4819

Update README.md

Files changed (1): README.md (+91 -20)
README.md CHANGED
@@ -1,43 +1,114 @@
  ---
  library_name: transformers
  tags:
- - trl
  - sft
  ---

- ## Convergent Intelligence Portfolio

- *Part of the [Topological Series](https://huggingface.co/reaperdoesntknow) by [Convergent Intelligence LLC: Research Division](https://huggingface.co/reaperdoesntknow)*

- ### Top Models from Our Lab

- | Model | Downloads |
- |-------|-----------|
- | [Qwen3-1.7B-Thinking-Distil](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Thinking-Distil) | 501 |
- | [LFM2.5-1.2B-Distilled-SFT](https://huggingface.co/reaperdoesntknow/LFM2.5-1.2B-Distilled-SFT) | 342 |
- | [Qwen3-1.7B-Coder-Distilled-SFT](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT) | 302 |
- | [Qwen3-0.6B-Distilled-30B-A3B-Thinking-SFT-GGUF](https://huggingface.co/reaperdoesntknow/Qwen3-0.6B-Distilled-30B-A3B-Thinking-SFT-GGUF) | 203 |
- | [Qwen3-1.7B-Coder-Distilled-SFT-GGUF](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT-GGUF) | 194 |

- **Total Portfolio: 41 models | 2,781 total downloads**

- *Last updated: 2026-03-28 12:56 UTC*

- <!-- CIX-CROSSLINK-START -->

- ---

- ## From the Convergent Intelligence Portfolio

- **[DistilQwen Collection](https://huggingface.co/collections/reaperdoesntknow/distilqwen-69bf40ec669117e3f069ef1c)** — Proof-weighted distillation from Qwen3-30B-A3B → 1.7B and 0.6B. Three teacher variants (Instruct, Thinking, Coder), nine models, 2,788 combined downloads. Structure beats scale.

- Top model: [Qwen3-1.7B-Coder-Distilled-SFT](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT) — 508 downloads

  Full methodology: [Structure Over Scale (DOI: 10.57967/hf/8165)](https://doi.org/10.57967/hf/8165)

- *Convergent Intelligence LLC: Research Division*

- <!-- CIX-CROSSLINK-END -->
  ---
+ license: apache-2.0
  library_name: transformers
+ pipeline_tag: text-generation
  tags:
+ - qwen3
  - sft
+ - trl
+ - topological-knowledge-distillation
+ - disc
+ - convergent-intelligence
+ base_model: Qwen/Qwen3-1.7B
  ---

+ # TopologicalQwen
+
+ **Topology-Aware Knowledge Distillation from Qwen3-30B-A3B → 1.7B**

+ *Convergent Intelligence LLC: Research Division*

+ ---

+ ## What This Is

+ TopologicalQwen is a 1.7B-parameter model distilled from Qwen3-30B-A3B using **Topological Knowledge Distillation (TKD)** — a methodology that treats the teacher's output distribution over a concatenated token stream as a function of bounded variation (BV) and decomposes knowledge transfer into three channels via the Mesh Fundamental Identity:

+ 1. **Smooth distillation (AC component)** — standard KL divergence over the regions where the teacher's distribution varies continuously. This is where conventional KD methods stop.
+ 2. **Jump corrections (D^j f)** — explicit correction terms at conceptual boundaries where the teacher's distribution is discontinuous: the points where topic, register, or reasoning mode shifts. Standard KD smears across these boundaries, losing structural information.
+ 3. **Drift corrections (D^c f)** — the Cantor (singular-continuous) component capturing gradual distributional drift that neither the smooth nor the jump terms account for. This residual structure shows up in generation quality.

+ Standard knowledge distillation handles only term (1). TKD captures all three.
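The three channels above line up with the classical Lebesgue decomposition of the distributional derivative of a BV function. As a sketch, in one dimension along the token axis (notation is the standard one from real analysis, not taken from the repo):

```latex
% Distributional derivative of a BV function f on the token axis:
% absolutely continuous part + jump part + Cantor part.
Df \;=\; \underbrace{f'(x)\,dx}_{\text{AC: smooth distillation}}
\;+\; \underbrace{\sum_{x_i \in J_f} \bigl(f(x_i^+) - f(x_i^-)\bigr)\,\delta_{x_i}}_{D^j f:\ \text{jump corrections}}
\;+\; \underbrace{D^c f}_{\text{Cantor: drift corrections}}
```

Here $J_f$ is the jump set and $\delta_{x_i}$ a Dirac mass at a jump position; minimizing KL globally only fits the first term.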
32
 
33
+ ## Architecture
34
 
35
+ | Parameter | Value |
36
+ |-----------|-------|
37
+ | Architecture | Qwen3ForCausalLM |
38
+ | Parameters | ~2.03B (1.7B effective) |
39
+ | Hidden Size | 2048 |
40
+ | Layers | 28 |
41
+ | Attention Heads | 16 (Q) / 8 (KV) β€” GQA |
42
+ | Intermediate | 6144 |
43
+ | Context Length | 40,960 tokens |
44
+ | Vocabulary | 151,936 |
45
+ | Precision | FP32 training, BF16/FP16 inference |
46
 
47
+ ## Training Methodology
48
 
49
+ The TKD pipeline has four phases:
50
 
51
+ **Phase 1 β€” Teacher logit caching:** Single forward pass through the teacher (Qwen3-30B-A3B) with top-k logit compression to disk. One pass, no repeated teacher inference.
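A minimal sketch of what top-k logit compression could look like, assuming NumPy arrays of shape `[seq_len, vocab]`; the function names and on-disk layout are illustrative, not the repo's actual cache format:

```python
import numpy as np

def compress_logits(logits: np.ndarray, k: int = 64):
    """Keep only the top-k logits per position as (indices, values) pairs.

    This is what makes a single teacher pass cacheable: 64 entries per
    position instead of a 151,936-wide row. Values could be further cast
    to fp16 on disk at some precision cost.
    """
    # argpartition: the first k columns hold the k largest logits (unordered)
    idx = np.argpartition(-logits, k - 1, axis=-1)[:, :k]
    vals = np.take_along_axis(logits, idx, axis=-1)
    return idx.astype(np.int32), vals

def decompress(idx, vals, vocab_size, fill=-1e4):
    """Rebuild a dense [seq_len, vocab] array; pruned entries get a floor logit."""
    dense = np.full((idx.shape[0], vocab_size), fill, dtype=np.float32)
    np.put_along_axis(dense, idx.astype(np.int64), vals.astype(np.float32), axis=-1)
    return dense
```

After a temperature softmax, the floor logit contributes near-zero probability mass, so the reconstructed distribution is a close proxy for the teacher's over the kept entries.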
+ **Phase 2 — DISC topology pass:** a vectorized discrepancy operator maps the knowledge manifold, identifying where the teacher's distribution has structural features (jumps, drift, curvature).

+ **Phase 3 — Topology-guided adaptive windowing:** training windows are cut at low-discrepancy positions rather than at a fixed stride, so boundaries fall where the least information is lost.
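Assuming a per-token discrepancy score from the Phase 2 pass (higher means more structure at that position), the windowing rule can be sketched as "cut near the target stride, at the lowest-discrepancy position in a slack band"; this is an illustrative reconstruction, not the repo's code:

```python
def cut_windows(disc, target=512, slack=64):
    """Cut [start, end) training windows from a per-token discrepancy score.

    Each window ends near `target` tokens, at the position of lowest
    discrepancy within +/- `slack`, so cuts avoid jump boundaries.
    """
    cuts, start, n = [], 0, len(disc)
    while start + target < n:
        lo = start + target - slack
        hi = min(start + target + slack, n)
        # pick the calmest position in the slack band as the cut point
        cut = lo + min(range(hi - lo), key=lambda i: disc[lo + i])
        cuts.append((start, cut))
        start = cut
    cuts.append((start, n))  # tail window
    return cuts
```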
56
+
57
+ **Phase 4 β€” Curriculum-ordered continuous KD:** Belt-fed training with proof-weighted loss. 55% cross-entropy with decaying proof weights (2.5Γ— β†’ 1.5Γ—), 45% KL divergence at T=2.0. Proof weights amplify loss on reasoning-critical tokens.
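The stated mix (55% CE, 45% KL at T=2.0, per-token proof weights) can be written down directly; a NumPy sketch with illustrative names, not the training code itself:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def tkd_loss(student_logits, teacher_logits, targets, proof_w,
             alpha=0.55, T=2.0):
    """Blend of proof-weighted CE (alpha) and temperature-T KL (1 - alpha).

    `proof_w` holds per-token weights, e.g. 2.5 decaying to 1.5 on
    reasoning-critical tokens and 1.0 elsewhere.
    """
    p_s = softmax(student_logits)                                   # [seq, vocab]
    ce = -np.log(p_s[np.arange(len(targets)), targets] + 1e-12)     # per-token CE
    ce = (proof_w * ce).mean()

    p_t = softmax(teacher_logits, T)
    log_ratio = np.log(p_t + 1e-12) - np.log(softmax(student_logits, T) + 1e-12)
    kl = (p_t * log_ratio).sum(axis=-1).mean() * T * T              # usual T^2 scaling
    return alpha * ce + (1 - alpha) * kl
```

The T² factor is the standard Hinton-style correction that keeps the soft-target gradient magnitude comparable across temperatures.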

  Full methodology: [Structure Over Scale (DOI: 10.57967/hf/8165)](https://doi.org/10.57967/hf/8165)

+ ## Usage
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "reaperdoesntknow/TopologicalQwen",
+     torch_dtype="auto",
+     device_map="auto",
+ )
+ tokenizer = AutoTokenizer.from_pretrained("reaperdoesntknow/TopologicalQwen")
+
+ messages = [{"role": "user", "content": "Derive the Euler-Lagrange equation from the principle of stationary action."}]
+ text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7, top_p=0.9)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```
+
+ ## Why Topology Matters

+ Standard knowledge distillation treats the teacher's output as a smooth function and minimizes KL divergence globally. This works for the easy parts — the regions where the teacher's distribution varies slowly. But language has structure: topic shifts, reasoning-mode transitions, register changes. At these boundaries the teacher's distribution jumps, and standard KD averages across them, teaching the student a blurred version of the teacher's actual knowledge.
+
+ TKD uses the DISC (Discrepancy Calculus) framework to detect these structural features before training, then allocates capacity and loss weight accordingly. The result is a student that preserves the teacher's structural understanding, not just its surface statistics.
+
+ Empirically, this 1.7B model produces responses with a structural reasoning quality that standard distillation at the same parameter count does not achieve.
+
+ ## Related Models
+
+ | Model | Description | Downloads |
+ |-------|-------------|-----------|
+ | [Qwen3-1.7B-Thinking-Distil](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Thinking-Distil) | TKD with Thinking teacher | 687 |
+ | [LFM2.5-1.2B-Distilled-SFT](https://huggingface.co/reaperdoesntknow/LFM2.5-1.2B-Distilled-SFT) | Cross-architecture TKD (LFM → Qwen) | 544 |
+ | [Qwen3-1.7B-Coder-Distilled-SFT](https://huggingface.co/reaperdoesntknow/Qwen3-1.7B-Coder-Distilled-SFT) | TKD with Coder teacher | 508 |
+
+ **[DistilQwen Collection](https://huggingface.co/collections/reaperdoesntknow/distilqwen-69bf40ec669117e3f069ef1c)** — full proof-weighted distillation series (9 models)
+
+ ## Citation
+
+ ```bibtex
+ @misc{colca2026topologicalqwen,
+   title={TopologicalQwen: Topology-Aware Knowledge Distillation via Bounded Variation Decomposition},
+   author={Colca, Roy S.},
+   year={2026},
+   publisher={HuggingFace},
+   url={https://huggingface.co/reaperdoesntknow/TopologicalQwen},
+   note={Convergent Intelligence LLC: Research Division}
+ }
+ ```
+
+ ---
+
+ *Convergent Intelligence LLC: Research Division*
+ *"Where classical analysis fails to see, we begin."*