Text Generation · MLX · Safetensors · Rust · qwen2 · 7b

Tags: agentic-coding, android, apple-silicon, attested, bash, c, chain-of-custody, chinese, code, code-completion, code-generation, code-infill, compacted, compensation-lora, consumer-gpu, cpp, cryptographically-verified, css, distillation, edge-inference, efficient, embedded, english, forge-alloy, function-calling, general, general-purpose, go, head-pruning, html, iphone, java, javascript, knowledge-distillation, kotlin, llama-cpp, lm-studio, local-inference, lora, macbook, mobile, multilingual, ollama, on-device, optimized, php, pruned, python, qwen, qwen-coder, qwen2.5, qwen2.5-coder, raspberry-pi, reproducible, ruby, sql, swift, teacher-student, typescript, validation-artifact, versatile, conversational
Upload v2-7b-coder-compensated.alloy.json with huggingface_hub
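The commit title is the default message `huggingface_hub` generates for a file upload. For context, a minimal sketch of what that step typically looks like; the local path is an assumption, the repo id is taken from the publication URL in the manifest below, and authentication is left to the usual `huggingface-cli login` flow:

    # Sketch of the upload implied by the commit message (not the publisher's
    # actual script). Assumes prior authentication via `huggingface-cli login`.
    from huggingface_hub import HfApi

    api = HfApi()
    api.upload_file(
        path_or_fileobj="v2-7b-coder-compensated.alloy.json",  # local manifest (assumed path)
        path_in_repo="v2-7b-coder-compensated.alloy.json",
        repo_id="continuum-ai/v2-7b-coder-compensated",        # from the manifest's publication URL
        commit_message="Upload v2-7b-coder-compensated.alloy.json with huggingface_hub",
    )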
v2-7b-coder-compensated.alloy.json (CHANGED)
@@ -1,6 +1,6 @@
 {
   "name": "v2-7b-coder-compensated",
-  "version": "1.2.
+  "version": "1.2.1",
   "description": "Methodology validation artifact for the v2 forge pipeline + KL-distillation compensation LoRA. Demonstrates that aggressive head pruning + activation-metric importance + pad-mode defrag, when paired with output-distribution distillation against the unmodified teacher, recovers near-base HumanEval capability (61.0 vs 62.2 base, within calibration tolerance). This is the empirical anchor for PLASTICITY-COMPACTION \u00a74.1.3.3 and the loss-function ablation that closes the \u00a74.1.3.2 PPL/HumanEval disconnect. NOT a Pareto improvement over the unmodified base 7B at any single VRAM tier \u2014 published as proof that the methodology stack works end-to-end, in preparation for the Qwen3.5-35B-A3B and 397B-A17B forges where the pruning dimension actually wins.",
   "author": "continuum-ai",
   "tags": [
@@ -191,9 +191,9 @@
       {
         "target": "huggingface",
         "url": "https://huggingface.co/continuum-ai/v2-7b-coder-compensated",
-        "publishedAt": "2026-04-08T05:
+        "publishedAt": "2026-04-08T05:02:57.072577+00:00"
       }
     ],
-    "issuedAt": "2026-04-08T05:
+    "issuedAt": "2026-04-08T05:02:57.072577+00:00"
   }
 }
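The "output-distribution distillation against the unmodified teacher" named in the description is standard KL-based knowledge distillation over next-token distributions. A minimal sketch of the loss such a compensation LoRA would be trained against, assuming PyTorch; the function name and temperature are illustrative, not the forge pipeline's actual code:

    # Sketch of an output-distribution (KL) compensation objective.
    # Names and the temperature value are assumptions for illustration.
    import torch
    import torch.nn.functional as F

    def kl_compensation_loss(student_logits: torch.Tensor,
                             teacher_logits: torch.Tensor,
                             temperature: float = 2.0) -> torch.Tensor:
        """KL divergence from the unmodified teacher's next-token
        distribution to the pruned student's, averaged per token."""
        # Flatten (batch, seq, vocab) -> (batch*seq, vocab) so that
        # reduction="batchmean" averages over every token position.
        log_p_student = F.log_softmax(student_logits / temperature, dim=-1).flatten(0, -2)
        p_teacher = F.softmax(teacher_logits / temperature, dim=-1).flatten(0, -2)
        # The T^2 factor keeps gradient magnitudes roughly constant
        # as the softening temperature changes.
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

Training only the compensation LoRA against this objective, with the pruned base weights frozen, is the usual setup for such a recovery step; the manifest's description indicates the teacher here is the unmodified base model rather than a finetuning corpus with hard labels.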