
AXL-Refactor-Lion

A refactoring specialist: 19.1M parameters, perplexity 1.11, 2048-byte context.

Property             Value
Architecture         Multi-Scale Transformer
d_model              ?
Attention Heads      ?
Layers per Scale     ?
Context Window       2048 bytes
Downsample Factors   [1, 2, 4]
Vocab Size           258 (byte-level)
Optimizer            Lion
Trained on 7 MB of before/after refactoring pairs: 162 steps in 10 minutes. Learned transformations include rewriting loops as comprehensions and if-else blocks as ternary expressions.
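To make the two transformations concrete, here is a hand-written sketch of the kind of before/after pair the model is trained on (the function names are illustrative, not from the training data, and the model's actual output may differ):

```python
# "Before": verbose style typical of the input side of a training pair.
def squares_of_evens(nums):
    result = []
    for n in nums:
        if n % 2 == 0:
            result.append(n * n)
    return result

def sign_label(x):
    if x >= 0:
        label = "non-negative"
    else:
        label = "negative"
    return label

# "After": the idiomatic forms the model is trained to produce —
# a list comprehension and a ternary (conditional) expression.
def squares_of_evens_refactored(nums):
    return [n * n for n in nums if n % 2 == 0]

def sign_label_refactored(x):
    return "non-negative" if x >= 0 else "negative"
```

Both pairs are behavior-preserving: the refactored versions return the same values for the same inputs.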
Metric           Value
Final Loss       0.0922
Perplexity       1.11
Training Steps   162
Training Time    10 min

Usage

ollama create axl-refactor-lion -f Modelfile
ollama run axl-refactor-lion "def fibonacci():"
Transforms verbose code to idiomatic Python.
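Beyond the CLI, a running Ollama instance can be called programmatically. A minimal sketch, assuming a local Ollama server on its default port (11434) and using Ollama's `/api/generate` endpoint; the `refactor` helper name is our own:

```python
import json
import urllib.request

def build_payload(code: str, model: str = "axl-refactor-lion") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a stream.
    """
    return {"model": model, "prompt": code, "stream": False}

def refactor(code: str, host: str = "http://localhost:11434") -> str:
    """Send code to a local Ollama server and return the model's completion."""
    data = json.dumps(build_payload(code)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

For example, `refactor("def fibonacci():")` sends the same prompt as the `ollama run` command above.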
File          Size    Format
F16 GGUF      38 MB   Full precision
Q4_K_M GGUF   12 MB   4-bit quantized
Both GGUF files work with Ollama and llama.cpp; the Q4_K_M quantization is roughly 3x smaller than F16.