---
license: apache-2.0
language:
  - code
tags:
  - code-generation
  - multi-scale-transformer
  - cpu-optimized
  - koinic
  - pytorch
  - llama
  - gguf
  - byte-level
  - refactoring
pipeline_tag: text-generation
library_name: transformers
datasets:
  - bigcode/starcoderdata
  - theblackcat102/evol-codealpaca-v1
widget:
  - text: |-
      Before:
      f = open("file.txt")
      data = f.read()
      f.close()
      After:
model-index:
  - name: AXL-Refactor-20M
    results:
      - task:
          type: text-generation
        metrics:
          - name: Perplexity (byte-level)
            type: perplexity
            value: 1.01
---

# AXL-Refactor-20M

A byte-level code-refactoring model trained with SGD: 19.1M parameters, byte-level perplexity 1.01, 1024-byte context. Part of the AXL model family by KoinicLabs.

## Model Details

| Property | Value |
|----------|-------|
| Developed by | KoinicLabs |
| Architecture | Multi-Scale Transformer |
| Parameters | 19M |
| Optimizer | SGD |
| Attention | SDPA |
| Vocab Size | 258 (byte-level) |
| Context Window | 1024 bytes |
| d_model | 288 |
| Attention Heads | 4 |
| Layers per Scale | 4 |
| Downsample Factors | [1, 2, 4] |
| License | Apache 2.0 |

## Uses

### Direct Use

Code refactoring (SGD baseline).

```python
import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Build the model from its config before loading checkpoint weights
config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_refactor_20m.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

tokenizer = ByteTokenizer()
ids = torch.tensor([tokenizer.encode("def hello():")], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=50, temperature=0.8)
print(tokenizer.decode(out[0].tolist()))
```

### Out-of-Scope Use

Not for production code generation. Not for non-code NLP tasks. For integration with tools like Continue.dev, LlamaIndex, or LangChain, use the Python API server, which provides OpenAI-compatible endpoints.
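
An OpenAI-compatible endpoint typically accepts a JSON completion request. As a rough sketch (the server URL, port, and model name here are assumptions, not documented values), a client request to such a server might be built like this:

```python
import json

# Hypothetical payload for an OpenAI-compatible /v1/completions endpoint.
# The model name "axl-refactor-20m" and local port are assumptions.
payload = {
    "model": "axl-refactor-20m",
    "prompt": 'Before:\nf = open("file.txt")\ndata = f.read()\nf.close()\nAfter:\n',
    "max_tokens": 100,
    "temperature": 0.8,
}
body = json.dumps(payload)
print(body)

# To actually send it (requires the API server running locally):
# import requests
# resp = requests.post("http://localhost:8000/v1/completions", data=body,
#                      headers={"Content-Type": "application/json"})
```

Tools such as Continue.dev can then be pointed at the server's base URL as if it were an OpenAI endpoint.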

## Bias, Risks, and Limitations

Byte-level perplexity is not comparable to BPE-level perplexity. The maximum context is 1024 bytes. Note: GGUF files for Ollama use a simplified single-stack encoder; for full AXL quality, use the Python API server.

### Recommendations

- Use for prototyping and experimentation, not production code generation.
- Byte-level perplexity (258-token vocab) is not comparable to BPE-level perplexity (32K vocab).
- For better results, use the Lion-optimized version if available.
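
One way to compare models across tokenizers is to convert perplexity to bits per byte, which normalizes away the vocabulary size. A minimal sketch of that conversion for this model's reported byte-level PPL:

```python
import math

# Bits per byte (bpb) restates byte-level perplexity on a tokenizer-neutral
# scale: bpb = log2(ppl_byte). For PPL 1.01 this is ~0.014 bits/byte.
ppl_byte = 1.01
bpb = math.log2(ppl_byte)
print(f"{bpb:.4f} bits/byte")
```

A BPE model's per-token perplexity would need to be rescaled by its average bytes per token before the numbers line up.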

## Training Details

### Training Data

Trained for 202 steps on roughly 7 MB of before/after refactoring pairs.

### Preprocessing

Byte-level tokenization with vocabulary size 258 (256 byte values + BOS + EOS). No vocabulary training required.
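
A byte-level tokenizer of this shape is small enough to sketch in full. This is a minimal illustration, not the shipped `ByteTokenizer`; in particular, the BOS/EOS ids (256, 257) are an assumption about how the two special tokens are assigned:

```python
# Assumed special-token ids: the 256 raw byte values occupy 0-255,
# so BOS and EOS take the next two slots (this assignment is a guess).
BOS, EOS = 256, 257

def encode(text: str) -> list[int]:
    """Wrap the UTF-8 bytes of `text` in BOS/EOS markers."""
    return [BOS] + list(text.encode("utf-8")) + [EOS]

def decode(ids: list[int]) -> str:
    """Drop special tokens and decode the remaining bytes."""
    return bytes(i for i in ids if i < 256).decode("utf-8", errors="replace")

ids = encode("def hello():")
print(ids[:4])        # BOS followed by the bytes of "def"
print(decode(ids))    # round-trips to "def hello():"
```

Because every possible byte already has an id, no vocabulary needs to be learned from data.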

### Speeds, Sizes, Times

| Metric | Value |
|--------|-------|
| Training Steps | 202 |
| Training Time | 5 min |
| Final Loss | 0.0081 |

## Evaluation

### Metrics

Perplexity on held-out Python code using byte-level tokenization.
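
Perplexity here is the exponential of the mean cross-entropy loss in nats, so the reported value follows directly from the final training loss:

```python
import math

# Byte-level perplexity = exp(mean cross-entropy in nats).
# The reported PPL of 1.01 is consistent with the final loss of 0.0081.
final_loss = 0.0081
ppl = math.exp(final_loss)
print(round(ppl, 2))  # 1.01
```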

### Results

| Metric | Value |
|--------|-------|
| Perplexity (byte-level) | 1.01 |
| Final Loss | 0.0081 |
| Training Steps | 202 |
| Training Time | 5 min |

Summary: this SGD run is the refactoring baseline; the Lion-optimized variant, AXL-Refactor-Lion, also reports PPL 1.01.

## Environmental Impact

| Property | Value |
|----------|-------|
| Hardware | AMD Ryzen 5 5600G |
| Hours Used | 0.083 |
| Carbon Emitted | 0.0035 kg CO2 |
| Cloud Provider | None (local CPU) |

## Technical Specifications

### Model Architecture

Multi-Scale Transformer with three parallel encoder stacks at resolution scales 1x, 2x, and 4x. Cross-scale attention connects all scale pairs, and adaptive gating fuses the streams. Feed-forward layers use SwiGLU; positional encoding is RoPE.
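
The downsample factors [1, 2, 4] determine the sequence length each encoder stack sees. The pooling strategy below (non-overlapping strided averaging) is an assumption for illustration, not necessarily what `MultiScaleTransformer` does internally:

```python
# Sketch: how a 1024-byte context is viewed by the three parallel stacks.
seq = list(range(1024))  # stand-in for a 1024-byte input sequence

def downsample(seq: list, factor: int) -> list:
    """Average each non-overlapping window of `factor` positions."""
    return [sum(seq[i:i + factor]) / factor
            for i in range(0, len(seq), factor)]

lengths = {f: len(downsample(seq, f)) for f in (1, 2, 4)}
print(lengths)  # {1: 1024, 2: 512, 4: 256}
```

The coarser 2x and 4x streams trade positional detail for cheaper attention, which is part of what makes the model practical on CPU.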

### Compute Infrastructure

| Property | Value |
|----------|-------|
| Hardware | AMD Ryzen 5 5600G (6 cores, 12 threads) |
| RAM | 16 GB |
| GPU | None (CPU-only) |

## Citation

```bibtex
@misc{axl_2026,
  title={AXL: AXL-Refactor-20M - Multi-Scale Transformer for CPU Code Generation},
  author={Koinic},
  year={2026},
  url={https://huggingface.co/KoinicLabs}
}
```

## How to Get Started

### With Ollama

```shell
ollama create axl-refactor-20m -f Modelfile
ollama run axl-refactor-20m "def fibonacci():"
```

### With Python

```python
import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_refactor_20m.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

tokenizer = ByteTokenizer()
prompt = "def fibonacci():"
ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=100, temperature=0.8, top_k=40)
print(tokenizer.decode(out[0].tolist()))
```