---
license: apache-2.0
language:
- code
tags:
- code-generation
- multi-scale-transformer
- cpu-optimized
- koinic
- pytorch
- llama
- gguf
- byte-level
- refactoring
pipeline_tag: text-generation
library_name: transformers
datasets:
- bigcode/starcoderdata
- sahil2801/CodeAlpaca-20k
widget:
- text: "Before:\nresult = []\nfor x in items:\n    if x > 0:\n        result.append(x * 2)\nAfter:"
- text: "Before:\nf = open(\"file.txt\", \"r\")\ndata = f.read()\nf.close()\nAfter:"
model-index:
- name: AXL-Refactor-Lion
  results:
  - task:
      type: text-generation
    metrics:
    - name: Perplexity (byte-level)
      type: perplexity
      value: 1.02
---

# AXL-Refactor-Lion

Refactoring specialist. 19.1M params. PPL 1.02. Context 256 bytes.

Part of the AXL model family by [KoinicLabs](https://huggingface.co/KoinicLabs).

## Model Details

| Property | Value |
|----------|-------|
| Developed by | [KoinicLabs](https://huggingface.co/KoinicLabs) |
| Architecture | Multi-Scale Transformer |
| Parameters | 19.1M |
| Optimizer | Lion |
| Attention | SDPA |
| Vocab Size | 258 (byte-level) |
| Context Window | 256 bytes |
| d_model | 288 |
| Attention Heads | 4 |
| Layers per Scale | 4 |
| Downsample Factors | [1, 2, 4] |
| License | Apache 2.0 |

### Sources

- **Repository:** [GitHub](https://github.com/Koinic/AXL)
- **Organization:** [KoinicLabs](https://huggingface.co/KoinicLabs)

## Uses

### Direct Use

Transforms verbose code into idiomatic Python.
```python
import torch

from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Build the model from its config before restoring the checkpoint weights.
config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_refactor_lion.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

tokenizer = ByteTokenizer()
ids = torch.tensor([tokenizer.encode("def hello():")], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=50, temperature=0.8)
print(tokenizer.decode(out[0].tolist()))
```

### Out-of-Scope Use

Not for production code generation. Not for non-code NLP tasks.

For integration with tools like Continue.dev, LlamaIndex, or LangChain, use the Python API server, which provides OpenAI-compatible endpoints.

## Bias, Risks, and Limitations

- Byte-level perplexity is not comparable to BPE-level perplexity.
- Max context is 256 bytes.
- Simple refactoring patterns only.

**IMPORTANT:** GGUF files exported for Ollama/LM Studio use only the fine-scale encoder (1/3 of the AXL architecture). The reported PPL applies to the full multi-scale model. For full AXL quality, use the Python API server at `http://localhost:8880/v1/completions`.

### Recommendations

- Use for prototyping and experimentation, not production code generation.
- Byte-level perplexity (258 vocab) is not comparable to BPE-level perplexity (32K vocab).

## Training Details

### Training Data

Trained on 7 MB of before/after refactoring pairs (loops to comprehensions, if-else to ternary). 162 steps in 10 min.

### Preprocessing

Byte-level tokenization with vocabulary size 258 (256 bytes + BOS + EOS). No vocabulary training required.

### Speeds, Sizes, Times

| Metric | Value |
|--------|-------|
| Training Steps | 162 |
| Training Time | 10 min |
| Final Loss | 0.0922 |

## Evaluation

### Metrics

Perplexity on held-out Python code using byte-level tokenization.
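As a back-of-the-envelope illustration of why byte-level and BPE-level perplexities are not comparable: perplexity is exp of the mean negative log-likelihood *per unit*, and a byte carries far less information than a 32K-vocab BPE token. The bytes-per-token ratio below is an assumed typical value, not one measured on AXL's data:

```python
import math

def perplexity(mean_nll: float) -> float:
    """Perplexity is exp of the mean negative log-likelihood per unit
    (per byte here; per BPE token for 32K-vocab models)."""
    return math.exp(mean_nll)

def byte_ppl_to_token_ppl(byte_ppl: float, bytes_per_token: float = 4.0) -> float:
    """Convert per-byte perplexity to a rough per-token equivalent by
    raising it to the average bytes per token (assumed ~4 for BPE)."""
    return byte_ppl ** bytes_per_token

# A seemingly tiny byte-level PPL corresponds to a larger token-level PPL:
print(byte_ppl_to_token_ppl(1.02))  # 1.02 ** 4 ≈ 1.082
```

So a byte-level PPL of 1.02 should not be read as near-perfect prediction on the token scale used by BPE-based code models.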
### Results

| Metric | Value |
|--------|-------|
| Perplexity (byte-level) | 1.02 |
| Final Loss | 0.0922 |
| Training Steps | 162 |
| Training Time | 10 min |

**Summary:** Transforms verbose code into idiomatic Python.

## Environmental Impact

| Property | Value |
|----------|-------|
| Hardware | AMD Ryzen 5 5600G |
| Hours Used | 0.167 |
| Carbon Emitted | 0.0070 kg CO2 |
| Cloud Provider | None (local CPU) |

## Technical Specifications

### Model Architecture

Multi-Scale Transformer with three parallel encoder stacks at resolution scales 1x, 2x, and 4x. Cross-scale attention connects all scale pairs. Adaptive gating fusion. SwiGLU feed-forward. RoPE positional encoding.

### Compute Infrastructure

| Property | Value |
|----------|-------|
| Hardware | AMD Ryzen 5 5600G (6 cores, 12 threads) |
| RAM | 16 GB |
| GPU | None (CPU-only) |

## Citation

```bibtex
@misc{axl_2026,
  title={AXL: AXL-Refactor-Lion - Multi-Scale Transformer for CPU Code Generation},
  author={Koinic},
  year={2026},
  url={https://huggingface.co/KoinicLabs}
}
```

## How to Get Started

### With Ollama

```bash
ollama create axl-refactor-lion -f Modelfile
ollama run axl-refactor-lion "def fibonacci():"
```

### With Python

```python
import torch

from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_refactor_lion.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

tokenizer = ByteTokenizer()
prompt = "def fibonacci():"
ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=100, temperature=0.8, top_k=40)
print(tokenizer.decode(out[0].tolist()))
```
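
### With the API Server

The card recommends the Python API server's OpenAI-compatible endpoint for full multi-scale quality. A minimal client sketch follows; the endpoint URL comes from this card, but the payload fields and the `model` id are assumptions based on the standard OpenAI completions schema:

```python
import json
import urllib.request

# Endpoint stated in this card; assumes the server is running locally.
API_URL = "http://localhost:8880/v1/completions"

def build_request(prompt: str, max_tokens: int = 100, temperature: float = 0.8) -> dict:
    """Build an OpenAI-style completions payload (field names assumed)."""
    return {
        "model": "axl-refactor-lion",  # hypothetical model id
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt: str, **kwargs) -> str:
    """POST to the local AXL server and return the first completion text."""
    data = json.dumps(build_request(prompt, **kwargs)).encode()
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]

# With the server running:
# print(complete("Before:\nresult = []\nfor x in items:\n    result.append(x * 2)\nAfter:"))
```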