---
license: apache-2.0
language:
- code
tags:
- code-generation
- multi-scale-transformer
- cpu-optimized
- koinic
- pytorch
- llama
- gguf
- byte-level
pipeline_tag: text-generation
library_name: transformers
datasets:
- bigcode/starcoderdata
- sahil2801/CodeAlpaca-20k
widget:
- text: "def fibonacci(n):"
- text: "class Calculator:"
- text: "import os\ndef list_files(path):"
model-index:
- name: AXL-Code-1B-Lion
results:
- task:
type: text-generation
metrics:
- name: Perplexity (byte-level)
type: perplexity
value: 1.9
---
# AXL-Code-1B-Lion
The largest model in the Lion-trained AXL series: 318M parameters, trained in 20 minutes on CPU to a byte-level perplexity of 1.90 with a 256-byte context window. Part of the AXL model family by [KoinicLabs](https://huggingface.co/KoinicLabs).
## Model Details
| Property | Value |
|----------|-------|
| Developed by | [KoinicLabs](https://huggingface.co/KoinicLabs) |
| Architecture | Multi-Scale Transformer |
| Parameters | 318M |
| Optimizer | Lion |
| Attention | SDPA |
| Vocab Size | 258 (byte-level) |
| Context Window | 256 bytes |
| d_model | 1024 |
| Attention Heads | 16 |
| Layers per Scale | 6 |
| Downsample Factors | [1, 2, 4] |
| License | Apache 2.0 |
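As a rough sanity check, the dimensions in the table roughly account for the parameter count via the standard ≈12·d²·L estimate for transformer blocks (attention ≈4d², feed-forward ≈8d² per layer); it is an assumption here that cross-scale attention, the gating fusion, and the output head make up the remainder up to 318M:

```python
d_model = 1024
layers = 6 * 3  # 6 layers per scale, 3 scales

# Standard per-block estimate: 4*d^2 attention + 8*d^2 feed-forward
block_params = 12 * d_model**2 * layers
print(f"{block_params / 1e6:.0f}M")  # ~226M from the encoder stacks alone
```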
### Sources
- **Repository:** [GitHub](https://github.com/Koinic/AXL)
- **Organization:** [KoinicLabs](https://huggingface.co/KoinicLabs)
## Uses
### Direct Use
Code completion and generation from prompts.
```python
import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Load the architecture config and checkpoint weights
config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_code_1b_lion.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

tokenizer = ByteTokenizer()
ids = torch.tensor([tokenizer.encode("def hello():")], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=50, temperature=0.8)
print(tokenizer.decode(out[0].tolist()))
```
### Out-of-Scope Use
Not intended for:

- Production code generation.
- Non-code NLP tasks.
- Complex multi-step reasoning.

For integration with tools like Continue.dev, LlamaIndex, or LangChain, use the Python API server, which provides OpenAI-compatible endpoints.
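A minimal sketch of a client request for that server, assuming it follows the OpenAI legacy `/v1/completions` request schema; the endpoint URL comes from this card, but the field names and model identifier are assumptions:

```python
import json
import urllib.request

def build_completion_request(prompt: str, max_tokens: int = 64) -> dict:
    """Build an OpenAI-style completion payload (field names are assumptions)."""
    return {
        "model": "axl-code-1b-lion",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": 0.8,
    }

payload = build_completion_request("def fibonacci(n):")

# Target the local AXL API server; sending requires the server to be running
req = urllib.request.Request(
    "http://localhost:8880/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment with the server running
```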
## Bias, Risks, and Limitations
- Byte-level perplexity (258 vocab) is not comparable to BPE-level perplexity (32K vocab).
- Not suitable for production code generation.
- Maximum context is 256 bytes.

**Important:** GGUF files exported for Ollama/LM Studio use only the fine-scale encoder (1/3 of the AXL architecture), so the reported PPL applies only to the full multi-scale model. For full AXL quality, use the Python API server at `http://localhost:8880/v1/completions`.
### Recommendations
- Use for prototyping and experimentation, not production code generation.
- Byte-level perplexity (258 vocab) is not comparable to BPE-level perplexity (32K vocab).
- This is the Lion-optimized variant; it substantially outperforms the SGD-trained run (PPL 1.90 vs 31.22).
## Training Details
### Training Data
Trained on 50 MB of real Python code from Hugging Face datasets for 421 steps (20 minutes). The Lion optimizer reached a perplexity of 1.90, versus 31.22 for the SGD baseline.
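Lion's advantage comes from its sign-based update: the parameter moves by exactly the learning rate each step, regardless of gradient magnitude. A minimal scalar sketch of the published Lion rule (not this project's training code):

```python
import math

def lion_step(p, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update for scalar parameter p, gradient g, momentum m.

    Lion steps by the *sign* of an interpolation between momentum and
    gradient, with decoupled weight decay; momentum is updated separately.
    """
    update = math.copysign(1.0, beta1 * m + (1 - beta1) * g)
    p = p * (1 - lr * wd) - lr * update
    m = beta2 * m + (1 - beta2) * g
    return p, m

# The step size is lr no matter how large the gradient is
p, m = lion_step(1.0, 100.0, 0.0, lr=0.01)
print(p)  # 0.99
```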
### Preprocessing
Byte-level tokenization with vocabulary size 258 (256 bytes + BOS + EOS). No vocabulary training required.
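A minimal sketch of what a byte-level tokenizer with this vocabulary looks like; the actual `ByteTokenizer` implementation, and the specific IDs used for BOS/EOS (here 256 and 257), are assumptions:

```python
BOS, EOS = 256, 257  # assumed IDs for the two tokens beyond the 256 raw bytes

def encode(text: str) -> list[int]:
    """Map text to its UTF-8 byte values, framed by BOS/EOS. No training needed."""
    return [BOS] + list(text.encode("utf-8")) + [EOS]

def decode(ids: list[int]) -> str:
    """Drop special tokens and decode the remaining bytes back to text."""
    return bytes(i for i in ids if i < 256).decode("utf-8", errors="replace")

ids = encode("def f():")
print(len(ids))     # 10: 8 bytes + BOS + EOS
print(decode(ids))  # def f():
```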
### Speeds, Sizes, Times
| Metric | Value |
|--------|-------|
| Training Steps | 421 |
| Training Time | 20 min |
| Final Loss | 0.6338 |
## Evaluation
### Metrics
Perplexity on held-out Python code using byte-level tokenization.
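The reported perplexity is consistent with the final loss: for language modeling, perplexity is the exponential of the mean cross-entropy loss, and exp(0.6338) ≈ 1.885, which rounds to the reported 1.9.

```python
import math

# Perplexity is exp(mean cross-entropy); check against the reported final loss
final_loss = 0.6338
ppl = math.exp(final_loss)
print(round(ppl, 3))  # 1.885
```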
### Results
| Metric | Value |
|--------|-------|
| Perplexity (byte-level) | 1.9 |
| Final Loss | 0.6338 |
| Training Steps | 421 |
| Training Time | 20 min |
**Summary:** The strongest code-generation model in the AXL family, suited to general-purpose code completion.
## Environmental Impact
| Property | Value |
|----------|-------|
| Hardware | AMD Ryzen 5 5600G |
| Hours Used | 0.334 |
| Carbon Emitted | 0.0140 kg CO2 |
| Cloud Provider | None (local CPU) |
## Technical Specifications
### Model Architecture
Multi-Scale Transformer with three parallel encoder stacks at resolution scales 1x, 2x, and 4x. Cross-scale attention connects all scale pairs. Adaptive gating fusion. SwiGLU feed-forward. RoPE positional encoding.
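A pure-Python sketch of how the three resolutions relate for one 256-byte window; the real encoder downsamples with learned layers inside the network, so the simple stride-averaging here is only an assumption for illustration:

```python
def downsample(seq: list[int], factor: int) -> list[float]:
    """Average non-overlapping windows of `factor` positions (illustrative only)."""
    return [sum(seq[i:i + factor]) / factor for i in range(0, len(seq), factor)]

window = list(range(256))  # one full 256-byte context window
scales = {f: downsample(window, f) for f in (1, 2, 4)}

# Each parallel encoder stack sees the same window at a coarser resolution
print({f: len(s) for f, s in scales.items()})  # {1: 256, 2: 128, 4: 64}
```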
### Compute Infrastructure
| Property | Value |
|----------|-------|
| Hardware | AMD Ryzen 5 5600G (6 cores, 12 threads) |
| RAM | 16 GB |
| GPU | None (CPU-only) |
## Citation
```bibtex
@misc{axl_2026,
  title={AXL: AXL-Code-1B-Lion - Multi-Scale Transformer for CPU Code Generation},
  author={Koinic},
  year={2026},
  url={https://huggingface.co/KoinicLabs}
}
```
## How to Get Started
### With Ollama
```bash
ollama create axl-code-1b-lion -f Modelfile
ollama run axl-code-1b-lion "def fibonacci():"
```
### With Python
```python
import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Load config and weights, then switch to inference mode
config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_code_1b_lion.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

# Byte-level tokenization: no vocabulary files needed
tokenizer = ByteTokenizer()
prompt = "def fibonacci():"
ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=100, temperature=0.8, top_k=40)
print(tokenizer.decode(out[0].tolist()))
```