---
language:
- en
license: apache-2.0
tags:
- code
- reasoning
- coding
- instruct
- 8b
- 1kz
- lfm-inspiration
library_name: transformers
pipeline_tag: text-generation
inference: true
---
# bigcodemax
**Maximum coding + reasoning power in 8B parameters**
Created by **[1kz](https://huggingface.co/1kz)**
An 8B model that punches way above its weight in code generation, software engineering, advanced reasoning, math, and long-context understanding.
## Model Details
- **Developer**: [1kz](https://huggingface.co/1kz)
- **Parameters**: 8.0B (dense)
- **Context length**: 128K (RoPE scaled)
- **Architecture**: Llama-3.1 style (same tokenizer & chat template as Meta-Llama-3.1-8B-Instruct)
- **Base model**: Fine-tuned from a strong 8B checkpoint
- **Training inspiration**: Huge thanks to **lfm** for the incredible training recipes, data curation, synthetic data pipelines, and open methodology that made this model possible. Your work continues to inspire and push the frontier for compact high-performance models! ❤️
## Strengths
- Best-in-class code generation, editing, and debugging
- Strong mathematical & logical reasoning (CoT & ToT)
- Excellent at understanding and refactoring large codebases
- Agentic coding, tool use, and multi-step problem solving
- Fast inference on consumer hardware (single 4090 / 24GB VRAM)
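A rough back-of-envelope check on the single-24GB-GPU claim above: 8.0B parameters at 2 bytes each (bf16/fp16) come to roughly 15 GiB of weights, leaving headroom for the KV cache and activations. The exact figures here are an illustration, not a measured footprint:

```python
# Back-of-envelope weight memory for an 8B-parameter model in bf16/fp16.
params = 8.0e9          # parameter count
bytes_per_param = 2     # bf16/fp16: 2 bytes per parameter

weight_gib = params * bytes_per_param / 2**30
print(f"~{weight_gib:.1f} GiB of weights")  # ~14.9 GiB

# Fits on a 24 GB card with room left for KV cache and activations.
assert weight_gib < 24
```

Quantized variants (e.g. 4-bit) shrink this further at some quality cost.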
## Quick Start
```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="1kz/bigcodemax",
    device_map="auto",
    torch_dtype="auto",
)

messages = [
    {"role": "system", "content": "You are bigcodemax, an expert coding and reasoning assistant."},
    {"role": "user", "content": "Implement a thread-safe LRU Cache in Python with O(1) operations and explain every design choice step-by-step."},
]

output = pipe(messages, max_new_tokens=2048, temperature=0.6, top_p=0.95, do_sample=True)
print(output[0]["generated_text"][-1]["content"])
```
## Benchmarks (internal eval)
## Acknowledgments
Massive thank you to **lfm**: without your public training logs, data-mixing strategies, and relentless open-source experimentation, a model this capable at only 8B would not exist. You're building the future of accessible frontier intelligence. 🚀
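For reference, the Quick Start prompt asks the model for a thread-safe LRU cache with O(1) operations. A minimal sketch of one common approach (an `OrderedDict` guarded by a single lock; class and method names here are illustrative, not the model's output):

```python
from collections import OrderedDict
from threading import Lock

class LRUCache:
    """Thread-safe LRU cache with O(1) get/put.

    OrderedDict provides O(1) move_to_end and popitem; a single lock
    serializes access so concurrent callers see a consistent state.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data: OrderedDict = OrderedDict()
        self._lock = Lock()

    def get(self, key, default=None):
        with self._lock:
            if key not in self._data:
                return default
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]

    def put(self, key, value):
        with self._lock:
            if key in self._data:
                self._data.move_to_end(key)
            self._data[key] = value
            if len(self._data) > self.capacity:
                self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now most recently used
cache.put("c", 3)      # capacity exceeded: evicts "b"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

A single coarse lock keeps the invariants simple; sharded locks would raise throughput at the cost of a more involved eviction path.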