---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- smolagents
- code-generation
- qwen2
- text-generation
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
---
# pynb-73m-base
A 73M parameter language model trained for code generation with [smolagents](https://github.com/huggingface/smolagents). Built on the Qwen2 architecture.
## Model Details
| Property | Value |
|----------|-------|
| Parameters | 73.6M |
| Architecture | Qwen2ForCausalLM |
| Hidden size | 384 |
| Layers | 12 |
| Attention heads | 6 (2 KV heads, GQA 3:1) |
| Intermediate size | 768 |
| Context length | 2048 |
| Vocab size | 151,671 |
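The 73.6M figure can be reproduced from the table with a back-of-the-envelope count, assuming standard Qwen2 conventions (tied input/output embeddings, biases on the Q/K/V projections only, SwiGLU MLP, RMSNorm):

```python
# Parameter count from the table above, assuming Qwen2 conventions:
# tied embeddings, Q/K/V biases, SwiGLU MLP, per-layer RMSNorm weights.
vocab, hidden, layers, heads, kv_heads, inter = 151_671, 384, 12, 6, 2, 768
head_dim = hidden // heads               # 64
kv_dim = kv_heads * head_dim             # 128 (GQA: 2 KV heads)

embed = vocab * hidden                   # shared with the LM head when tied
attn = hidden * hidden + hidden          # Q projection + bias
attn += 2 * (hidden * kv_dim + kv_dim)   # K and V projections + biases
attn += hidden * hidden                  # output projection (no bias)
mlp = 3 * hidden * inter                 # gate, up, and down projections
norms = 2 * hidden                       # two RMSNorms per layer

total = embed + layers * (attn + mlp + norms) + hidden  # + final norm
print(f"{total:,}")                      # → 73,594,368 ≈ 73.6M
```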
## Training
Trained for 15,500 steps (~12 hours) on a single NVIDIA RTX 5070 Ti.

| Metric | Start | End |
|--------|-------|-----|
| Train Loss | 12.0 | 2.4 |
| Val Loss | 6.5 | 2.6 |
## Quick Start with smolagents
See [`inference_smolagent.py`](inference_smolagent.py) for full agent setup with LocalPythonExecutor and tools.
```python
from inference_smolagent import create_agent, CalculatorTool, FibonacciTool

agent = create_agent(
    model_id="AutomatedScientist/pynb-73m-base",
    tools=[CalculatorTool(), FibonacciTool()],
    max_steps=5,
)
result = agent.run("Calculate 15 * 7 + 23")
print(result)
```
Or with a Hugging Face Inference API model:
```python
from smolagents import CodeAgent, HfApiModel
model = HfApiModel(model_id="AutomatedScientist/pynb-73m-base")
agent = CodeAgent(tools=[], model=model)
result = agent.run("Calculate the sum of numbers from 1 to 100")
print(result)
```
## Local Inference
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AutomatedScientist/pynb-73m-base"  # or a local checkpoint path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Write a function to calculate fibonacci numbers"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Inference Script
See [`inference.py`](inference.py) for a wrapper class:
```python
from inference import CodeModel
model = CodeModel("AutomatedScientist/pynb-73m-base")
result = model.generate("Write a function to sort a list")
print(result)
```
## Installation
```bash
pip install torch transformers smolagents
```
## Limitations
- Small model (73M parameters) with limited reasoning capacity compared to larger models
- Context window limited to 2,048 tokens, so short prompts work best
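Given the 2,048-token window, one way to keep long prompts within budget is to drop the oldest token ids and keep the most recent context. A minimal sketch (`truncate_to_context` is a hypothetical helper, not part of this repo):

```python
def truncate_to_context(input_ids, max_len=2048):
    """Keep the most recent max_len token ids so the prompt fits the window."""
    return input_ids[-max_len:]

ids = list(range(3000))        # stand-in for token ids from a long prompt
trimmed = truncate_to_context(ids)
print(len(trimmed))            # → 2048
print(trimmed[0])              # → 952 (the oldest 952 ids were dropped)
```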
## License
Apache 2.0