Instructions for using junzzhu/atomllama-33K-5x5-DigitMesh with libraries, inference providers, and local apps.
- Libraries
- Transformers
How to use junzzhu/atomllama-33K-5x5-DigitMesh with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="junzzhu/atomllama-33K-5x5-DigitMesh")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("junzzhu/atomllama-33K-5x5-DigitMesh")
model = AutoModelForCausalLM.from_pretrained("junzzhu/atomllama-33K-5x5-DigitMesh")
- Local Apps
- vLLM
How to use junzzhu/atomllama-33K-5x5-DigitMesh with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "junzzhu/atomllama-33K-5x5-DigitMesh"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "junzzhu/atomllama-33K-5x5-DigitMesh",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker
docker model run hf.co/junzzhu/atomllama-33K-5x5-DigitMesh
- SGLang
How to use junzzhu/atomllama-33K-5x5-DigitMesh with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "junzzhu/atomllama-33K-5x5-DigitMesh" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "junzzhu/atomllama-33K-5x5-DigitMesh",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "junzzhu/atomllama-33K-5x5-DigitMesh" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "junzzhu/atomllama-33K-5x5-DigitMesh",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

- Docker Model Runner
How to use junzzhu/atomllama-33K-5x5-DigitMesh with Docker Model Runner:
docker model run hf.co/junzzhu/atomllama-33K-5x5-DigitMesh
AtomLlama-33K-5x5-DigitMesh
A minimal 33K-parameter language model for 5×5 digit mesh recognition, built on the LlamaForCausalLM architecture.
Model Description
AtomLlama-33K-5x5-DigitMesh is an ultra-lightweight causal language model designed for efficient digit recognition from 5×5 binary mesh patterns. With only 33,000 parameters, this "atom-sized" model demonstrates effective pattern recognition with minimal computational resources.
Key Specifications
- Architecture: LlamaForCausalLM
- Parameters: ~33K
- Input: 5×5 binary mesh (25 tokens)
- Output: Digit tokens (D0-D9)
- Vocabulary Size: 14 tokens
- Context Length: 32 tokens
- Hidden Size: 32, Layers: 2, Attention Heads: 4
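The ~33K figure can be sanity-checked with back-of-the-envelope Llama parameter counting from the specs above. Note that the MLP intermediate size (128 here) and untied input/output embeddings are assumptions, since the card does not state them:

```python
# Rough parameter count for the stated config (hidden=32, layers=2, vocab=14).
# intermediate_size=128 and untied embeddings are assumptions, not from the card.
hidden, layers, vocab, inter = 32, 2, 14, 128

embed = vocab * hidden          # token embedding table
attn = 4 * hidden * hidden      # q, k, v, o projections (no bias)
mlp = 3 * hidden * inter        # gate, up, down projections
norms = 2 * hidden              # two RMSNorm weights per layer
per_layer = attn + mlp + norms

# Add final RMSNorm and an untied lm_head on top of the layer stack.
total = embed + layers * per_layer + hidden + vocab * hidden
print(total)  # 33824, i.e. ~33K
```

Under these assumptions the count lands at 33,824, consistent with the advertised ~33K; a tied lm_head would shave off another 448 weights and still round to 33K.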
Quick Start
Serving with vLLM
python -m vllm.entrypoints.openai.api_server \
--model junzzhu/atomllama-33K-5x5-DigitMesh \
--max-model-len 32
Test Patterns
Example: Testing Digit 0
curl http://localhost:8000/v1/completions \
-H 'Content-Type: application/json' \
-d '{
"model": "junzzhu/atomllama-33K-5x5-DigitMesh",
"prompt": "1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 <SEP>",
"max_tokens": 1,
"temperature": 0
}'
Expected output: D0
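The same request can be made from Python with only the standard library; this is a sketch that assumes the vLLM server above is running on localhost:8000, and `build_request` is an illustrative helper name, not part of any library:

```python
import json
import urllib.request

# Build a request for the OpenAI-compatible /v1/completions endpoint.
def build_request(prompt, url="http://localhost:8000/v1/completions"):
    payload = {
        "model": "junzzhu/atomllama-33K-5x5-DigitMesh",
        "prompt": prompt,
        "max_tokens": 1,
        "temperature": 0,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Digit-0 pattern from the curl example above.
req = build_request("1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 <SEP>")

# With a live server, send it and read the predicted digit token:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])  # expected: D0
```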
Test Patterns for Other Digits
Replace the prompt value in the curl command above with these patterns:
- Digit 1: "0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 <SEP>" → Expected: D1
- Digit 2: "1 1 1 1 0 0 0 0 1 1 0 1 1 1 0 1 1 0 0 0 1 1 1 1 1 <SEP>" → Expected: D2
- Digit 3: "1 1 1 1 0 0 0 0 1 1 0 1 1 1 0 0 0 0 1 1 1 1 1 1 0 <SEP>" → Expected: D3
- Digit 4: "1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 <SEP>" → Expected: D4
- Digit 5: "1 1 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 <SEP>" → Expected: D5
- Digit 6: "1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 0 0 0 1 1 1 1 1 1 <SEP>" → Expected: D6
- Digit 7: "1 1 1 1 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 <SEP>" → Expected: D7
- Digit 8: "0 1 1 1 0 1 0 0 0 1 0 1 1 1 0 1 0 0 0 1 0 1 1 1 0 <SEP>" → Expected: D8
- Digit 9: "1 1 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 <SEP>" → Expected: D9
Input Format
The model expects 25 space-separated binary values (0 or 1) representing a 5×5 grid, followed by <SEP>:
[5 values] [5 values] [5 values] [5 values] [5 values] <SEP>
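A small helper makes it easy to build that prompt from a nested 5×5 list; `mesh_to_prompt` is an illustrative name, not part of the model's tooling:

```python
# Flatten a 5x5 binary grid row by row into the prompt format the model
# expects: 25 space-separated bits followed by <SEP>.
def mesh_to_prompt(grid):
    assert len(grid) == 5 and all(len(row) == 5 for row in grid)
    bits = " ".join(str(b) for row in grid for b in row)
    return f"{bits} <SEP>"

# The digit-1 test pattern from the list above, as a grid.
digit_one = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
print(mesh_to_prompt(digit_one))
# -> 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 <SEP>
```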
Use Cases
- Educational demonstrations of minimal transformers
- Resource-constrained digit recognition
- Model compression research
- Pattern recognition proof-of-concepts
Limitations
- Fixed 5×5 resolution only
- Binary patterns only (no grayscale)
- No rotation or scaling support
- Digits 0-9 only
License
Apache-2.0
Citation
@misc{atomllama-33k-digitMesh,
title={AtomLlama-33K-5x5-DigitMesh: A Minimal Parameter Model for Digit Recognition},
author={Jun Zhu},
year={2026},
howpublished={\url{https://huggingface.co/junzzhu/atomllama-33K-5x5-DigitMesh/}}
}