Tags: Text Generation · GGUF · English · python · codegen · markdown · smol_llama · ggml · quantized · q2_k · q3_k_m · q4_k_m · q5_k_m · q6_k · q8_0
## How to use from llama.cpp

### Install from WinGet (Windows)

```sh
winget install llama.cpp
```

### Install from brew (macOS/Linux)

```sh
brew install llama.cpp
```

Once installed, start a local OpenAI-compatible server with a web UI:

```sh
llama-server -hf afrideva/beecoder-220M-python-GGUF
```

Or run inference directly in the terminal:

```sh
llama-cli -hf afrideva/beecoder-220M-python-GGUF
```

### Use a pre-built binary

Download a pre-built binary from https://github.com/ggerganov/llama.cpp/releases, then run it the same way:

```sh
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf afrideva/beecoder-220M-python-GGUF

# Run inference directly in the terminal:
./llama-cli -hf afrideva/beecoder-220M-python-GGUF
```

### Build from source code

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf afrideva/beecoder-220M-python-GGUF

# Run inference directly in the terminal:
./build/bin/llama-cli -hf afrideva/beecoder-220M-python-GGUF
```

### Use Docker

```sh
docker model run hf.co/afrideva/beecoder-220M-python-GGUF
```
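Once `llama-server` is running, any OpenAI-compatible client can talk to it. The sketch below is a minimal illustration (not from the original card) that queries the server's `/v1/completions` endpoint using only Python's standard library, assuming the default address `http://127.0.0.1:8080`; the prompt and sampling settings are illustrative only.

```python
# Minimal sketch: query a running llama-server via its OpenAI-compatible API.
# Assumes the server is listening on the default http://127.0.0.1:8080.
import json
import urllib.request

payload = {
    "prompt": 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n',
    "max_tokens": 64,    # short completions suit a 220M model
    "temperature": 0.2,  # low temperature for code generation
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)
print(result["choices"][0]["text"])
```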
# BEE-spoke-data/beecoder-220M-python-GGUF

Quantized GGUF model files for beecoder-220M-python from BEE-spoke-data.
| Name | Quant method | Size |
|---|---|---|
| beecoder-220m-python.fp16.gguf | fp16 | 436.50 MB |
| beecoder-220m-python.q2_k.gguf | q2_k | 94.43 MB |
| beecoder-220m-python.q3_k_m.gguf | q3_k_m | 114.65 MB |
| beecoder-220m-python.q4_k_m.gguf | q4_k_m | 137.58 MB |
| beecoder-220m-python.q5_k_m.gguf | q5_k_m | 157.91 MB |
| beecoder-220m-python.q6_k.gguf | q6_k | 179.52 MB |
| beecoder-220m-python.q8_0.gguf | q8_0 | 232.28 MB |
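To fetch a single quantized file rather than the whole repo, one option (an assumption on my part; the original card does not prescribe a download method) is the `huggingface_hub` Python package. The snippet below grabs the q4_k_m file from the table above:

```python
# Minimal sketch: download one GGUF file from the Hub.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="afrideva/beecoder-220M-python-GGUF",
    filename="beecoder-220m-python.q4_k_m.gguf",  # any file from the table works
)
print(path)  # local cache path of the downloaded model
```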
## Original Model Card: BEE-spoke-data/beecoder-220M-python
This is BEE-spoke-data/smol_llama-220M-GQA fine-tuned for code generation on:

- a filtered version of stack-smol-XL
- a deduped version of the 'algebraic stack' from proof-pile-2
- cleaned and deduped pypi (the last dataset)

This model (and the base model) were both trained with a context length of 2048 tokens.
### Examples
Example script for inference testing: here
It has its limitations at 220M parameters, but it seems decent for single-line or docstring generation, and/or for use as a draft model in speculative decoding for such purposes.

(Screenshot in the original card: inference running on a laptop CPU.)
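For completeness, here is a hedged local-inference sketch using the llama-cpp-python bindings (an assumption; the original card does not prescribe a client). It loads the q4_k_m file downloaded earlier and sets the context window to the 2048-token training length noted above:

```python
# Minimal sketch: local inference with llama-cpp-python.
# Requires: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="beecoder-220m-python.q4_k_m.gguf",  # path from hf_hub_download above
    n_ctx=2048,  # matches the model's training context length
)
out = llm("def is_prime(n):", max_tokens=64, temperature=0.2, stop=["\ndef "])
print(out["choices"][0]["text"])
```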
## Model tree for afrideva/beecoder-220M-python-GGUF

- Base model: BEE-spoke-data/smol_llama-220M-GQA
- Fine-tuned: BEE-spoke-data/beecoder-220M-python