# amkyawdev/amkyaw-dev-v1

## How to use with llama.cpp

### Install from WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf amkyawdev/amkyaw-dev-v1

# Run inference directly in the terminal:
llama-cli -hf amkyawdev/amkyaw-dev-v1
```

### Install from brew (macOS/Linux)

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf amkyawdev/amkyaw-dev-v1

# Run inference directly in the terminal:
llama-cli -hf amkyawdev/amkyaw-dev-v1
```

### Use a pre-built binary

```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf amkyawdev/amkyaw-dev-v1

# Run inference directly in the terminal:
./llama-cli -hf amkyawdev/amkyaw-dev-v1
```

### Build from source

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf amkyawdev/amkyaw-dev-v1

# Run inference directly in the terminal:
./build/bin/llama-cli -hf amkyawdev/amkyaw-dev-v1
```

### Use Docker

```sh
docker model run hf.co/amkyawdev/amkyaw-dev-v1
```
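Once `llama-server` is running via any of the options above, it exposes an OpenAI-compatible HTTP API in addition to the web UI. The sketch below assumes the server's default bind address (`http://127.0.0.1:8080`); adjust the URL if you started it with a different `--host`/`--port`.

```python
import requests

# Minimal chat-completion request against a local llama-server instance.
resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "model": "amkyaw-dev-v1",  # llama-server serves whatever model it was started with
        "messages": [
            {"role": "user", "content": "Write a Python function to calculate factorial"},
        ],
        "temperature": 0.8,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```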
## Model Overview
- Model Name: amkyaw-coder-1.5b-instruct
- Type: Code Generation / Instruction Following
- Size: 1.5B parameters
- Format: GGUF (quantized)
## Quick Start

```sh
# Run the model
ollama run amkyawdev/amkyaw-dev-v1

# Or run with specific tag
ollama run amkyawdev/amkyaw-dev-v1:latest
```
## Features
- Code generation
- Instruction following
- Burmese language support
- English language support
## System Requirements
- Ollama installed
- At least 2GB RAM available
- No GPU required (runs on CPU)
## Configuration
| Parameter | Value |
|---|---|
| Temperature | 0.8 |
| Top P | 0.9 |
| Top K | 40 |
| Context Length | 4096 |
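These are the model's default sampling settings. When using the Ollama Python client, you can override them per request through the `options` field; a minimal sketch that simply passes the table's values explicitly (using Ollama's option keys `temperature`, `top_p`, `top_k`, and `num_ctx` for context length) looks like this:

```python
import ollama

# Pass sampling options explicitly; these mirror the defaults in the table above.
response = ollama.generate(
    model='amkyawdev/amkyaw-dev-v1',
    prompt='Write a Python function to calculate factorial',
    options={
        'temperature': 0.8,
        'top_p': 0.9,
        'top_k': 40,
        'num_ctx': 4096,
    },
)
print(response['response'])
```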
## Usage Examples

```python
import ollama

response = ollama.generate(
    model='amkyawdev/amkyaw-dev-v1',
    prompt='Write a Python function to calculate factorial'
)
print(response['response'])
```
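For multi-turn or interactive use, the Ollama Python client also provides a chat interface with optional streaming. A minimal sketch (the prompt text is just an illustration):

```python
import ollama

# Stream the reply token by token instead of waiting for the full completion.
stream = ollama.chat(
    model='amkyawdev/amkyaw-dev-v1',
    messages=[{'role': 'user', 'content': 'Explain what a factorial is in one sentence.'}],
    stream=True,
)
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
print()
```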
## License

See the Hugging Face model repository for license information.
## Troubleshooting

If you encounter issues:

- Make sure Ollama is running: `ollama serve`
- Check that the model is installed: `ollama list` (a programmatic check from Python is sketched below)
- Try restarting Ollama: `pkill ollama && ollama serve`
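If the CLI checks pass but a script still fails, you can confirm from Python that the client reaches the daemon and sees the model. This sketch uses `ollama.show`, which raises `ollama.ResponseError` when the model is not installed; a connection error instead means the daemon itself is not running (`ollama serve`).

```python
import ollama

try:
    # Succeeds only if the Ollama daemon is reachable and the model is installed.
    ollama.show('amkyawdev/amkyaw-dev-v1')
    print('Ollama is running and the model is available.')
except ollama.ResponseError:
    print('Daemon reachable, but the model is missing; pulling it now.')
    ollama.pull('amkyawdev/amkyaw-dev-v1')
```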