# Pauper Llama 3 8B

Fine-tuned version of `meta-llama/Meta-Llama-3-8B-Instruct`, specialized for Magic: The Gathering's Pauper format using LoRA fine-tuning.
## Available Formats

This repository contains both the full HuggingFace model and GGUF quantizations for various use cases.
### HuggingFace Transformers (Full Precision)

Best suited for:

- Further fine-tuning
- Maximum-quality inference
- Integration with the `transformers` library
### GGUF Quantized Models (llama.cpp compatible)

Best suited for:

- LM Studio, Ollama, llama.cpp
- Local inference on consumer hardware
- Faster inference with minimal quality loss
| File | Size | Description | Best For |
|---|---|---|---|
| `gguf/pauper_llama3_q4km.gguf` | ~5 GB | 4-bit quantized (Q4_K_M) | **Recommended** - best balance |
| `gguf/pauper_llama3_q5km.gguf` | ~6 GB | 5-bit quantized (Q5_K_M) | Better quality |
| `gguf/pauper_llama3_q8.gguf` | ~8 GB | 8-bit quantized (Q8_0) | Near-original quality |
| `gguf/pauper_llama3_fp16.gguf` | ~15 GB | Full precision (FP16) | Maximum quality |
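The file sizes above can be sanity-checked with a rough estimate: file size ≈ parameter count × bits per weight ÷ 8. The bits-per-weight figures in the sketch below are approximate effective averages for each llama.cpp quantization type (an assumption, since llama.cpp mixes quantization schemes across tensors), not exact values.

```python
# Rough size estimate for an 8B-parameter model at each quantization level.
# Bits-per-weight values are approximate effective averages (assumption).
params = 8.0e9
approx_bits_per_weight = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5, "FP16": 16.0}

for name, bits in approx_bits_per_weight.items():
    size_gb = params * bits / 8 / 1e9  # bytes -> GB (decimal)
    print(f"{name}: ~{size_gb:.1f} GB")
```

These estimates land close to the table's values; small differences come from metadata, the embedding/output layers, and mixed quantization within a file.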
## Usage

### Option 1: HuggingFace Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the fine-tuned model in half precision, sharded across available devices
model = AutoModelForCausalLM.from_pretrained(
    "nmalinowski/pauper-llama3-8b",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("nmalinowski/pauper-llama3-8b")

prompt = "What are the best cards in Pauper?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Option 2: LM Studio (GGUF - Easiest!)

1. Download `gguf/pauper_llama3_q4km.gguf` from the Files tab
2. Open LM Studio → Load Model
3. Select the downloaded GGUF file
4. Start chatting about Pauper!
### Option 3: llama.cpp

```bash
# Download the quantized model (keeps the repo's gguf/ path under --local-dir)
huggingface-cli download nmalinowski/pauper-llama3-8b gguf/pauper_llama3_q4km.gguf --local-dir ./

# Run inference
./llama-cli -m gguf/pauper_llama3_q4km.gguf \
  -p "What are the top Pauper decks in the current meta?" \
  -n 256 \
  --temp 0.7
```
### Option 4: Ollama

```bash
# Create a Modelfile
cat > Modelfile <<EOF
FROM ./gguf/pauper_llama3_q4km.gguf
PARAMETER temperature 0.7
PARAMETER top_p 0.9
SYSTEM "You are an expert on Magic: The Gathering's Pauper format."
EOF

# Create and run the model
ollama create pauper-llama3 -f Modelfile
ollama run pauper-llama3 "Explain the current Pauper meta"
```
## Training Details

- **Base Model**: Llama 3 8B Instruct
- **Training Method**: LoRA (Low-Rank Adaptation)
- **Domain**: Magic: The Gathering - Pauper format
- **LoRA Configuration**:
  - Rank: 16
  - Alpha: 32
  - Target modules: `q_proj`, `v_proj`
  - Dropout: 0.05
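For illustration, here are the hyperparameters above collected as the keyword arguments one would pass to `peft.LoraConfig`. This is a sketch only; the actual training script and dataset are not part of this card.

```python
# LoRA hyperparameters from the Training Details above, as the kwargs
# one would pass to peft.LoraConfig (sketch only - the actual training
# script is not included in this card).
lora_kwargs = {
    "r": 16,                                  # LoRA rank
    "lora_alpha": 32,                         # scaling alpha
    "target_modules": ["q_proj", "v_proj"],   # attention projections adapted
    "lora_dropout": 0.05,
    "task_type": "CAUSAL_LM",
}

# PEFT scales the LoRA update by alpha / r
scaling = lora_kwargs["lora_alpha"] / lora_kwargs["r"]
print(scaling)  # 2.0
```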
## Recommendations

- **For most users**: Download `gguf/pauper_llama3_q4km.gguf` and use it with LM Studio
- **For best quality**: Use the full HuggingFace model with `transformers`
- **For low VRAM**: Use the Q4_K_M quantization (~5 GB)
- **For high VRAM**: Use Q8_0 or FP16 for better quality
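These recommendations can be expressed as a tiny, hypothetical helper that picks a GGUF file from the memory you have available. The thresholds are assumptions derived from the approximate file sizes listed in this card, with some headroom left for the context/KV cache.

```python
def pick_quant(mem_gb: float) -> str:
    """Hypothetical helper: choose a GGUF file from this repo based on
    available memory in GB. Thresholds are assumptions derived from the
    approximate file sizes above, with headroom for the KV cache."""
    if mem_gb >= 18:
        return "gguf/pauper_llama3_fp16.gguf"
    if mem_gb >= 11:
        return "gguf/pauper_llama3_q8.gguf"
    if mem_gb >= 8:
        return "gguf/pauper_llama3_q5km.gguf"
    return "gguf/pauper_llama3_q4km.gguf"

print(pick_quant(16))  # gguf/pauper_llama3_q8.gguf
```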
## Performance

The Q4_K_M quantization offers:

- ~95% of full-precision quality
- ~70% smaller file size than FP16
- Faster inference on CPU and GPU
- Runs on consumer hardware (16 GB RAM recommended)
## Example Prompts

- "What are the best removal spells in Pauper?"
- "Build me a Pauper deck around Monastery Swiftspear"
- "Explain the differences between Affinity and Elves in Pauper"
- "What are the current tier 1 Pauper decks?"
## Limitations

- Specialized for the Pauper format - may not perform well on other MTG formats
- May occasionally hallucinate card names or abilities
- Knowledge cutoff: January 2025
- Not suitable for medical, legal, or financial advice
## License

This model inherits the Llama 3 Community License from Meta. See LICENSE for details.
## Acknowledgments

- Base model: Meta's Llama 3 8B Instruct
- Training framework: HuggingFace Transformers + PEFT
- Quantization: llama.cpp
## Issues & Feedback

If you encounter issues or have suggestions, please open an issue on the Community tab.