# Llama-3.2-3B-Dracula Fine-tuned Model

A Llama 3.2 3B model fine-tuned on character dialogues from Bram Stoker's novel *Dracula*.
## Model Details
- Base Model: Llama 3.2 3B Instruct
- Fine-tuning: LoRA on character dialogues
- Quantization: Q4_K_M (GGUF format)
- Size: 1.9 GB
- Use Case: Character-based conversational AI
- Characters: Dracula, Mina Harker, Van Helsing, Jonathan Harker, Lucy Westenra, Dr. Seward
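The card does not document the prompt format used during fine-tuning. As a hypothetical sketch, assuming the standard Llama 3 instruct chat template, a per-character system prompt could be assembled like this (the persona wording is illustrative, not the actual training prompt):

```python
# Hypothetical persona prompts -- the system prompts actually used in
# fine-tuning are not documented in this card.
PERSONAS = {
    "Dracula": "You are Count Dracula. Speak in formal, archaic Victorian English.",
    "Van Helsing": "You are Professor Abraham Van Helsing, a learned Dutch doctor.",
}

def build_prompt(character: str, user_message: str) -> str:
    """Assemble a Llama 3 instruct-style prompt for one character."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{PERSONAS[character]}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("Dracula", "Tell me about your castle.")
```

The resulting string can then be passed directly as the prompt in the llama-cpp-python call shown below.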
## Usage

### With llama-cpp-python
```python
from llama_cpp import Llama

# Load the quantized model (set n_gpu_layers=0 for CPU-only inference)
llm = Llama(
    model_path="llama-3.2-3b-dracula-q4_k_m.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

# Generate a response
response = llm(
    "Tell me about your castle.",
    max_tokens=400,
    temperature=0.6,
    stop=["\n\n", "Human:", "User:"],
)
print(response["choices"][0]["text"])
```
### Download from the Hub
```python
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Priyanks27/llama-3.2-3b-dracula",
    filename="llama-3.2-3b-dracula-q4_k_m.gguf",
)
```
## Training Details
- Dataset: Character dialogues extracted from Dracula novel
- Method: LoRA fine-tuning
- Base: Llama 3.2 3B Instruct
- Quantization: Q4_K_M, a good size/quality trade-off
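LoRA freezes the base weights and trains only a low-rank additive update, which is what keeps fine-tuning a 3B model cheap. A minimal NumPy sketch of the idea (all shapes and hyperparameters below are illustrative, not the ones used for this model):

```python
import numpy as np

# Sketch of the LoRA update: the frozen base weight W (d x k) is adapted
# by a low-rank term (alpha / r) * B @ A, where A (r x k) and B (d x r)
# are the only trained parameters and r << min(d, k).
d, k, r, alpha = 64, 64, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen base weight
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, init to 0

W_adapted = W + (alpha / r) * B @ A

# Because B starts at zero, the adapted model initially behaves exactly
# like the base model; training moves it away from that point gradually.
assert np.allclose(W_adapted, W)
```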
## Performance

- Faithfulness: 5/5 (no hallucinations observed on novel content)
- Character Consistency: distinct personalities maintained across characters
- Response Quality: period-appropriate Victorian-era language
- Speed: ~10 tokens/second on CPU
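A throughput figure like the one above can be measured with a small timing harness. In the sketch below, `generate` is a stand-in for a wrapper around the real model call, e.g. one that runs `llm(prompt, max_tokens=...)` and returns `response["usage"]["completion_tokens"]` (the key names are assumed from llama-cpp-python's completion response):

```python
import time
from typing import Callable

def tokens_per_second(generate: Callable[[str], int], prompt: str, runs: int = 3) -> float:
    """Average generation throughput over several timed runs.

    `generate` should run one completion and return the number of
    tokens it produced.
    """
    rates = []
    for _ in range(runs):
        start = time.perf_counter()
        n_tokens = generate(prompt)
        rates.append(n_tokens / (time.perf_counter() - start))
    return sum(rates) / len(rates)

def fake_generate(prompt: str) -> int:
    """Dummy generator standing in for the real model call."""
    time.sleep(0.01)  # simulate generation time
    return 40

rate = tokens_per_second(fake_generate, "Tell me about your castle.")
```

Swap `fake_generate` for a real wrapper around the loaded model to reproduce the CPU figure on your own hardware.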
## Use in a Hugging Face Space

This model powers the Dracula Character Chat Space.
## License
Apache 2.0
## Citation

```bibtex
@misc{dracula-chatbot-2025,
  author       = {Priyanks27},
  title        = {Llama-3.2-3B-Dracula Fine-tuned Model},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Priyanks27/llama-3.2-3b-dracula}}
}
```
## Acknowledgments
- Base model: Meta Llama 3.2
- Novel: Bram Stoker's Dracula (1897)
- Framework: llama.cpp, llama-cpp-python