---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: llama.cpp
pipeline_tag: text-generation
tags:
- tinyllama
- email-reply
- gguf
- ollama
- local-ai
license: mit
language:
- en
---
# TinyLlama Email Reply Generator

A fine-tuned TinyLlama model that generates professional email replies from incoming emails. The LoRA adapter was trained on the Enron Email Reply Dataset to learn professional communication patterns.
## Model Overview

- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Format: GGUF (Q4_K_M quantization)
- Size: ~667 MB
- Task: Email reply generation
- Language: English
## Quick Start with GGUF

### Using Ollama (Recommended)

```bash
# Download the model from Hugging Face
huggingface-cli download AshankGupta/tinyllama-email-reply tinyllama-chat.Q4_K_M.gguf --local-dir ./model

# Or download directly
curl -L -o model.gguf "https://huggingface.co/AshankGupta/tinyllama-email-reply/resolve/main/tinyllama-chat.Q4_K_M.gguf"

# Create the Ollama model from a Modelfile
ollama create tinyllama-email-reply -f Modelfile

# Run
ollama run tinyllama-email-reply
```
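The `ollama create` step expects a `Modelfile` next to the downloaded GGUF file. A minimal sketch, assuming the `model.gguf` path from the download command above and the prompt format this card describes (adjust the `FROM` path and parameters to your setup):

```
FROM ./model.gguf

PARAMETER temperature 0.7
PARAMETER num_ctx 1024

TEMPLATE """You are an AI assistant that writes professional email replies.
Email:
{{ .Prompt }}
Reply:"""
```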
### Using llama.cpp

```bash
# Download the GGUF file from Hugging Face
curl -L -o tinyllama-email-reply.Q4_K_M.gguf \
  "https://huggingface.co/AshankGupta/tinyllama-email-reply/resolve/main/tinyllama-chat.Q4_K_M.gguf"

# Run inference
./llama.cpp/llama-cli -m tinyllama-email-reply.Q4_K_M.gguf -p "Write a professional email reply to: Can you send the invoice by tomorrow?"
```
### Using Python

```python
from llama_cpp import Llama

model = Llama(
    model_path="tinyllama-chat.Q4_K_M.gguf",
    n_ctx=1024,
)

prompt = """You are an AI assistant that writes professional email replies.
Email:
Can you send the invoice by tomorrow?
Reply:"""

output = model(prompt, max_tokens=120)
print(output["choices"][0]["text"])
```
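For application code, it can help to separate prompt construction from inference. A small sketch along those lines, reusing the prompt format above — the file path, sampling settings, and `stop` sequence are assumptions, not part of this card:

```python
import os
from typing import Optional

# Prompt template mirroring the format used in the example above.
PROMPT_TEMPLATE = (
    "You are an AI assistant that writes professional email replies.\n"
    "Email:\n{email}\nReply:"
)


def build_prompt(email: str) -> str:
    """Format an incoming email into the model's expected prompt."""
    return PROMPT_TEMPLATE.format(email=email.strip())


def generate_reply(model_path: str, email: str, max_tokens: int = 120) -> Optional[str]:
    """Run the local GGUF model if present; return the generated reply text."""
    if not os.path.exists(model_path):
        return None  # model not downloaded yet
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, n_ctx=1024, verbose=False)
    # Stop at "Email:" so the model does not start hallucinating a new email.
    out = llm(build_prompt(email), max_tokens=max_tokens, stop=["Email:"])
    return out["choices"][0]["text"].strip()
```

Keeping the template in one place makes it easy to keep inference prompts consistent with the training format.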
## Intended Use

This model is designed for:
- Email reply suggestion systems
- AI productivity tools
- Email assistants
- Local AI workflows
- Research on small language models
## Training Dataset

The model was trained on the Enron Email Reply Dataset, which contains real-world corporate email conversations.

Dataset characteristics:

- ~15,000 email-reply pairs
- Business and professional communication
- Cleaned and formatted into instruction-style prompts

Training format example:

```
Instruction:
Generate a professional email reply.

Email:
Can you send the project report by tomorrow?

Reply:
Sure, I will send the report by tomorrow.
```
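The cleaning step that produces this format can be sketched as follows. This is a hypothetical illustration: the record field names (`email`, `reply`) are assumptions about the cleaned dataset, not a documented schema.

```python
def to_training_example(pair: dict) -> str:
    """Render one email-reply pair in the instruction format shown above."""
    # "email" and "reply" are assumed field names for the cleaned pairs.
    return (
        "Instruction:\n"
        "Generate a professional email reply.\n\n"
        f"Email:\n{pair['email'].strip()}\n\n"
        f"Reply:\n{pair['reply'].strip()}"
    )


pairs = [
    {
        "email": "Can you send the project report by tomorrow?",
        "reply": "Sure, I will send the report by tomorrow.",
    }
]
examples = [to_training_example(p) for p in pairs]
print(examples[0].splitlines()[0])  # prints "Instruction:"
```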
## Training Details

- Fine-tuning technique: LoRA
- Training framework: Unsloth
- Sequence length: 512 tokens
- Optimizer: AdamW
- Base architecture: TinyLlama 1.1B
- Quantization: Q4_K_M
## Example

Input email:

```
Hi,
Can you send the invoice by tomorrow?
```

Generated reply:

```
Sure, I will send the invoice by tomorrow.
```
## Limitations

- The model may produce generic replies.
- Performance is limited by the small size of the base model.
- It may occasionally generate repetitive outputs.
- Not suitable for sensitive or confidential communications.
## License

This model follows the license of the base model, TinyLlama/TinyLlama-1.1B-Chat-v1.0. Please review the base model's license before commercial use.
## Acknowledgements

- TinyLlama team for the base model
- Unsloth for efficient LoRA training
- Enron Email Dataset for training data