# Gemma 3 1B - Sales Conversation Fine-tuned
Fine-tuned on 100K B2B sales conversations for objection handling and deal progression.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "convaiinnovations/gemma3-fine-tuned", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("convaiinnovations/gemma3-fine-tuned")

messages = [{"role": "user", "content": "Customer: It's too expensive. How to respond?"}]

# Build the chat prompt and generate a response
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
)
outputs = model.generate(inputs.to(model.device), max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training
- Base: `unsloth/gemma-3-1b-it`
- LoRA: r=64, alpha=32
- Epochs: 3
- Data: 100K synthetic sales conversations
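For intuition on what the LoRA settings above mean, here is a minimal sketch of the standard LoRA arithmetic: with rank r=64, each adapted `d_out × d_in` weight gains a low-rank update scaled by `alpha / r`, adding `r * (d_in + d_out)` trainable parameters. The layer dimension used below is a hypothetical example, not taken from the model config.

```python
# LoRA adds two small matrices per adapted weight W (d_out x d_in):
#   A: r x d_in, B: d_out x r, applied as W + (alpha / r) * B @ A
def lora_extra_params(d_in: int, d_out: int, r: int = 64) -> int:
    """Trainable parameters LoRA adds to one weight matrix."""
    return r * (d_in + d_out)

def lora_scaling(alpha: int = 32, r: int = 64) -> float:
    """Scale factor applied to the low-rank update."""
    return alpha / r

# Hypothetical 2048 x 2048 projection at r=64:
print(lora_extra_params(2048, 2048))  # 262144
print(lora_scaling())                 # 0.5
```

With alpha=32 and r=64 the update is scaled by 0.5, so the adapter's effective contribution is halved relative to an unscaled low-rank product.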
## License
Apache 2.0 | ConvAI Innovations 2026