Gemma 3 1B - Sales Conversation Fine-tuned

Fine-tuned on 100K synthetic B2B sales conversations for objection handling and deal progression.

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "convaiinnovations/gemma3-fine-tuned", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("convaiinnovations/gemma3-fine-tuned")

# Build the chat prompt and generate a response
messages = [{"role": "user", "content": "Customer: It's too expensive. How to respond?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
outputs = model.generate(inputs.to(model.device), max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Training

  • Base: unsloth/gemma-3-1b-it
  • LoRA: r=64, alpha=32
  • Epochs: 3
  • Data: 100K synthetic sales conversations

License

Apache 2.0 | ConvAI Innovations 2026

Format: Safetensors · 1.0B params · BF16