# 🎯 Gemma Sales Comprehensive

A Gemma 1.1 2B model fine-tuned specifically for sales conversations and customer communication.
## 📊 Model Details

- Base Model: google/gemma-1.1-2b-it
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Adapter Size: 22.3 MB (adapter weights only)
- LoRA Rank: 8
- LoRA Alpha: 16
- Target Modules: q_proj, k_proj, v_proj, o_proj
- Languages: English and Turkish
## 🚀 Usage

### Installation

```bash
pip install transformers peft torch
```
### Loading the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model in half precision
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-1.1-2b-it",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the fine-tuned LoRA adapter on top of it
model = PeftModel.from_pretrained(
    base_model,
    "YOUR_USERNAME/gemma-sales-comprehensive",
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("YOUR_USERNAME/gemma-sales-comprehensive")
```
### Example Usage

```python
# Simple single-prompt generation
prompt = "How should I approach a potential customer?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=300,  # cap on newly generated tokens (max_length would count the prompt too)
    temperature=0.7,
    do_sample=True,
    top_p=0.9,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
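The example above passes the prompt as raw text. Since `gemma-1.1-2b-it` is instruction-tuned, wrapping the prompt in Gemma's chat-turn format, which `tokenizer.apply_chat_template` produces, typically yields better responses. A minimal sketch of that format (the helper name is illustrative, not part of this repo):

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a single user message in Gemma's chat-turn markup.

    Mirrors what tokenizer.apply_chat_template(..., add_generation_prompt=True)
    emits for a one-turn conversation; the tokenizer adds the leading <bos> itself.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("How should I approach a potential customer?")
```

The resulting `prompt` string can then be tokenized and passed to `model.generate` exactly as in the example above.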
### Interactive Chat

```python
while True:
    prompt = input("\nYour question (type 'quit' to exit): ")
    if prompt.lower() == "quit":
        break
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=300,
        temperature=0.7,
        do_sample=True,  # temperature only takes effect when sampling is enabled
    )
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(f"\nResponse: {response}")
```
## 💡 Use Cases
- ✅ Sales conversation strategies
- ✅ Handling customer objections
- ✅ Product/service presentations
- ✅ Customer relationship management
- ✅ Communication techniques
- ✅ Sales pitch generation
## 📈 Training Details

The model was fine-tuned on a specialized dataset containing sales scenarios and customer-communication examples.

### Training Hyperparameters
- LoRA Rank (r): 8
- LoRA Alpha: 16
- LoRA Dropout: 0.1
- Target Modules: q_proj, k_proj, v_proj, o_proj
- Task Type: Causal Language Modeling
## ⚙️ Technical Specifications

### Model Architecture
- Base: Gemma 1.1 2B Instruction-Tuned
- Adapter Type: LoRA (Low-Rank Adaptation)
- Adapter Size: ~22 MB
- Total Parameters (with base): ~2B
### Inference

```python
# Disable dropout and gradient tracking for faster inference
model.eval()
with torch.no_grad():
    outputs = model.generate(**inputs)
```
## ⚠️ Limitations

- Specialized for sales and customer-communication contexts; it may underperform the base model on general-purpose tasks
- Intended only for ethical, professional sales practices
- Responses should be reviewed before use in production
## 🔒 Safety & Ethics
This model is designed to assist with professional sales communication. It should:
- Promote honest and transparent sales practices
- Respect customer autonomy and consent
- Avoid manipulative or deceptive tactics
- Comply with relevant sales regulations and guidelines
## 📝 License

Apache 2.0
## 🤝 Contributors

[Add team member names here]

## 📧 Contact

For questions or feedback: [email/contact info]
## 🙏 Acknowledgments
- Built with Hugging Face Transformers
- Uses PEFT for efficient fine-tuning
- Based on Google's Gemma
> **Note:** This is a LoRA adapter and requires the base Gemma model to function. The adapter weights are ~22 MB, while the full model with base weights is ~5 GB.
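If you'd rather distribute a standalone checkpoint than the adapter, PEFT can fold the LoRA weights into the base model. A sketch, assuming `model` and `tokenizer` are loaded as shown in the Usage section (the output path is illustrative):

```python
# Assumes `model` is the PeftModel and `tokenizer` the tokenizer loaded earlier
merged_model = model.merge_and_unload()  # folds the LoRA deltas into the base layers
merged_model.save_pretrained("gemma-sales-merged")
tokenizer.save_pretrained("gemma-sales-merged")
```

The merged checkpoint no longer needs `peft` at load time, at the cost of storing the full ~5 GB of base weights.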