---
library_name: transformers
tags:
- unsloth
- lora
- gemma
- whatsapp
license: apache-2.0
language:
- es
base_model:
- unsloth/gemma-2-2b-it-bnb-4bit
---
# Model Card for WhatsApp-Finetuned LoRA (Gemma-2-2B-IT-4bit)

## Model Details

### Model Description
This is a LoRA adapter trained on personal WhatsApp conversations and applied on top of unsloth/gemma-2-2b-it-bnb-4bit, a 4-bit (bitsandbytes) quantization of the instruction-tuned Gemma 2 2B model. The adapter steers the base model toward the informal Spanish conversational style, slang, and context typical of WhatsApp chats.
- Developed by: Private (Boni)
- Model type: LoRA adapter for causal language modeling
- Language(s): Spanish (es)
- Finetuned from model: unsloth/gemma-2-2b-it-bnb-4bit
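
For reference, below is a minimal sketch of how an adapter like this is typically trained with Unsloth. The dataset file, LoRA rank/alpha, target modules, and training hyperparameters are illustrative assumptions, not the exact settings used for this adapter.

```python
# Hypothetical training sketch (Unsloth + TRL); all settings are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model that this adapter targets.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-2b-it-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA weights; r/alpha/target modules are common defaults, not confirmed.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# "whatsapp_chats.jsonl" is a hypothetical export of the WhatsApp dataset,
# with one formatted conversation per record under a "text" field.
dataset = load_dataset("json", data_files="whatsapp_chats.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
```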

### Model Sources
- Base model: unsloth/gemma-2-2b-it-bnb-4bit

## Uses

### Direct Use
- Chatbots and assistants that mimic WhatsApp-style Spanish conversations.
- Experimentation with low-rank adapters on personal datasets.

### Downstream Use
- Can be merged into the base model for standalone, fully fine-tuned inference (see the sketch below).
- Can be combined with other adapters for multi-domain behavior.
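
A minimal merge sketch using PEFT's `merge_and_unload`. Merging is usually done into a full-precision copy of the base weights, so loading `google/gemma-2-2b-it` in fp16 as the merge target is an assumption here, not a documented step for this adapter.

```python
# Hypothetical merge sketch: fold the LoRA deltas into full-precision weights.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Assumption: merge into the fp16 base rather than the bnb-4bit checkpoint.
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-2b-it", torch_dtype=torch.float16
)
merged = PeftModel.from_pretrained(base, "Jabr7/Mini-Boni")
merged = merged.merge_and_unload()  # returns a plain transformers model
merged.save_pretrained("gemma-2-2b-it-whatsapp-merged")
```

For combining adapters instead, PEFT's `load_adapter`/`set_adapter` methods let one base model hold several adapters and switch between them.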

### Out-of-Scope Use
- Production deployment without careful filtering (the dataset is personal, informal, and may not generalize).
- Sensitive domains like healthcare, law, or safety-critical applications.

## Bias, Risks, and Limitations
- The dataset consists of personal WhatsApp conversations, which may include biases, informal expressions, and idiosyncratic slang.
- The model may reflect a private communication style and does not guarantee factual correctness.
- The small training set means performance degrades outside the informal conversational domain.

### Recommendations
Users should treat outputs as experimental and avoid relying on this model in factual or professional contexts.

## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "unsloth/gemma-2-2b-it-bnb-4bit"
adapter = "Jabr7/Mini-Boni"

# Load the 4-bit base model (requires bitsandbytes) and attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)

prompt = "Hola, ¿cómo estás?"
# Use whichever device device_map="auto" chose, rather than hard-coding "cuda".
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
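
Since the base model is instruction-tuned, wrapping the prompt in Gemma's chat template (instead of passing raw text) generally produces cleaner responses. A minimal variant of the snippet above:

```python
# Format the prompt with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Hola, ¿cómo estás?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```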