How to use PabloBaeza/tinyllama-professional-etiquette-adapter with PEFT:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base_model, "PabloBaeza/tinyllama-professional-etiquette-adapter")
```
TinyLlama Professional Etiquette Assistant (LoRA Adapter)
Model Description
This is a QLoRA fine-tuned LoRA adapter for TinyLlama-1.1B-Chat-v1.0, trained to act as a professional etiquette expert. Its primary task is to transform rude, aggressive, or informal sentences into professional, polite, and corporate-appropriate communication.
- Developed by: PabloBaeza
- Model type: Causal Language Model (LoRA Adapter)
- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Task: Sentence Rephrasing / Professional Etiquette
Intended Use
The model is designed for corporate environments (e.g., Slack, Email, Customer Support) where technical staff or users need to rephrase direct or blunt messages into formal language.
Example:
- Input: "Shut up and do your job."
- Output: "I need to ensure that we’re all working towards our goals."
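When querying the adapter locally, the input sentence is typically wrapped in TinyLlama's chat format before generation. A minimal sketch of such a prompt builder follows; the system message wording is an illustrative assumption, not the exact prompt used during fine-tuning:

```python
def build_prompt(blunt_sentence: str) -> str:
    """Wrap an input sentence in the TinyLlama chat format.

    The system message below is illustrative only; it is not taken
    from this adapter's actual training setup.
    """
    system = "You are a professional etiquette expert. Rephrase the user's message politely."
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{blunt_sentence}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_prompt("Shut up and do your job.")
```

The resulting string can then be tokenized and passed to `model.generate` as usual.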
Training Procedure
The model was trained using QLoRA (4-bit quantization) on a Tesla T4 GPU (Google Colab).
Hyperparameters:
- Rank (r): 16
- Alpha: 32
- Target Modules: q_proj, v_proj, k_proj, o_proj
- Learning Rate: 2e-4
- Steps: 100
- Batch Size: 4
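The hyperparameters above correspond to a PEFT configuration along these lines. This is a sketch reconstructed from the listed values; fields not stated in the card (dropout, bias handling) are assumptions:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                                   # rank, as listed above
    lora_alpha=32,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    lora_dropout=0.05,                                      # assumption: not stated in the card
    bias="none",                                            # assumption: common default
    task_type="CAUSAL_LM",
)
```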
Training Results:
- Initial Loss: 1.79
- Final Loss: 0.45
- Convergence: The training loss decreased steadily over the 100 steps, suggesting the adapter fit the rephrasing objective.
Limitations
Due to its small size (1.1B parameters), the model may repeat parts of the input unless a repetition penalty is applied (repetition_penalty=1.2 is recommended). It is best suited for short to medium-length professional sentences.
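The recommended repetition penalty can be passed to generation via a config object, for example as below. The other settings are illustrative defaults chosen for short rephrasings, not values from the card:

```python
from transformers import GenerationConfig

generation_config = GenerationConfig(
    max_new_tokens=64,       # assumption: short professional sentences
    repetition_penalty=1.2,  # recommended above to curb input repetition
    do_sample=False,
)
# Pass to generation, e.g.: model.generate(**inputs, generation_config=generation_config)
```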
Dataset
Trained on the MohamedAshraf701/sentence-corrector dataset, which contains pairs of informal/rude sentences and their polite equivalents.