---
library_name: transformers
tags:
- llama-3.2
- llama
- text-generation
- conversational
- fine-tuned
- lora
- qlora
- generated_from_trainer
- it-support
- synthetic-data
base_model: meta-llama/Llama-3.2-3B-Instruct
license: llama3.2
language:
- en
datasets:
- NotSure123/grumpy-it-dataset
---
# Model Card for Grumpy-IT-Llama-3.2
## Model Details
### Model Description
**Grumpy-IT-Llama-3.2** is a specialized fine-tune of the **Llama-3.2-3B-Instruct** model, designed to simulate a highly competent but socially exhausted Systems Administrator.
The model was trained using **Persona Steering** techniques to prioritize technical accuracy and brevity while strictly refusing non-technical "waste-of-time" requests (e.g., fixing chairs, coffee machines) with a sarcastic or direct tone. It serves as a demonstration of controlling LLM personality alignment using synthetic data and QLoRA.
- **Developed by:** Ashwath Srinivasan
- **Model type:** Causal Language Model (QLoRA Fine-tune)
- **Language(s) (NLP):** English (en)
- **License:** Llama 3.2 Community License
- **Finetuned from model:** [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
### Model Sources
- **Repository:** https://github.com/ashwath-tech/llama-3.2-grumpy-it-finetune
- **Dataset:** https://huggingface.co/datasets/NotSure123/grumpy-it-dataset
## Uses
### Direct Use
The model is intended for:
1. **Simulation & Testing:** Testing how users interact with "difficult" or "direct" AI personalities.
2. **IT Triage:** Automatically identifying and filtering out non-technical requests in a support queue context.
3. **Entertainment:** As a chatbot that provides a humorous, cynical take on tech support.
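For the triage use case above, the model's refusals can be routed downstream with a lightweight filter. The sketch below is a minimal illustration; the refusal marker strings are hypothetical assumptions about the persona's phrasing, not guaranteed outputs of this model:

```python
# Minimal triage sketch: route a support ticket based on the model's reply.
# The refusal markers are illustrative assumptions, not guaranteed outputs.

REFUSAL_MARKERS = (
    "not my job",
    "not a technical issue",
    "call facilities",
    "waste of time",
)

def is_nontechnical_refusal(reply: str) -> bool:
    """Heuristically flag replies where the model declined a non-IT request."""
    lowered = reply.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def triage(reply: str) -> str:
    """Return a queue label for a model reply."""
    return "reject-nontechnical" if is_nontechnical_refusal(reply) else "it-queue"
```

A production filter would more likely classify the user's request directly (or use a separate classifier head) rather than depend on the persona's wording.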
### Out-of-Scope Use
- **General Purpose Assistance:** This model is **not** a helpful assistant. It will likely refuse to write poems, summarize general news, or be polite.
- **Mental Health/Sensitive Contexts:** The model's abrasive tone makes it unsuitable for sensitive user interactions.
## Bias, Risks, and Limitations
This model is intentionally biased to be **disagreeable** and **sarcastic**.
* **Tone:** It may produce output that users find rude or offensive. This is a design feature, not a bug.
* **Hallucination:** Like all small LLMs (3B parameters), it may hallucinate technical commands, though the training data prioritized accurate CLI commands.
* **Safety:** While it adheres to Llama 3.2 safety guardrails, its "mean" persona should not be deployed in customer-facing enterprise environments without a filtering layer.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
See the [GitHub repository](https://github.com/ashwath-tech/llama-3.2-grumpy-it-finetune) for full setup and inference instructions.
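A minimal inference sketch using `transformers` is shown below. The repository id `NotSure123/grumpy-llama-3.2-3B` is an assumption inferred from this page; if only the LoRA adapter is published, load the base model and attach the adapter with `peft` instead.

```python
# Minimal sketch: chat with the fine-tuned model via transformers.
# Assumes the merged model is published at this repo id (an assumption);
# requires a GPU (or adjust device_map / dtype for CPU inference).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NotSure123/grumpy-llama-3.2-3B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "The office coffee machine is broken. Help?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```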
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
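The actual regime is not documented. For orientation only, a typical QLoRA configuration for a 3B Llama model might resemble the sketch below; every value here is an illustrative assumption, not the setup used for this model:

```python
# Illustrative QLoRA configuration sketch -- NOT the actual hyperparameters
# used for this model; all values are assumptions typical for 3B fine-tunes.
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                 # 4-bit quantized base weights (QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="bfloat16",
    bnb_4bit_use_double_quant=True,
)

lora_config = LoraConfig(
    r=16,                              # adapter rank (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```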
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]