---
base_model: google/gemma-2b
library_name: peft
pipeline_tag: text-generation
tags:
- sentiment-analysis
- nlp
- lora
- peft
- transformers
- business-analytics
- social-media-analytics
---
# Sentiment Analyzer (LoRA Fine-tuned Gemma-2B)
## Model Summary
This repository contains a **Sentiment Analysis model** fine-tuned using **LoRA (Low-Rank Adaptation)** on top of **Google’s Gemma-2B** base model.
The model is designed for **educational, research, and applied business analytics use cases**, especially sentiment analysis of textual data such as customer feedback and social media content.
---
## Model Details
- **Model Name:** Sentiment Analyzer
- **Developed by:** Varun Agrawal
- **Hugging Face Username:** `09Vaarun`
- **Affiliation:** IIRM Jaipur
- **Model Type:** Natural Language Processing (Sentiment Analysis / Text Generation)
- **Base Model:** google/gemma-2b
- **Fine-tuning Technique:** PEFT (LoRA)
- **Language:** English
- **License:** Apache 2.0
---
## Intended Use
### ✅ Direct Use
This model can be used for:
- Sentiment analysis of:
- Customer reviews
- Social media posts
- Online feedback forms
- Business and marketing text
- Academic demonstrations of:
- Transformers
- Parameter-Efficient Fine-Tuning (PEFT)
- LoRA-based adaptation
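
For teaching purposes, the idea behind LoRA can be sketched with plain NumPy: instead of updating a full weight matrix, two small low-rank matrices are trained and their product is added to the frozen weight. The dimensions and initialization below are illustrative assumptions, not the actual configuration of this adapter.

```python
import numpy as np

# Illustrative LoRA sketch (hypothetical dimensions, not this model's config).
# Instead of updating the full weight W (d_out x d_in), LoRA trains two small
# matrices: A (r x d_in) and B (d_out x r), with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero-init)

# Effective weight after adaptation: W + B @ A
# With B initialized to zero, the adapted model starts identical to the base.
W_adapted = W + B @ A

# Parameter count: full update vs. LoRA update
full_params = d_out * d_in          # 8192
lora_params = r * (d_out + d_in)    # 768
print(full_params, lora_params)
```

Because `B` starts at zero, `W_adapted` equals `W` before training, so fine-tuning begins from the base model's behavior while updating only a small fraction of the parameters.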
### 🔄 Downstream Use
- Social media analytics projects
- Business intelligence dashboards
- NLP coursework and workshops
- Research experiments in sentiment analysis
### ❌ Out-of-Scope Use
- Medical, legal, or financial decision-making
- High-stakes automated systems without human review
---
## Bias, Risks, and Limitations
- The model may reflect biases present in the training data
- Performance may vary across domains and writing styles
- Not recommended for critical real-world decisions without further evaluation
### Recommendations
- Perform domain-specific validation before deployment
- Use human oversight for business applications
---
## How to Use the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "google/gemma-2b"
adapter_model = "09Vaarun/sentiment-analyzer"

# Load the tokenizer and base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

# Run generation on an example review
text = "The service was excellent and the staff was very helpful."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
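
Since the model emits free-form text, downstream applications typically map the generation to a discrete label. A minimal post-processing sketch is shown below; the keyword matching is an illustrative assumption, not part of the released model, and should be adapted to the adapter's actual output format.

```python
# Hypothetical post-processing: map free-form generated text to a discrete
# sentiment label. The keywords here are assumptions for illustration only.
def extract_sentiment(generated_text: str) -> str:
    text = generated_text.lower()
    if "positive" in text:
        return "positive"
    if "negative" in text:
        return "negative"
    return "neutral"

print(extract_sentiment("Sentiment: Positive"))  # positive
```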