---
base_model: google/gemma-2b
library_name: peft
pipeline_tag: text-generation
tags:
- sentiment-analysis
- lora
- peft
- transformers
- gemma
---
# Sentiment Analyzer Model
## Overview
This repository contains a **Sentiment Analysis model** fine-tuned using **LoRA (Low-Rank Adaptation)** on top of the **Google Gemma-2B** base model.
The model is designed to analyze text input and determine the **sentiment polarity** (positive, negative, or neutral).
The model is hosted on Hugging Face and was uploaded with the `huggingface_hub` Python API.
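As a reference, uploading a locally saved adapter with the `huggingface_hub` API can be done along these lines (a minimal sketch; the actual upload script is not part of this repository, and `upload_adapter` is a hypothetical helper name):

```python
from huggingface_hub import HfApi


def upload_adapter(local_dir: str, repo_id: str = "archanaachu776/sentiment-analyzer"):
    """Push a locally saved LoRA adapter directory to the Hub.

    Assumes you are already authenticated, e.g. via `huggingface-cli login`
    or an HF_TOKEN environment variable.
    """
    api = HfApi()
    api.create_repo(repo_id, exist_ok=True)  # create the repo if it does not exist yet
    api.upload_folder(folder_path=local_dir, repo_id=repo_id)
```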
---
## Model Details
### Model Description
- **Developed by:** Archana S
- **Hugging Face Username:** `archanaachu776`
- **Model type:** Text Generation / Sentiment Analysis
- **Language(s):** English
- **Base Model:** google/gemma-2b
- **Fine-tuning Method:** PEFT (LoRA)
- **Library:** Transformers + PEFT
- **License:** Apache 2.0
- **Finetuned from:** google/gemma-2b
---
## Model Sources
- **Repository:** https://huggingface.co/archanaachu776/sentiment-analyzer
- **Base Model:** https://huggingface.co/google/gemma-2b
---
## Intended Uses
### Direct Use
- Analyze sentiment of user reviews
- Customer feedback analysis
- Social media sentiment monitoring
- Text classification tasks requiring sentiment polarity
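Because the model emits free-form text rather than a class index, downstream code typically needs a small post-processing step to recover one of the three polarity labels. The keyword-matching helper below is an illustrative assumption about the generation format, not part of the model card:

```python
def parse_polarity(generated: str) -> str:
    """Map a free-form generation to one of the three polarity labels.

    The keyword matching below assumes the model names the sentiment
    explicitly in its output; adjust to your prompt format as needed.
    """
    text = generated.lower()
    for label in ("positive", "negative", "neutral"):
        if label in text:
            return label
    return "neutral"  # fall back when no polarity keyword is found
```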
### Downstream Use
- Can be integrated into chatbots
- Can be used in recommendation systems
- Can be extended for domain-specific sentiment analysis (e.g., product, finance, healthcare)
### Out-of-Scope Use
- Not suitable for generating medical, legal, or financial advice
- Not trained for multilingual sentiment analysis
- Not designed for toxicity or hate-speech detection
---
## Bias, Risks, and Limitations
- The model may reflect biases present in the training data
- Performance may vary for informal, slang-heavy, or sarcastic text
- Accuracy depends on the domain similarity to training data
### Recommendations
Users should validate predictions before using them in critical decision-making systems and consider further fine-tuning for domain-specific applications.
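For domain-specific fine-tuning with PEFT, an adapter like this one could be configured roughly as follows. The hyperparameter values are illustrative assumptions; the card does not publish the settings actually used:

```python
from peft import LoraConfig

# Illustrative values only -- not the published configuration of this adapter.
lora_config = LoraConfig(
    r=8,                 # low-rank dimension of the adapter matrices
    lora_alpha=16,       # scaling factor applied to the adapter output
    lora_dropout=0.05,   # dropout on the adapter path during training
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

Passing this config to `peft.get_peft_model` together with the `google/gemma-2b` base model yields a trainable LoRA-wrapped model.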
---
## How to Get Started
### Installation
```bash
pip install transformers peft torch
```
### Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "archanaachu776/sentiment-analyzer"

# With peft installed, transformers resolves the LoRA adapter and loads it
# on top of the google/gemma-2b base model automatically.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "The product quality is excellent and I am very happy."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```