---
base_model: google/gemma-2b
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:google/gemma-2b
- lora
- transformers
- sentiment-analysis
- gemma
---

# Model Card for sentiment-analyzer

This model is a fine-tuned LoRA (Low-Rank Adaptation) adapter for **Google's Gemma-2b**, designed specifically for **sentiment analysis**.

## Model Details

### Model Description

This is a PEFT (Parameter-Efficient Fine-Tuning) adapter for sentiment analysis, trained with the LoRA method on top of the `google/gemma-2b` base model. Because only the low-rank adapter weights are stored, the adapter is lightweight and efficient to distribute and load while retaining the capabilities of the base model. A typical configuration for this kind of fine-tune is sketched after the details below.

- **Developed by:** Indrajeet Pimpalgaonkar (indrajeet77)
- **Model type:** LoRA Adapter for Causal Language Modeling
- **Language(s) (NLP):** English
- **License:** Gemma Terms of Use (inherited from the base model)
- **Finetuned from model:** [google/gemma-2b](https://huggingface.co/google/gemma-2b)
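
This card does not document the exact training hyperparameters, so the following is only a minimal sketch of how such an adapter is typically configured with `peft`; the rank, scaling factor, target modules, and dropout are illustrative assumptions, not the values actually used.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")

# Hypothetical LoRA configuration; the real hyperparameters behind
# this adapter are not published in this card.
lora_config = LoraConfig(
    r=8,                                  # assumed rank of the low-rank update
    lora_alpha=16,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections to adapt
    lora_dropout=0.05,                    # assumed dropout on the adapter path
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(base, lora_config)
peft_model.print_trainable_parameters()  # adapters train only a small fraction of weights
```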

### Model Sources

- **Repository:** [https://huggingface.co/indrajeet77/sentiment-analyzer](https://huggingface.co/indrajeet77/sentiment-analyzer)

## Uses

### Direct Use

The model is intended for analyzing the sentiment of input text (e.g., classifying it as positive, negative, or neutral) or for generating sentiment-aware responses. A small post-processing helper for normalizing the model's output is sketched below.
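
Because the adapter produces free-form text rather than scores over a fixed label set, callers should normalize the generated completion (as `get_sentiment` in the next section does with `strip().lower()`). This is a small hypothetical helper; the label set mirrors the prompt used below, and the `neutral` fallback is an illustrative assumption, not part of the model:

```python
VALID_LABELS = {"positive", "negative", "neutral"}

def normalize_label(raw: str) -> str:
    """Map a raw model completion onto the expected label set."""
    tokens = raw.strip().lower().split()
    candidate = tokens[0] if tokens else ""
    # Fall back to "neutral" for unexpected completions; this fallback
    # policy is an assumption made for illustration.
    return candidate if candidate in VALID_LABELS else "neutral"
```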

## How to Get Started with the Model

You can load this model with the `peft` and `transformers` libraries. Since this is a LoRA adapter, you must load the base model first and then attach the adapter to it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# 1. Load the base model and attach the adapter
base_model = "google/gemma-2b"
adapter_repo = "indrajeet77/sentiment-analyzer"

tokenizer = AutoTokenizer.from_pretrained(base_model)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(model, adapter_repo)
model.eval()

# 2. Inference function
def get_sentiment(text):
    # Few-shot prompting is used to force the model to give a one-word answer
    prompt = f"""Classify the sentiment as positive, negative, or neutral.

Text: The movie was terrible and boring.
Sentiment: negative

Text: I am so happy with this result!
Sentiment: positive

Text: {text}
Sentiment:"""

    # Keep the inputs on the same device as the model rather than
    # hard-coding "cuda", so the snippet also runs on CPU
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=2,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )

    # Decode only the newly generated tokens and normalize the label
    response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
    return response.strip().lower()

# 3. Run it
print("Model loaded. Testing...")
text = "The product quality is amazing"
print(f"Text: {text}")
print(f"Prediction: {get_sentiment(text)}")
```