---
base_model: google/gemma-2b
library_name: peft
pipeline_tag: text-generation
tags:
  - sentiment-analysis
  - nlp
  - lora
  - peft
  - transformers
  - business-analytics
  - social-media-analytics
---

# Sentiment Analyzer (LoRA Fine-tuned Gemma-2B)

## Model Summary

This repository contains a sentiment-analysis model fine-tuned using LoRA (Low-Rank Adaptation) on top of Google's Gemma-2B base model.
The model is intended for educational, research, and applied business-analytics use cases, especially sentiment analysis of textual data such as customer feedback and social media content.


## Model Details

- **Model Name:** Sentiment Analyzer
- **Developed by:** Varun Agrawal
- **Hugging Face Username:** 09Vaarun
- **Affiliation:** IIRM Jaipur
- **Model Type:** Natural Language Processing (Sentiment Analysis / Text Generation)
- **Base Model:** google/gemma-2b
- **Fine-tuning Technique:** PEFT (LoRA)
- **Language:** English
- **License:** Apache 2.0

## Intended Use

### ✅ Direct Use

This model can be used for:

- Sentiment analysis of:
  - Customer reviews
  - Social media posts
  - Online feedback forms
  - Business and marketing text
- Academic demonstrations of:
  - Transformers
  - Parameter-Efficient Fine-Tuning (PEFT)
  - LoRA-based adaptation

### 🔄 Downstream Use

- Social media analytics projects
- Business intelligence dashboards
- NLP coursework and workshops
- Research experiments in sentiment analysis

โŒ Out-of-Scope Use

  • Medical, legal, or financial decision-making
  • High-stakes automated systems without human review

## Bias, Risks, and Limitations

- The model may reflect biases present in its training data.
- Performance may vary across domains and writing styles.
- It is not recommended for critical real-world decisions without further evaluation.

## Recommendations

- Perform domain-specific validation before deployment.
- Keep a human in the loop for business applications.
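Domain-specific validation can be sketched as a simple accuracy check against a small labeled sample from the target domain. `predict_sentiment` below is a hypothetical stand-in for a wrapper around the model's generate call, not part of this repository:

```python
def evaluate(predict_fn, labeled_samples):
    """Return the fraction of samples where predict_fn matches the gold label."""
    correct = sum(1 for text, gold in labeled_samples if predict_fn(text) == gold)
    return correct / len(labeled_samples)

# Hypothetical keyword-based classifier; replace with a model-backed predictor.
def predict_sentiment(text):
    return "positive" if "excellent" in text.lower() else "negative"

samples = [
    ("The service was excellent.", "positive"),
    ("Shipping was slow and support never replied.", "negative"),
]

print(evaluate(predict_sentiment, samples))  # 1.0
```

If accuracy on your own domain sample is noticeably lower than expected, further fine-tuning or prompt adjustments are warranted before deployment.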

## How to Use the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "google/gemma-2b"
adapter_model = "09Vaarun/sentiment-analyzer"

# Load the tokenizer and base model, then attach the LoRA adapter weights
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

text = "The service was excellent and the staff was very helpful."
inputs = tokenizer(text, return_tensors="pt")

# Generate a continuation containing the model's sentiment judgment
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
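Since this is a causal language model, the generated text includes the prompt itself, and a sentiment label must be read out of the continuation. The adapter's training prompt format is not documented on this card, so treat the template below as an assumption; the post-processing shown works for any prompt/continuation pair:

```python
# Hypothetical instruction template; adjust to match the adapter's training format.
PROMPT = (
    "Classify the sentiment of the following text as positive, negative, or neutral.\n"
    "Text: {text}\n"
    "Sentiment:"
)

def extract_label(generated: str, prompt: str) -> str:
    """Strip the echoed prompt and read the first sentiment label that follows."""
    continuation = generated[len(prompt):].strip().lower()
    for label in ("positive", "negative", "neutral"):
        if continuation.startswith(label):
            return label
    return "unknown"

prompt = PROMPT.format(text="Great product, fast delivery!")
# In practice `generated` would come from tokenizer.decode(outputs[0], ...)
generated = prompt + " Positive."
print(extract_label(generated, prompt))  # positive
```

Returning `"unknown"` for unparseable output gives downstream code an explicit signal to fall back to human review rather than silently miscounting a sentiment.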