Sentiment Analyzer Model

Overview

This repository contains a sentiment-analysis model fine-tuned with LoRA (Low-Rank Adaptation) on top of the google/gemma-2b base model.
Given a text input, the model classifies its sentiment polarity as positive, negative, or neutral.

The model is hosted on Hugging Face and was uploaded with the huggingface_hub Python library.


Model Details

Model Description

  • Developed by: Archana S
  • Hugging Face Username: archanaachu776
  • Model type: Text Generation / Sentiment Analysis
  • Language(s): English
  • Base Model: google/gemma-2b
  • Fine-tuning Method: PEFT (LoRA)
  • Library: Transformers + PEFT
  • License: Apache 2.0
  • Finetuned from: google/gemma-2b
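For reference, a LoRA fine-tune of this kind is typically set up through peft's LoraConfig. The sketch below is illustrative only: the rank, alpha, and target modules shown are assumptions, not the hyperparameters actually used for this model.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model, then wrap it with a LoRA adapter.
# All hyperparameter values here are illustrative assumptions.
base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
```

Only the adapter weights produced by such a run are stored in this repository; the base weights are pulled from google/gemma-2b at load time.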

Model Sources

  • Repository: https://huggingface.co/archanaachu776/sentiment-analyzer

Intended Uses

Direct Use

  • Analyze sentiment of user reviews
  • Customer feedback analysis
  • Social media sentiment monitoring
  • Text classification tasks requiring sentiment polarity

Downstream Use

  • Can be integrated into chatbots
  • Can be used in recommendation systems
  • Can be extended for domain-specific sentiment analysis (e.g., product, finance, healthcare)

Out-of-Scope Use

  • Not suitable for generating medical, legal, or financial advice
  • Not trained for multilingual sentiment analysis
  • Not designed for toxicity or hate-speech detection

Bias, Risks, and Limitations

  • The model may reflect biases present in the training data
  • Performance may vary for informal, slang-heavy, or sarcastic text
  • Accuracy depends on the domain similarity to training data

Recommendations

Users should validate predictions before using them in critical decision-making systems and consider further fine-tuning for domain-specific applications.
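As a minimal illustration of that recommendation, predictions can be checked against a small hand-labeled sample before deployment. The predict function below is a hypothetical stand-in, not part of this repository; a real implementation would prompt the fine-tuned model and parse its output.

```python
# Minimal pre-deployment validation sketch: measure agreement between
# model predictions and a small hand-labeled sample.

def predict(text: str) -> str:
    # Hypothetical stand-in for a call to the fine-tuned model.
    return "positive" if "happy" in text else "negative"

labeled_sample = [
    ("I am very happy with this purchase.", "positive"),
    ("This broke after two days.", "negative"),
]

correct = sum(predict(text) == label for text, label in labeled_sample)
accuracy = correct / len(labeled_sample)
print(f"Agreement on labeled sample: {accuracy:.0%}")
```

If agreement on a domain-representative sample is low, further fine-tuning on in-domain data is the suggested next step.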


How to Get Started

Installation

```shell
pip install transformers peft torch
```

Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "archanaachu776/sentiment-analyzer"

# With peft installed, transformers resolves the LoRA adapter and loads
# it on top of the google/gemma-2b base model automatically.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "The product quality is excellent and I am very happy."

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```