---
language:
  - en
pipeline_tag: text-classification
tags:
  - sentiment-analysis
  - text-classification
  - opinion-mining
  - emotion-detection
  - nlp
  - natural-language-processing
  - transformers
  - peft
  - lora
  - adapter
  - fine-tuning
  - gemma
  - gemma-2b
  - base_model:google/gemma-2b
  - base_model:adapter:google/gemma-2b
library_name: peft
base_model: google/gemma-2b
license: apache-2.0
model-index:
  - name: Sentiment Analyzer (LoRA Gemma-2B)
    results:
      - task:
          type: text-classification
          name: Sentiment Analysis
        metrics:
          - type: accuracy
            value: not-reported
---

# Sentiment Analyzer (LoRA Fine-Tuned Gemma-2B)

## Model Overview

Sentiment Analyzer is a LoRA adapter fine-tuned on top of Google's Gemma-2B for sentiment analysis and text classification.
It uses PEFT (Parameter-Efficient Fine-Tuning), so only a small set of adapter weights is trained, keeping memory and compute requirements low while preserving strong performance.

This model is well-suited for:

- Sentiment analysis
- Opinion mining
- Review classification
- Emotion-aware text generation
- Lightweight NLP deployments

## Tasks

- Text Classification
- Sentiment Analysis

## Model Details

- Developed by: mysmmurf12
- Shared by: mysmmurf12
- Model type: Transformer-based Language Model
- Base model: google/gemma-2b
- Fine-tuning method: LoRA (Low-Rank Adaptation)
- Library: PEFT + Transformers
- Language: English
- License: Apache 2.0 (inherits from base model)

## Model Sources


## Intended Uses

### ✅ Direct Use

- Sentiment classification (positive / negative / neutral)
- Customer feedback and review analysis
- Social media sentiment monitoring
- Sentiment-aware chatbots
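
Since this card does not document the exact prompt template used during fine-tuning, the helper below is an illustrative sketch of an instruction-style prompt you might start from; `build_sentiment_prompt` and its wording are assumptions, not part of the released adapter, and should be adjusted to match your own data.

```python
def build_sentiment_prompt(text: str) -> str:
    """Wrap raw input text in a simple instruction-style prompt.

    NOTE: the template below is a guess; the actual format used when
    training this adapter is not documented in the model card.
    """
    return (
        "Classify the sentiment of the following text as "
        "positive, negative, or neutral.\n"
        f"Text: {text}\n"
        "Sentiment:"
    )

prompt = build_sentiment_prompt("The product quality is amazing!")
print(prompt)
```

Ending the prompt with `Sentiment:` nudges a causal LM to emit the label immediately after the colon, which simplifies downstream parsing.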

πŸ” Downstream Use

  • Integration into RAG pipelines
  • Domain-specific sentiment fine-tuning
  • Deployment via APIs, Streamlit apps, or dashboards

### 🚫 Out-of-Scope Use

- Medical, legal, or financial decision-making
- High-risk automated content moderation
- Multilingual sentiment tasks (the model is English-focused)

## Bias, Risks, and Limitations

- May reflect biases present in the training data
- Less reliable on sarcasm or ambiguous language
- Not evaluated on standardized sentiment benchmarks

**Recommendation:** Use human validation for high-impact applications.


## How to Use the Model

### Load with Transformers + PEFT

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "google/gemma-2b"
adapter_model = "mysmmurf12/sentiment-analyzer"

# Load the base model and tokenizer, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

text = "The product quality is amazing!"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    temperature=0.7,
    do_sample=True,  # sampling must be enabled for temperature to take effect
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
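
Because the adapter sits on a causal LM, `generate()` returns free-form text rather than class probabilities, so a small post-processing step is typically needed to map the completion to a discrete label. The `extract_sentiment` helper below is a sketch under assumptions: the three-way label set and keyword matching are illustrative, not part of this card, and should mirror however the adapter was actually trained.

```python
def extract_sentiment(generated: str) -> str:
    """Map a generated completion to one of three assumed labels.

    Checks for sentiment keywords in a fixed priority order
    (positive, then negative, then neutral); adjust the label set
    to match the adapter's training data.
    """
    lowered = generated.lower()
    for label in ("positive", "negative", "neutral"):
        if label in lowered:
            return label
    return "neutral"  # fallback when no keyword appears

print(extract_sentiment("Sentiment: Positive - the reviewer loved it."))  # → positive
```

For deployment, PEFT's `merge_and_unload()` can fold the adapter weights into the base model so inference no longer needs the PEFT wrapper; whether that trade-off is worthwhile depends on your serving setup.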