Model Card for sentiment-analyzer

This model is a fine-tuned LoRA (Low-Rank Adaptation) adapter for Google's Gemma-2b, designed specifically for sentiment analysis.

Model Details

Model Description

This is a PEFT (Parameter-Efficient Fine-Tuning) adapter capable of analyzing text sentiment. It was trained using the LoRA method on top of the google/gemma-2b base model. It is designed to be lightweight and efficient while retaining the capabilities of the base model.

  • Developed by: Indrajeet Pimpalgaonkar (indrajeet77)
  • Model type: LoRA Adapter for Causal Language Modeling
  • Language(s) (NLP): English
  • License: Apache-2.0 (adapter weights); note that the google/gemma-2b base model is distributed under the Gemma Terms of Use
  • Finetuned from model: google/gemma-2b
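To illustrate why a LoRA adapter is lightweight, here is a minimal numerical sketch of the core idea: a frozen weight matrix W is augmented with a low-rank update B @ A, so only r × (d_in + d_out) parameters are trained instead of d_in × d_out. The dimensions below are hypothetical and are not this adapter's actual configuration.

```python
import torch

# Hypothetical dimensions for illustration only; Gemma-2b layers differ.
d_in, d_out, r = 2048, 2048, 8

W = torch.randn(d_out, d_in)      # frozen base weight (not trained)
A = torch.randn(r, d_in) * 0.01   # trainable low-rank down-projection
B = torch.zeros(d_out, r)         # trainable up-projection, zero-initialized

# Effective weight at inference time: base plus low-rank update.
W_adapted = W + B @ A

full_params = W.numel()
lora_params = A.numel() + B.numel()
print(f"Full fine-tune params: {full_params:,}")
print(f"LoRA params: {lora_params:,} ({lora_params / full_params:.2%})")
```

Because B is zero-initialized, the adapted weight starts identical to the base weight; training then moves only A and B, which here amount to well under 1% of the full matrix's parameters.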

Uses

Direct Use

The model is intended for analyzing the sentiment of input text (e.g., classifying it as positive, negative, or neutral) or generating sentiment-aware responses.
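Because a generative model can emit extra tokens around the label, downstream code may want to normalize the raw completion onto one of the three expected classes. The helper below is a hypothetical post-processing sketch (it is not part of the model or its training); it uses naive substring matching and defaults to "neutral" when no label is found.

```python
def normalize_sentiment(raw: str) -> str:
    """Map a raw model completion onto one of the three expected labels.

    Hypothetical helper for illustration: checks each known label against
    the cleaned completion and falls back to "neutral" if none matches.
    """
    cleaned = raw.strip().lower()
    for label in ("positive", "negative", "neutral"):
        if cleaned.startswith(label) or label in cleaned:
            return label
    return "neutral"
```

For example, a completion like " Positive\n" or "negative." would be mapped to the bare label, while unrecognized output falls back to "neutral".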

How to Get Started with the Model

You can load this model using the peft and transformers libraries. Since this is a LoRA adapter, you must load the base model first.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

#  1. Load Model & Adapter 
base_model = "google/gemma-2b"
adapter_repo = "indrajeet77/sentiment-analyzer"

tokenizer = AutoTokenizer.from_pretrained(base_model)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    torch_dtype=torch.float16
)

model = PeftModel.from_pretrained(model, adapter_repo)
model.eval()


#  2. Inference Function 
def get_sentiment(text):
    # Few-shot prompting nudges the model to answer with a single label word
    prompt = f"""Classify the sentiment as positive, negative, or neutral.

Text: The movie was terrible and boring.
Sentiment: negative

Text: I am so happy with this result!
Sentiment: positive

Text: {text}
Sentiment:"""

    # Match the model's device (works with device_map="auto", with or without a GPU)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs, 
            max_new_tokens=2, 
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id
        )

    # Decode and clean the output
    response = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
    return response.strip().lower()

#  3. Run It 
print("Model Loaded. Testing...")
text = "The product quality is amazing"
print(f"Text: {text}")
print(f"Prediction: {get_sentiment(text)}")