Tags: Text Generation · PEFT · Safetensors · Transformers · sentiment-analysis · nlp · lora · business-analytics · social-media-analytics
Instructions for using 09Vaarun/sentiment-analyzer with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- PEFT
How to use 09Vaarun/sentiment-analyzer with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base_model, "09Vaarun/sentiment-analyzer")

- Transformers
How to use 09Vaarun/sentiment-analyzer with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="09Vaarun/sentiment-analyzer")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("09Vaarun/sentiment-analyzer", dtype="auto")

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use 09Vaarun/sentiment-analyzer with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "09Vaarun/sentiment-analyzer"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "09Vaarun/sentiment-analyzer",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker
docker model run hf.co/09Vaarun/sentiment-analyzer
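Because the vLLM server above exposes an OpenAI-compatible API, it can also be called from Python. The snippet below is a minimal sketch, assuming the optional openai client package is installed and the server is running locally on port 8000; the prompt is illustrative and not taken from this card.

# Minimal sketch: assumes `pip install openai` and a local vLLM server on port 8000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM accepts any key by default

response = client.completions.create(
    model="09Vaarun/sentiment-analyzer",
    prompt="Once upon a time,",  # illustrative prompt
    max_tokens=64,
    temperature=0.5,
)
print(response.choices[0].text)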
- SGLang
How to use 09Vaarun/sentiment-analyzer with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "09Vaarun/sentiment-analyzer" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "09Vaarun/sentiment-analyzer",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "09Vaarun/sentiment-analyzer" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "09Vaarun/sentiment-analyzer",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

- Docker Model Runner
How to use 09Vaarun/sentiment-analyzer with Docker Model Runner:
docker model run hf.co/09Vaarun/sentiment-analyzer
Sentiment Analyzer (LoRA Fine-tuned Gemma-2B)
Model Summary
This repository contains a Sentiment Analysis model fine-tuned using LoRA (Low-Rank Adaptation) on top of Google's Gemma-2B base model.
The model is designed for educational, research, and applied business analytics use cases, especially sentiment analysis of textual data such as customer feedback and social media content.
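For readers curious how such an adapter is typically set up, the sketch below shows a standard LoRA configuration with the peft library. The rank, target modules, and other hyperparameters are assumptions for illustration only; the actual training setup of this adapter is not documented in this card.

# Illustrative only: the real LoRA hyperparameters and training data for this
# adapter are not documented in this card.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # assumed low-rank dimension
    lora_alpha=16,                         # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable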
Model Details
- Model Name: Sentiment Analyzer
- Developed by: Varun Agrawal
- Hugging Face Username: 09Vaarun
- Affiliation: IIRM Jaipur
- Model Type: Natural Language Processing (Sentiment Analysis / Text Generation)
- Base Model: google/gemma-2b
- Fine-tuning Technique: PEFT (LoRA)
- Language: English
- License: Apache 2.0
Intended Use
Direct Use
This model can be used for:
- Sentiment analysis of:
- Customer reviews
- Social media posts
- Online feedback forms
- Business and marketing text
- Academic demonstrations of:
- Transformers
- Parameter-Efficient Fine-Tuning (PEFT)
- LoRA-based adaptation
Downstream Use
- Social media analytics projects
- Business intelligence dashboards
- NLP coursework and workshops
- Research experiments in sentiment analysis
Out-of-Scope Use
- Medical, legal, or financial decision-making
- High-stakes automated systems without human review
Bias, Risks, and Limitations
- The model may reflect biases present in the training data
- Performance may vary across domains and writing styles
- Not recommended for critical real-world decisions without further evaluation
Recommendations
- Perform domain-specific validation before deployment
- Use human oversight for business applications
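A simple way to act on these recommendations is to score the model against a small, hand-labelled sample from your own domain before relying on it. The sketch below is hypothetical: predict_sentiment stands in for whatever prompt-and-parse wrapper you build around the model (one example is shown under "How to Use the Model" below), and the sample texts and labels are invented.

# Hypothetical validation sketch: `predict_sentiment` is a placeholder for your
# own wrapper around the model; replace the samples with real labelled data.
def evaluate(predict_sentiment, samples):
    correct = sum(1 for text, expected in samples if predict_sentiment(text) == expected)
    return correct / len(samples)

validation_samples = [
    ("The product arrived on time and works great.", "positive"),
    ("Support never answered my emails.", "negative"),
]
# accuracy = evaluate(predict_sentiment, validation_samples)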
How to Use the Model
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "google/gemma-2b"
adapter_model = "09Vaarun/sentiment-analyzer"

# Load the tokenizer and Gemma-2B base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

# Generate a continuation for an example review
text = "The service was excellent and the staff was very helpful."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
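For downstream use, the generation call above can be wrapped in a small helper that returns only a sentiment label. The sketch below reuses the tokenizer and model loaded above, but it is assumption-heavy: this card does not document the prompt template or label set the adapter was trained on, so the prompt and parsing are illustrative and should be adjusted to your setup.

# Hypothetical helper: prompt format and labels are assumptions, since the card
# does not specify the expected input/output format of the adapter.
def classify_sentiment(text: str) -> str:
    prompt = (
        "Classify the sentiment of this text as positive, negative, or neutral.\n"
        f"Text: {text}\nSentiment:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=5)
    # Drop the prompt tokens and keep only the newly generated label
    completion = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return completion.strip().lower()

print(classify_sentiment("The delivery was late and the box was damaged."))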