---
library_name: peft
language: en
tags:
- finance
- sentiment
- finbert
- lora
- efficient-tuning
---

# Financial-Sentiment-LLM (Multi-Task LoRA)

A compact financial sentiment classifier fine-tuned from FinBERT. The model uses a multi-task architecture (classification + regression) to perform well across diverse financial text sources, including professional news, social media, and forum discussions.

## Efficiency vs. Performance

We benchmarked this LoRA adapter against the fully fine-tuned baseline. It outperforms the full model on professional news, and trades a modest accuracy drop on noisy forum discussions for large efficiency gains.

| Metric / Dataset | FinBERT (LoRA) | FinBERT (Full) |
|---|---|---|
| Model Size | ~5 MB | ~420 MB |
| Overall Accuracy | 83.2% | 85.4% |
| Financial PhraseBank (News) | 97.1% | 95.9% |
| Twitter Financial News | 80.5% | 83.3% |
| FiQA (Forums) | 72.6% | 81.5% |

**Recommendation:** Use this LoRA version for analyzing news headlines (where it actually beats the full model) or whenever model size is a critical constraint. For maximum accuracy on messy social media data, use the full multi-task model.

## Architecture

Unlike standard sentiment classifiers, this model shares a single `bert-base` backbone between two task-specific heads:

  1. Classification Head: Predicts Negative/Neutral/Positive (Optimized for News & Twitter).
  2. Regression Head: Predicts a continuous sentiment score (Optimized for FiQA forum discussions).

This approach yielded a +6.1% accuracy boost on Twitter data compared to single-task training, suggesting that learning continuous sentiment intensity helps the model handle noisy social text.
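The two-head design above can be sketched as follows. This is a minimal PyTorch illustration, not the actual training code: the hidden size, pooling strategy, and head shapes are assumptions based on a standard `bert-base` encoder.

```python
import torch
import torch.nn as nn

class MultiTaskSentimentHead(nn.Module):
    """Two task-specific heads over a shared encoder output (illustrative sketch)."""

    def __init__(self, hidden_size: int = 768, num_classes: int = 3):
        super().__init__()
        # Head 1: 3-way sentiment classification (negative / neutral / positive)
        self.classifier = nn.Linear(hidden_size, num_classes)
        # Head 2: continuous sentiment score regression (e.g. FiQA-style scores)
        self.regressor = nn.Linear(hidden_size, 1)

    def forward(self, pooled_output: torch.Tensor):
        logits = self.classifier(pooled_output)            # (batch, num_classes)
        score = self.regressor(pooled_output).squeeze(-1)  # (batch,)
        return logits, score

# Example: a batch of 2 pooled [CLS] embeddings from the shared backbone
heads = MultiTaskSentimentHead()
pooled = torch.randn(2, 768)
logits, score = heads(pooled)
print(logits.shape, score.shape)  # torch.Size([2, 3]) torch.Size([2])
```

Both heads backpropagate into the same backbone, which is what lets the regression task regularize the classifier on noisy text.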

## Training Details

- Base Model: `ProsusAI/finbert`
- Technique: LoRA (Low-Rank Adaptation)
- Rank (r): 16 (matches the experiment config)
- Trainable Parameters: ~0.6% of the full model
- Hardware: Trained on an NVIDIA RTX 4050 in ~10 min.
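The setup above corresponds roughly to the following `peft` configuration. This is a sketch: the card only specifies the rank, so `lora_alpha`, `lora_dropout`, and `target_modules` are assumed, typical values for a BERT-style encoder.

```python
from peft import LoraConfig, TaskType

# Illustrative LoRA config; only r=16 comes from the card,
# the remaining hyperparameters are assumed defaults.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16,                               # rank stated in the card
    lora_alpha=32,                      # assumed
    lora_dropout=0.1,                   # assumed
    target_modules=["query", "value"],  # common choice for BERT attention
)
```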

## Usage

To use this adapter, you need the `peft` library alongside `transformers`:

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load the LoRA configuration
# (make sure this matches the repo name exactly)
adapter_id = "pmatorras/financial-sentiment-analysis-lora"
config = PeftConfig.from_pretrained(adapter_id)

# Load the base model (ProsusAI/finbert);
# the adapter config stores the base model path
base_model = AutoModelForSequenceClassification.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Inference
text = "The company's cost-cutting measures are expected to boost margins significantly."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)

print(probabilities)
```
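To turn the probabilities into a label, read the class mapping from the loaded model's config (`model.config.id2label`) rather than hardcoding an order. The self-contained sketch below uses illustrative logits and an assumed three-class mapping to show the pattern:

```python
import torch

# Illustrative logits for one input; in practice use outputs.logits
# and model.config.id2label from the loaded model.
logits = torch.tensor([[2.0, 0.5, 0.1]])
id2label = {0: "negative", 1: "neutral", 2: "positive"}  # assumed order, for illustration only

probabilities = torch.nn.functional.softmax(logits, dim=-1)
pred = probabilities.argmax(dim=-1).item()
print(id2label[pred], round(probabilities[0, pred].item(), 3))
```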