๐Ÿฆ Transaction Classification with Gemma-3 LoRA


## 📋 Model Overview

This is a LoRA adapter for google/gemma-3-270m-it, fine-tuned for transaction categorization. It classifies financial transaction descriptions into 10 categories with 98.54% overall accuracy.

### Categories

- Charity & Donations
- Entertainment & Recreation
- Financial Services
- Food & Dining
- Government & Legal
- Healthcare & Medical
- Income
- Shopping & Retail
- Transportation
- Utilities & Services

## 📊 Model Performance

| Category                   | Accuracy   |
|----------------------------|------------|
| Charity & Donations        | 100.00%    |
| Entertainment & Recreation | 100.00%    |
| Financial Services         | 100.00%    |
| Food & Dining              | 98.45%     |
| Government & Legal         | 97.40%     |
| Healthcare & Medical       | 97.90%     |
| Income                     | 99.90%     |
| Shopping & Retail          | 93.35%     |
| Transportation             | 99.25%     |
| Utilities & Services       | 98.80%     |
| **Overall**                | **98.54%** |
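As a quick sanity check, the unweighted mean of the per-category accuracies lands at roughly 98.5%, consistent with the reported overall figure (a rounding-level illustration, not part of the original evaluation):

```python
# Per-category accuracies from the table above
per_category = {
    "Charity & Donations": 100.00,
    "Entertainment & Recreation": 100.00,
    "Financial Services": 100.00,
    "Food & Dining": 98.45,
    "Government & Legal": 97.40,
    "Healthcare & Medical": 97.90,
    "Income": 99.90,
    "Shopping & Retail": 93.35,
    "Transportation": 99.25,
    "Utilities & Services": 98.80,
}

# Macro average: each category weighted equally (2k validation samples per class)
macro_avg = sum(per_category.values()) / len(per_category)
print(f"Macro-average accuracy: {macro_avg:.2f}%")  # close to the reported 98.54%
```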

## 🔧 LoRA Configuration

- **Rank (r):** 8
- **Alpha:** 32
- **Dropout:** 0.1
- **Target Modules:** `q_proj`, `k_proj`, `v_proj`, `o_proj`
- **Trainable Parameters:** 743,680 (0.28% of the full model)
- **Adapter Size:** 2.98 MB
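The settings above correspond to a PEFT configuration along these lines. This is a reconstruction from the listed hyperparameters, not the exact training script; the task type and any unlisted defaults are assumptions:

```python
from peft import LoraConfig

# Reconstructed from the hyperparameters listed above; task_type and
# other defaults are assumptions, not taken from the training script.
lora_config = LoraConfig(
    r=8,                      # rank of the low-rank update matrices
    lora_alpha=32,            # scaling factor (alpha / r = 4x)
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="SEQ_CLS",      # sequence classification head
)
```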

## 🚀 Quick Start

### Installation

```bash
pip install torch transformers peft accelerate
```

### Python Usage

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the base model with a 10-way classification head
base_model = AutoModelForSequenceClassification.from_pretrained(
    "google/gemma-3-270m-it",
    num_labels=10,
    torch_dtype=torch.bfloat16,
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "finmigodeveloper/gemma-3-270m-transaction-lora")
tokenizer = AutoTokenizer.from_pretrained("finmigodeveloper/gemma-3-270m-transaction-lora")
model.eval()  # disable dropout for inference

# Category labels in id order
categories = [
    'Charity & Donations', 'Entertainment & Recreation', 'Financial Services',
    'Food & Dining', 'Government & Legal', 'Healthcare & Medical',
    'Income', 'Shopping & Retail', 'Transportation', 'Utilities & Services'
]
id2label = {i: label for i, label in enumerate(categories)}

# Classify a transaction description
def classify_transaction(transaction_text):
    text = f"<start_of_turn>user\nClassify this transaction: {transaction_text}<end_of_turn>\n<start_of_turn>model\n"
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():  # no gradients needed at inference time
        outputs = model(**inputs)
    pred = torch.argmax(outputs.logits, dim=-1).item()
    return id2label[pred]

# Examples
print(classify_transaction("Starbucks coffee"))        # Food & Dining
print(classify_transaction("Uber ride"))               # Transportation
print(classify_transaction("Netflix subscription"))    # Entertainment & Recreation
```

### Using with the Hugging Face Inference API

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/finmigodeveloper/gemma-3-270m-transaction-lora"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

def classify_transaction_api(text):
    payload = {"inputs": text}
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()  # surface HTTP errors (e.g. bad token, model still loading)
    return response.json()

# Example
result = classify_transaction_api("Starbucks coffee")
print(result)
```

## 🎯 Training Details

- **Training Samples:** 80,000 (10k per category)
- **Validation Samples:** 20,000
- **Epochs:** 3
- **Batch Size:** 32
- **Gradient Accumulation:** 2 steps
- **Effective Batch Size:** 64
- **Learning Rate:** 2e-4
- **Optimizer:** AdamW
- **Training Time:** 68 minutes on Tesla T4
- **GPU Memory:** 0.60 GB
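The schedule above implies the following optimizer step counts (a back-of-the-envelope check, assuming the 80,000 samples divide evenly into batches):

```python
train_samples = 80_000
batch_size = 32
grad_accum = 2
epochs = 3

# Each optimizer step consumes batch_size * grad_accum samples
effective_batch = batch_size * grad_accum            # 64, as listed above
steps_per_epoch = train_samples // effective_batch   # 1250 optimizer steps
total_steps = steps_per_epoch * epochs               # 3750 over 3 epochs

print(effective_batch, steps_per_epoch, total_steps)  # 64 1250 3750
```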

๐ŸŒ Interactive Demo

Hugging Face Spaces

Try the live demo on Hugging Face Spaces! (Create a Space first)

## 📈 Training Progress

| Epoch | Training Loss | Validation Accuracy |
|-------|---------------|---------------------|
| 1     | 0.0246        | 98.54%              |
| 2     | 0.0217        | 98.45%              |
| 3     | 0.0208        | 98.50%              |

## ⚠️ Limitations

- Shopping & Retail is the weakest category (93.35% accuracy) and could be improved with more diverse training examples
- Best suited for English-language transaction descriptions
- May struggle with very short or ambiguous transaction descriptions
- Trained on transaction descriptions only, not full bank statements

## 🔮 Future Improvements

- Add data augmentation for the Shopping & Retail category
- Support multiple languages
- Add confidence thresholds for uncertain predictions
- Create an ensemble with other models
- Add support for amount and currency context
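The confidence-threshold idea can be sketched independently of the model: softmax the logits and fall back to an "uncertain" label when the top probability is too low. A minimal, framework-free illustration (the threshold value and toy logits are arbitrary choices, not tuned on this model):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_with_threshold(logits, labels, threshold=0.7):
    """Return the top label, or 'Uncertain' if its probability falls below the threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "Uncertain", probs[best]
    return labels[best], probs[best]

# Hypothetical logits for a 3-class toy example
labels = ["Food & Dining", "Transportation", "Income"]
print(classify_with_threshold([4.0, 0.5, 0.2], labels))  # peaked -> Food & Dining
print(classify_with_threshold([1.1, 1.0, 0.9], labels))  # flat -> Uncertain
```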

## 📚 Citation

If you use this model, please cite:

```bibtex
@misc{gemma-3-270m-transaction-lora-2026,
  author = {finmigodeveloper},
  title = {Gemma-3 LoRA for Transaction Classification},
  year = {2026},
  publisher = {Hugging Face},
  journal = {Hugging Face Hub},
  howpublished = {\url{https://huggingface.co/finmigodeveloper/gemma-3-270m-transaction-lora}}
}
```

## 📄 License

This model is licensed under the Gemma license. See the original model page for details.

๐Ÿค Contributing

Feel free to open issues or PRs if you have suggestions for improvements!


*Made with ❤️ using Gemma-3 and LoRA*