# Llama 3.2 1B – Financial Sentiment Score (MLX LoRA)
This repo contains LoRA adapters fine-tuned on Apple Silicon (MacBook Pro M4 Pro, 24GB unified memory) using MLX-LM.
The model predicts a continuous sentiment score in [-1, 1] from financial/news text.
## Task
Given a news snippet, output a single float between -1 and 1 (inclusive), with no additional text.
## Dataset
- Dataset: MrPathak29/news-sentiment-score-prompt-v5
- Format: prompt → numeric completion (float in [-1, 1])
- Train subset used locally: 100k samples
- Avg prompt length: ~100 tokens, max_seq_length=128
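Prompt/completion pairs in this layout can be serialized to the JSONL files that `mlx_lm.lora` consumes. A minimal sketch (the prompt template mirrors the inference example below; the `{"prompt": ..., "completion": ...}` field names follow MLX-LM's data format, and the file name is illustrative):

```python
import json

def to_jsonl_record(news: str, score: float) -> str:
    """Format one training example as a prompt/completion JSONL line."""
    prompt = (
        "You are a financial sentiment scorer.\n"
        "Return a single number between -1 and 1 (inclusive). No words.\n\n"
        f"News:\n{news}\n\nScore:\n"
    )
    # The completion is the bare float, matching the "no additional text" target.
    return json.dumps({"prompt": prompt, "completion": f"{score:.4f}"})

# Example: write a tiny train.jsonl
samples = [("Stocks surged after strong quarterly earnings.", 0.82)]
with open("train.jsonl", "w") as f:
    for news, score in samples:
        f.write(to_jsonl_record(news, score) + "\n")
```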
## Training
- Framework: MLX-LM (`mlx_lm.lora`)
- Fine-tune type: LoRA adapters
- max_seq_length: 128
- batch_size: 8
- iters: ~12,500 (≈ 1 epoch over 100k rows)
- num_layers: 16
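With those hyperparameters, the training run looks roughly like the command below (a sketch: flag names follow the `mlx_lm.lora` CLI, and the data directory and adapter path are assumptions).

```shell
# ./data is assumed to contain train.jsonl (and optionally valid.jsonl)
mlx_lm.lora \
  --model meta-llama/Llama-3.2-1B-Instruct \
  --train \
  --data ./data \
  --fine-tune-type lora \
  --num-layers 16 \
  --batch-size 8 \
  --iters 12500 \
  --max-seq-length 128 \
  --adapter-path ./adapters_sentiment_1b_v5
```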
## Evaluation (500 test samples)
Metrics: MAE and RMSE over float predictions; coverage (the fraction of outputs parseable as a float) was 100% for both models.
| Model | Coverage | MAE | RMSE |
|---|---|---|---|
| Base (meta-llama/Llama-3.2-1B-Instruct) | 1.0 | 0.2999 | 0.3789 |
| Fine-tuned (this adapter) | 1.0 | 0.1092 | 0.1771 |
Relative improvement over the base model:
- MAE ↓ ~64%
- RMSE ↓ ~53%
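The MAE/RMSE figures above can be reproduced with a few lines over paired predictions and targets (a sketch; the variable names and sample values are illustrative):

```python
import math

def mae_rmse(preds, targets):
    """Mean absolute error and root-mean-square error over float predictions."""
    errs = [p - t for p, t in zip(preds, targets)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse

mae, rmse = mae_rmse([0.10, -0.30, 0.55], [0.12, -0.20, 0.50])
```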
## Inference (MLX)
```python
from mlx_lm import load, generate

BASE_MODEL = "meta-llama/Llama-3.2-1B-Instruct"
ADAPTER_PATH = "./adapters_sentiment_1b_v5"  # or downloaded repo folder

# Load the base model with the LoRA adapters applied.
model, tokenizer = load(BASE_MODEL, adapter_path=ADAPTER_PATH)

news = "Stocks surged after strong quarterly earnings and upbeat guidance."
prompt = (
    "You are a financial sentiment scorer.\n"
    "Return a single number between -1 and 1 (inclusive). No words.\n\n"
    f"News:\n{news}\n\n"
    "Score:\n"
)

messages = [{"role": "user", "content": prompt}]
prompt_chat = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

out = generate(model, tokenizer, prompt=prompt_chat, max_tokens=16)
print(out)  # e.g. "0.1432"
```
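Because the model emits plain text, a small parsing helper keeps downstream code robust even if stray tokens appear. A sketch (the clamp and fallback behavior are choices made here, not part of the released eval code):

```python
import re

def parse_score(text: str, fallback: float = 0.0) -> float:
    """Extract the first float from generated text and clamp it to [-1, 1]."""
    m = re.search(r"-?\d+(?:\.\d+)?", text)
    if m is None:
        return fallback  # count this as a miss if you track coverage
    return max(-1.0, min(1.0, float(m.group())))

parse_score("Score: 0.1432")  # 0.1432
parse_score("2.5")            # clamped to 1.0
```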
## Model tree for MrPathak29/llama32-1b-sentiment-mlx-lora-v5

- Base model: meta-llama/Llama-3.2-1B-Instruct