# Riksbanken Mistral LoRA

Swedish LoRA adapters for Mistral-7B-Instruct, fine-tuned on monetary policy reports from Riksbanken (the Swedish central bank).
## Model Description

This model is a LoRA (Low-Rank Adaptation) fine-tune of `mistralai/Mistral-7B-Instruct-v0.3`, trained on synthetic Q&A pairs generated from Riksbanken's monetary policy reports (2022-2025).
## Training Data

- Dataset: `tomdickson/riksbanken-qa`
- Examples: ~5,000 Swedish Q&A pairs
- Topics: Monetary policy, inflation, interest rates (reporäntan), economic forecasts
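To inspect the data before fine-tuning or evaluation, the dataset can be pulled straight from the Hub with the `datasets` library. A minimal sketch, assuming the default `train` split; the actual split and column names are not stated on this card, so check the dataset card:

```python
from datasets import load_dataset

# Pull the Q&A pairs from the Hugging Face Hub
# (assumes a "train" split; see the dataset card for the actual splits)
dataset = load_dataset("tomdickson/riksbanken-qa", split="train")

print(dataset.num_rows)  # should be on the order of ~5,000
print(dataset[0])        # inspect one Swedish Q&A pair
```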
## Training Configuration

- LoRA rank: 16
- LoRA alpha: 16
- Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- Epochs: 1
- Learning rate: 2e-4
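For reference, a minimal sketch of how these hyperparameters map onto a PEFT `LoraConfig` and `transformers` training arguments. Only the values listed above come from this card; everything else (dropout, bias setting, batch size, output directory) is an illustrative assumption:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA configuration matching the values listed above
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.05,   # assumption: dropout is not stated on this card
    bias="none",         # assumption
    task_type="CAUSAL_LM",
)

# Epochs and learning rate from the card; batch size is an assumption
training_args = TrainingArguments(
    output_dir="riksbanken-mistral-lora",
    num_train_epochs=1,
    learning_rate=2e-4,
    per_device_train_batch_size=4,  # assumption
)
```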
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the LoRA adapters
model = PeftModel.from_pretrained(base_model, "tomdickson/riksbanken-mistral-lora")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")

# Generate a Swedish answer ("Vad är reporäntan?" = "What is the repo rate?")
messages = [{"role": "user", "content": "Vad är reporäntan?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
outputs = model.generate(inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
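If you want a standalone checkpoint without the PEFT dependency (e.g. for serving), the adapters can be folded into the base weights with PEFT's standard `merge_and_unload`; the output path below is just an example:

```python
# Fold the LoRA weights into the base model and save a standalone checkpoint
merged = model.merge_and_unload()
merged.save_pretrained("riksbanken-mistral-merged")    # example path
tokenizer.save_pretrained("riksbanken-mistral-merged")
```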
## Demo
Try the model at: https://swesovereignai.web.app
## Training
See the Finetuning LLMs project for training code.
## License
Apache 2.0