Model Card: Turkish Finance 7B LoRA Adapter

Summary

This model is a LoRA (Low-Rank Adaptation) adapter for Qwen2.5-7B (4-bit quantized via Unsloth), fine-tuned for Turkish financial language and reasoning. It is intended for use as a conversational assistant in the context of Turkish capital markets (BIST), crypto, and related finance topics when combined with tools (e.g. MCP for live data). The adapter is small and can be loaded on top of the base model for inference.

Disclaimer: This model and any outputs are for informational and educational purposes only. This is not investment advice. Consult a qualified professional before making any financial decisions.


Model Details

Model Description

  • Developed by: Turkish Finance AI Advisor project (see repository).
  • Model type: Decoder-only causal LM with PEFT/LoRA adapter; base is Qwen2.5-7B (4-bit, Unsloth).
  • Language(s): Turkish (primary), English.
  • License: MIT.
  • Finetuned from: unsloth/Qwen2.5-7B-bnb-4bit.

Model Sources


Uses

Direct Use

The adapter is designed to be loaded together with the base model for text generation. Typical use cases include:

  • Answering questions about Turkish finance and markets in Turkish (and English).
  • Explaining financial terms, instruments, and basic reasoning, ideally when paired with up-to-date data tools (e.g. BIST, crypto, and news MCP servers).
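
Because the model's knowledge is frozen at its training cutoff, live figures should be fetched by a tool and injected into the prompt. A minimal sketch of that pattern, with a hypothetical `build_prompt` helper (the data itself would come from a BIST/crypto API or an MCP server, not from the model):

```python
def build_prompt(question: str, live_data: dict) -> str:
    """Prepend freshly fetched market figures to the user's question.

    live_data: hypothetical dict of figures fetched by a tool, e.g. an MCP server.
    """
    context = "\n".join(f"- {k}: {v}" for k, v in live_data.items())
    return f"Güncel veriler:\n{context}\n\nSoru: {question}"

prompt = build_prompt(
    "BIST 100 bugün nasıl seyrediyor?",
    {"BIST 100": "10.450,3 (+1,2%)"},  # illustrative value, not real data
)
print(prompt)
```

The assembled prompt is then sent to the model like any other user message, so the answer is grounded in the injected figures rather than stale training data.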

Out-of-Scope Use

  • Not for trading or investment decisions. The model does not provide personalized or real-time investment advice.
  • Not intended for legal, tax, or regulatory advice.
  • May hallucinate; always verify important facts with authoritative sources.

Bias, Risks, and Limitations

  • Financial content: Outputs can be incorrect or outdated. Never rely on the model alone for financial decisions.
  • Language: Optimized for Turkish (and some English); quality may vary for other languages.
  • Data and recency: Training data has a cutoff; combine with live data (APIs, MCP tools) for current information.
  • Hallucination: As with all LLMs, the model may generate plausible but false statements.

Recommendations

Users should (1) treat all outputs as non-binding information, (2) seek professional advice for real investments, and (3) use the model together with verified data sources and tooling where applicable.


How to Get Started with the Model

Requirements: transformers, peft, accelerate, bitsandbytes (for 4-bit loading), and optionally unsloth for faster inference.

Load adapter on top of base model (PEFT)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_name = "unsloth/Qwen2.5-7B-bnb-4bit"  # or "Qwen/Qwen2.5-7B" for full precision
adapter_id = "ahmet1338/stock_market_wizard"     # your Hugging Face repo

# 4-bit (NF4) quantization config; omit quantization_config for a full-precision base
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Example: use the chat template the adapter was trained with (ChatML-style)
messages = [{"role": "user", "content": "Türkiye'de BIST nedir?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

With Unsloth (faster inference)

from unsloth import FastLanguageModel

# Pointing at the adapter repo lets Unsloth resolve and load the 4-bit base automatically
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ahmet1338/stock_market_wizard",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
# Then use for generation as above.

Training Details

Training Data

The adapter was fine-tuned on the Turkish-Finance-SFT-Dataset (see Citation below).
Training Procedure

  • Method: QLoRA (4-bit base + LoRA) via Unsloth.
  • Base model: unsloth/Qwen2.5-7B-bnb-4bit.
  • LoRA: rank 64, alpha 16; target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj.
  • Chat template: ChatML-style (role/content).
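
The ChatML-style template wraps each role/content turn in `<|im_start|>` / `<|im_end|>` markers. A minimal sketch of the layout (in practice the tokenizer's `apply_chat_template` produces this for you):

```python
def to_chatml(messages):
    # Render role/content dicts in the ChatML layout used by Qwen-family models
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )

print(to_chatml([{"role": "user", "content": "BIST nedir?"}]))
```

For generation, an empty assistant turn header (`<|im_start|>assistant\n`) is appended so the model continues from there; `add_generation_prompt=True` does this when using the tokenizer.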

Training Hyperparameters

Hyperparameter               Value
Epochs                       3
Max sequence length          2048
Batch size (per device)      2
Gradient accumulation steps  8
Learning rate                2e-4
LoRA r                       64
LoRA alpha                   16
Precision                    bf16 (Ampere+) / fp16 (e.g. T4)
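
Note that with gradient accumulation, the optimizer sees an effective batch larger than the per-device batch:

```python
per_device_batch = 2
grad_accum_steps = 8

# Gradients are accumulated over 8 forward/backward passes before each optimizer step
effective_batch = per_device_batch * grad_accum_steps
print(effective_batch)  # 16 per device; multiply by GPU count for multi-GPU runs
```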

Training can be reproduced with the project’s training script (e.g. training/colab_train.py) and the same dataset and hyperparameters.


Evaluation

No formal benchmark evaluation is reported. The model is intended for qualitative use as a Turkish finance-oriented assistant; users should validate outputs for their own use cases.


Technical Specifications

  • Architecture: Same as Qwen2.5-7B with an additive LoRA adapter; base is 4-bit quantized (BNB).
  • Framework: PEFT 0.18.1; training with Unsloth, TRL SFTTrainer, Transformers.
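
"Additive" means each targeted weight matrix W is used as W + (alpha/r)·B·A at inference, where B·A is the trained low-rank update. A shape-level sketch with the rank-64 settings above (the 3584 hidden size is assumed here for Qwen2.5-7B; actual per-module shapes vary):

```python
import numpy as np

d, r, alpha = 3584, 64, 16        # hidden size (assumed), LoRA rank, LoRA alpha
A = np.random.randn(r, d) * 0.01  # A starts small and random
B = np.zeros((d, r))              # B starts at zero, so the adapter is initially a no-op
scaling = alpha / r               # 16 / 64 = 0.25

delta_W = scaling * (B @ A)       # low-rank update added to the frozen weight W
assert delta_W.shape == (d, d)

# Adapter parameters per square matrix vs. the full matrix it modifies
lora_params = d * r + r * d
full_params = d * d
print(round(lora_params / full_params, 4))  # ~3.6% of the full matrix's parameters
```

This is why the adapter download stays small: only the A and B factors (plus config) are stored, never the full 7B weights.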

Citation

If you use this adapter or the project in your work, please cite the base model (Qwen2.5) and the dataset (Turkish-Finance-SFT-Dataset) as appropriate. Example:

Qwen2.5:

@article{qwen2.5,
  title={Qwen2.5},
  author={Qwen Team},
  year={2024},
}

Model Card Contact

For issues related to this model card or the adapter, please open an issue in the project repository. This model is not investment advice; use at your own risk and always seek professional guidance for financial decisions.
