---
language:
  - en
  - zh
tags:
  - llama
  - finance
  - lora
  - instruction-tuning
license: apache-2.0
---

# Llama for Finance

A Llama-3.1 model instruction-tuned for the financial domain with LoRA on the Finance-Instruct-500k dataset.

## Model Details

- **Base Model:** meta-llama/Meta-Llama-3.1-8B-Instruct
- **Training:** LoRA fine-tuning
- **Domain:** Finance, economics, investment
- **Languages:** English, Chinese
- **Context Length:** 4096 tokens
- **Training Data:** Finance-Instruct-500k
- **Evaluation:** Held-out test set + FinanceBench

## Usage

This repository contains only a LoRA adapter, not full model weights. You need access to the base model meta-llama/Meta-Llama-3.1-8B-Instruct on the Hugging Face Hub.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model first, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "TimberGu/Llama_for_Finance")
tokenizer = AutoTokenizer.from_pretrained("TimberGu/Llama_for_Finance")
```
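For inference, prompts should follow the Llama-3.1 chat template, which `tokenizer.apply_chat_template` produces automatically. As a self-contained illustration of that layout (the helper function is hypothetical, and the assumption is that the adapter was trained on this standard format):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama-3-style chat prompt.

    Mirrors the special-token layout of the Meta-Llama-3.1 Instruct
    chat template; shown for illustration only — in practice, prefer
    tokenizer.apply_chat_template.
    """
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a financial analysis assistant.",
    "What does EBITDA measure?",
)
print(prompt)
```

The prompt ends with an open `assistant` header so the model's generation continues as the assistant turn.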

## Evaluation

See Evaluation.ipynb for:

- BLEU, ROUGE, and perplexity on the held-out test set
- FinanceBench benchmark results
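For reference, perplexity is the exponentiated mean per-token negative log-likelihood. A minimal sketch of the computation (the helper name is hypothetical, not taken from the notebook):

```python
import math

def perplexity(token_nlls):
    """Corpus perplexity from per-token negative log-likelihoods (in nats)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Sanity check: a uniform NLL of ln(50) per token yields perplexity ~50.
print(perplexity([math.log(50)] * 4))
```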