# Qwen2.5-7B-Instruct Fine-tuned Model

This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct on FOMC (Federal Open Market Committee) and Beige Book data.

## Model Details

- **Base Model:** Qwen/Qwen2.5-7B-Instruct
- **Fine-tuning Method:** LoRA via PEFT
- **Domain:** Finance, Federal Reserve
- **Training Data:** FOMC and Beige Book dataset
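
The exact LoRA hyperparameters for this run aren't published. The low-rank update that LoRA applies to each targeted weight matrix can be sketched in plain NumPy (the dimensions, rank `r`, and `alpha` below are illustrative, not the values used in training):

```python
import numpy as np

# LoRA replaces a full weight update with a low-rank one:
#   W' = W + (alpha / r) * B @ A
# where A (r x d_in) and B (d_out x r) are the only trained parameters.
d_in, d_out, r, alpha = 64, 64, 8, 16  # illustrative sizes, not Qwen2.5's
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # initialized small
B = np.zeros((d_out, r))               # initialized to zero, so the adapter starts as a no-op

delta = (alpha / r) * (B @ A)
W_adapted = W + delta
assert np.allclose(W_adapted, W)  # before training, nothing changes

# After training updates B, the correction still has rank at most r,
# so only (d_in + d_out) * r parameters are stored per adapted matrix.
B = rng.normal(size=(d_out, r))
delta = (alpha / r) * (B @ A)
assert np.linalg.matrix_rank(delta) <= r
```

This is why the adapter repo is small relative to the 7B base model: only the `A`/`B` factors for the targeted layers are saved.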

## Usage
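
The snippet below assumes `torch`, `transformers`, and `peft` are installed (no minimum versions are stated in this card; recent releases of each should work):

```shell
pip install torch transformers peft accelerate
```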

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct", trust_remote_code=True)

# Load LoRA adapter (inherits the base model's dtype and device placement)
model = PeftModel.from_pretrained(base_model, "jaeyoungk/qwen-sft")

# Sample usage
messages = [
    {"role": "user", "content": "What are the key risks to the economic outlook according to FOMC?"}
]

# Apply chat template and tokenize
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=2048,  # cap on generated tokens, not total sequence length
        temperature=0.7,
        do_sample=True,
    )

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```