
# Qwen3-8B SDF LoRA

This repository hosts the final LoRA adapter weights extracted from `qwen/tinker_impl/models/final_weights.tar.gz` in the `research-sprint-mats-9.0` workspace. The adapters target the base model `Qwen/Qwen3-8B`.

## Files

- `adapter_model.safetensors`: LoRA adapter parameters.
- `adapter_config.json`: PEFT configuration (LoRA rank 32, alpha 32, dropout 0).
- `checkpoint_complete`: sentinel file marking a finished training run.
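The PEFT configuration determines how the adapter is applied at load time; in particular, `lora_alpha / r` sets the scaling on the low-rank update, which is 1.0 here since rank and alpha are both 32. A minimal sketch of reading those fields with the standard library (the JSON excerpt below is illustrative, not the exact contents of the shipped file):

```python
import json

# Illustrative excerpt of adapter_config.json; the real file carries
# additional keys (target_modules, task_type, etc.).
config_text = """
{
  "base_model_name_or_path": "Qwen/Qwen3-8B",
  "peft_type": "LORA",
  "r": 32,
  "lora_alpha": 32,
  "lora_dropout": 0.0
}
"""
config = json.loads(config_text)
scaling = config["lora_alpha"] / config["r"]  # effective LoRA scaling factor
print(scaling)  # 1.0
```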

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "Qwen/Qwen3-8B"
lora_id = "<username>/qwen3-8b-sdf"

# Load the frozen base model, then attach the LoRA adapter on top of it.
model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(model, lora_id)

# The adapter does not change the vocabulary; use the base tokenizer.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
```

Adjust `device_map` and precision flags (`torch_dtype`) as needed for your hardware.
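For intuition, the adapter's effect on each targeted weight matrix follows the standard LoRA update `W_eff = W + (alpha / r) * B @ A`, which is what `PeftModel` applies inside each adapted layer; with rank 32 and alpha 32 the scaling factor is 1. A toy sketch in plain Python (the 2x2 matrices are made-up stand-ins, not real weights):

```python
# Values from adapter_config.json; with alpha == r the scaling factor is 1.0.
r, alpha = 32, 32
scaling = alpha / r

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight (toy 2x2 identity)
B = [[0.5], [0.0]]            # low-rank factor, shape (2, 1)
A = [[0.0, 2.0]]              # low-rank factor, shape (1, 2)

# delta = B @ A, a rank-1 update to the base weight
delta = [[sum(B[i][k] * A[k][j] for k in range(len(A)))
          for j in range(len(A[0]))] for i in range(len(B))]

# W_eff = W + scaling * delta
W_eff = [[W[i][j] + scaling * delta[i][j] for j in range(len(W[0]))]
         for i in range(len(W))]
print(W_eff)  # [[1.0, 1.0], [0.0, 1.0]]
```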
