Qwen3.5-9B-Physics

A LoRA adapter fine-tuned from Qwen/Qwen3.5-9B with parameter-efficient supervised fine-tuning, optimized for physics problem-solving. Trained with LLaMA Factory on the camel_physics dataset.


This repository provides both the lightweight LoRA adapter and the standalone quantized GGUF model for local deployment.


Model Details

  • Base Model: Qwen/Qwen3.5-9B

  • Fine-tuning Method: Supervised Fine-Tuning (SFT) + LoRA

  • Training Framework: LLaMA Factory

  • Training Dataset: camel_ai/physics (5k curated physics question-answer samples)

  • Training Precision: 4-bit quantized training (QLoRA via bitsandbytes, BF16 compute)

  • LoRA Hyperparameters (an illustrative training config sketch follows this list):

    • LoRA Rank: 16

    • LoRA Alpha: 32

    • LoRA Dropout: 0.0

  • Quantized Model: Q4_K_M GGUF (6.4GB)
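
The settings above roughly correspond to a LLaMA Factory SFT configuration like the one sketched below. This is illustrative only, not the actual training file: key names and defaults vary between LLaMA Factory versions, and values such as template, sequence length, batch size, learning rate, and epoch count are assumptions.

model_name_or_path: Qwen/Qwen3.5-9B
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 16
lora_alpha: 32
lora_dropout: 0.0
quantization_bit: 4              # bitsandbytes 4-bit (QLoRA)
bf16: true
dataset: camel_physics
template: qwen                   # chat template name is an assumption
cutoff_len: 2048                 # assumed
per_device_train_batch_size: 2   # assumed
gradient_accumulation_steps: 8   # assumed
learning_rate: 1.0e-4            # assumed
num_train_epochs: 3.0            # assumed
output_dir: saves/qwen3.5-9b-physics-lora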


Model Capabilities

  • Specialized in high school and undergraduate physics problem-solving, formula derivation, and conceptual analysis

  • Preserves the general conversational and reasoning ability of the original Qwen3.5-9B base model

  • Dual deployment support: lightweight LoRA adapter for development and optimized GGUF model for local inference

  • Compatible with Transformers, PEFT, llama.cpp and Ollama


Usage

1. Load LoRA Adapter (For Development)

Combine the LoRA adapter with the official Qwen3.5-9B base model to obtain the full fine-tuned model:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "Qwen/Qwen3.5-9B"
lora_model_id = "Alumin-Hydro/Qwen3.5-9B-Physics"

# Load base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype="auto"
)

# Load physics LoRA adapter
model = PeftModel.from_pretrained(model, lora_model_id)
model.eval()
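
Once the adapter is loaded, inference works like any other chat model in Transformers. The snippet below is a minimal generation sketch; the example question and sampling parameters are illustrative, not prescriptive.

# Example physics question (placeholder prompt)
messages = [
    {"role": "user", "content": "A 2 kg block slides down a frictionless 30-degree incline. Find its acceleration."}
]

# Build the chat-formatted prompt using the base model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Generate an answer (sampling parameters are illustrative)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9
)

# Print only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))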

2. Run Quantized GGUF Model (For Local Deployment)

A standalone Q4_K_M quantized GGUF model is provided for fast local inference; the LoRA weights are already merged in, so no base model or Python environment is required.

Ollama Deployment

ollama create qwen3.5-9b-physics -f Modelfile
ollama run qwen3.5-9b-physics
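
Once created, you can also pass a question directly on the command line; the prompt below is just an example:

ollama run qwen3.5-9b-physics "Derive the period of a simple pendulum for small oscillations."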

llama.cpp Deployment

Directly load qwen3.5-9b-physics-q4_K_M.gguf with llama.cpp or any GGUF-compatible inference framework.
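
For example, with a recent llama.cpp build (binary names have changed across versions; older releases ship main instead of llama-cli, and the prompt shown is only an example):

llama-cli -m qwen3.5-9b-physics-q4_K_M.gguf \
  -p "Explain the difference between elastic and inelastic collisions." \
  -n 512 --temp 0.7

# Or serve an OpenAI-compatible endpoint
llama-server -m qwen3.5-9b-physics-q4_K_M.gguf --port 8080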


File Description

  • adapter_model.safetensors & adapter_config.json: Lightweight LoRA adapter (~169MB)

  • qwen3.5-9b-physics-q4_K_M.gguf: Merged & quantized full model (Q4_K_M, 6.4GB); see the merge sketch after this list

  • Modelfile: Ollama configuration file for creating the model from the quantized GGUF
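
A merged model like the GGUF above can be reproduced by folding the adapter into the base weights and then converting and quantizing with llama.cpp. The sketch below illustrates the merge step only; output paths are placeholders, and the llama.cpp conversion script names may differ between versions.

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and attach the LoRA adapter
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3.5-9B", torch_dtype="auto")
merged = PeftModel.from_pretrained(base, "Alumin-Hydro/Qwen3.5-9B-Physics").merge_and_unload()

# Save the merged full-precision model (placeholder output directory)
merged.save_pretrained("qwen3.5-9b-physics-merged")
AutoTokenizer.from_pretrained("Qwen/Qwen3.5-9B").save_pretrained("qwen3.5-9b-physics-merged")

# Conversion and quantization then happen with llama.cpp,
# e.g. convert_hf_to_gguf.py followed by llama-quantize (names vary by version).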


License

This model is released under the Apache 2.0 license, consistent with the original Qwen3.5-9B base model.
