qwen-25-14b-instruct-quantum-physics

This model is a fine-tuned version of Qwen/Qwen2.5-14B-Instruct using LoRA (Low-Rank Adaptation) on a quantum physics dataset.

Evaluation

Metric               Base Model   Fine-Tuned (SFT)   Fine-Tuned (latest)
Overall Accuracy     24.0%        41.4%              53.7%
Factual Accuracy     —            —                  55.0
Completeness         —            —                  51.0
Technical Precision  —            —                  54.3

Evaluated on BoltzmannEntropy/QuantumLLMInstruct with RAG-augmented judging (Semantic Scholar, 5 papers per question).

Available Formats

  • GGUF (Q4_K_M): qwen-25-14b-quantum-physics-q4_k_m.gguf β€” 8.4 GB, quantized for efficient inference
  • GGUF (FP16): _temp_merged_qwen-25-14b-instruct-14b-quantum-physics-20260125-007.fp16.gguf β€” full precision

Usage

Using GGUF (with llama.cpp, Ollama, LM Studio, etc.)

# Download the quantized GGUF
huggingface-cli download Kylan12/qwen-25-14b-instruct-quantum-physics qwen-25-14b-quantum-physics-q4_k_m.gguf

# Use with llama.cpp
./llama.cpp/build/bin/llama-cli -m qwen-25-14b-quantum-physics-q4_k_m.gguf -p "Your prompt here"
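The same GGUF also works with Ollama via a Modelfile. A minimal sketch, assuming Ollama is installed and the quantized file sits in the current directory (the local model name `qwen-quantum` is an arbitrary choice, not part of this release):

```shell
# Minimal Ollama Modelfile pointing at the downloaded GGUF
echo 'FROM ./qwen-25-14b-quantum-physics-q4_k_m.gguf' > Modelfile
ollama create qwen-quantum -f Modelfile
ollama run qwen-quantum "Your prompt here"
```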

Using HuggingFace Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Kylan12/qwen-25-14b-instruct-quantum-physics",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Kylan12/qwen-25-14b-instruct-quantum-physics")

prompt = "Calculate the expectation value of the Pauli Z operator for a qubit in the state |+⟩"
# Format the prompt with the Qwen chat template, since this is an instruct model
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# max_new_tokens bounds only the generated continuation (max_length would count the prompt too)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
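For reference, the example prompt has a closed-form answer: for |+⟩ = (|0⟩ + |1⟩)/√2, the expectation value ⟨+|Z|+⟩ is 0. A quick NumPy check against which the model's output can be compared:

```python
import numpy as np

# Pauli Z operator and the |+> state (equal superposition of |0> and |1>)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Expectation value <+|Z|+>: the |0> and |1> contributions cancel exactly
expectation = np.real(plus.conj() @ Z @ plus)
print(expectation)  # 0.0
```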

Training Details

  • Base Model: Qwen/Qwen2.5-14B-Instruct
  • Training Method: LoRA (Low-Rank Adaptation)
  • Quantization: 4-bit NF4 via bitsandbytes
  • LoRA Rank: 16
  • LoRA Alpha: 16
  • Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
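The hyperparameters above map directly onto a PEFT `LoraConfig`. A configuration sketch, assuming the `peft` library (trainer setup, dataset handling, and the 4-bit NF4 quantization config are omitted):

```python
from peft import LoraConfig

# LoRA hyperparameters as listed above; effective scaling = alpha / rank = 1.0
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```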

Limitations

This model inherits the limitations of the base Qwen2.5-14B-Instruct model and may have additional domain-specific limitations due to the fine-tuning dataset.

License

This model is released under the Apache 2.0 license, consistent with the base Qwen model.
