qwen-25-14b-instruct-quantum-physics

This model is a LoRA (Low-Rank Adaptation) fine-tune of Qwen/Qwen2.5-14B-Instruct on a quantum physics dataset, trained with standard Supervised Fine-Tuning (SFT). It scores 41.39% on a quantum physics test set, up from 24.0% for the base Qwen2.5-14B-Instruct model.

Model Description

Fine-tuned Qwen2.5-14B model for quantum physics domain tasks.

Available Formats

  • GGUF: _temp_merged_qwen-25-14b-instruct-14b-quantum-physics-20260125-007.fp16.gguf - FP16 format for inference with llama.cpp

Usage

Using GGUF (with llama.cpp, Ollama, LM Studio, etc.)

# Download the GGUF file
huggingface-cli download Kylan12/qwen-25-14b-instruct-quantum-physics _temp_merged_qwen-25-14b-instruct-14b-quantum-physics-20260125-007.fp16.gguf

# Use with llama.cpp
./llama.cpp/build/bin/llama-cli -m _temp_merged_qwen-25-14b-instruct-14b-quantum-physics-20260125-007.fp16.gguf -p "Your prompt here"
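The header above also lists Ollama; one way to import the downloaded GGUF there is via a Modelfile. A minimal sketch (it assumes the download step above ran in the current directory):

```
FROM ./_temp_merged_qwen-25-14b-instruct-14b-quantum-physics-20260125-007.fp16.gguf
```

After saving this as `Modelfile`, `ollama create <name> -f Modelfile` registers the model locally and `ollama run <name>` starts an interactive session, where `<name>` is any label you choose.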

Using HuggingFace Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "Kylan12/qwen-25-14b-instruct-quantum-physics",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Kylan12/qwen-25-14b-instruct-quantum-physics")

# This is an instruction-tuned model, so format the prompt with the chat template
prompt = "Calculate the expectation value of the Pauli Z operator for a qubit in the state |+⟩"
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Training Details

  • Base Model: Qwen/Qwen2.5-14B-Instruct
  • Training Method: standard Supervised Fine-Tuning (SFT) with LoRA (Low-Rank Adaptation)
  • LoRA Rank: 16
  • LoRA Alpha: 16
  • Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
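To illustrate what the rank and alpha hyperparameters above control, here is a toy NumPy sketch of the LoRA parametrization (not the actual training code; the hidden size is an illustrative value):

```python
import numpy as np

# LoRA perturbs a frozen weight W with a rank-r update scaled by alpha / r.
d, r, alpha = 64, 16, 16                # hidden size (toy value), LoRA rank, LoRA alpha

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))         # frozen base weight
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

W_eff = W + (alpha / r) * (B @ A)       # effective weight at inference

# With alpha == r the scale is 1, and the zero-initialized B makes the
# adapter an exact no-op before any training updates.
assert np.allclose(W_eff, W)
```

Because rank and alpha are both 16 here, the update is applied at scale 1; only the seven listed projection matrices receive such adapters, the rest of the network stays frozen.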

Evaluation

Metric     Base Model    Fine-Tuned
Overall    24.0%         41.39%
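In absolute terms the fine-tune gains about 17.4 percentage points over the base model, roughly a 72% relative improvement; a quick check:

```python
base, tuned = 24.0, 41.39
gain = tuned - base               # absolute gain in percentage points
relative = gain / base * 100      # relative improvement over the base score
print(round(gain, 2), round(relative, 1))   # → 17.39 72.5
```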

Limitations

This model inherits the limitations of the base Qwen2.5-14B-Instruct model and may have additional domain-specific limitations due to the fine-tuning dataset.

Citation

If you use this model, please cite:

@misc{qwen_25_14b_instruct_quantum_physics,
  author = {Kylan12},
  title = {qwen-25-14b-instruct-quantum-physics},
  year = {2025},
  publisher = {HuggingFace},
  url = {https://huggingface.co/Kylan12/qwen-25-14b-instruct-quantum-physics}
}

License

This model is released under the Apache 2.0 license, consistent with the base Qwen model.
