# qwen-quantum
This model is a fine-tuned version of Qwen/Qwen2.5-14B-Instruct using LoRA (Low-Rank Adaptation) on a chemistry dataset.
## Model Description

Fine-tuned Qwen2.5-14B-Instruct model for chemistry-domain tasks.
## Available Formats

- **GGUF:** `qwen_quantum_merged-q4_k_m.gguf` (Q4_K_M), quantized for efficient inference with llama.cpp
## Usage

### Using GGUF (with llama.cpp, Ollama, LM Studio, etc.)

```bash
# Download the GGUF file
huggingface-cli download Kylan12/qwen-quantum qwen_quantum_merged-q4_k_m.gguf

# Run with llama.cpp
./llama.cpp/build/bin/llama-cli -m qwen_quantum_merged-q4_k_m.gguf -p "Your prompt here"
```
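The same GGUF file can also be driven from Python via the llama-cpp-python bindings. This is a minimal sketch, assuming `llama-cpp-python` is installed and the file has been downloaded as above; the prompt is only an example.

```python
from llama_cpp import Llama

# Load the Q4_K_M quantized model (n_ctx sets the context window)
llm = Llama(model_path="qwen_quantum_merged-q4_k_m.gguf", n_ctx=4096)

# Chat-style completion using the chat template embedded in the GGUF metadata
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the IUPAC name for CH3COOH?"}],
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```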
### Using HuggingFace Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Kylan12/qwen-quantum", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Kylan12/qwen-quantum")

# Qwen2.5 is instruction-tuned, so format the prompt with the chat template
messages = [{"role": "user", "content": "What is the IUPAC name for..."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
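If GPU memory is tight, the full-precision checkpoint can instead be loaded in 4-bit on the fly. This is a minimal sketch, assuming `bitsandbytes` and `accelerate` are installed; it is an alternative to the GGUF file above, not something this repository provides.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 4-bit quantization via bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Kylan12/qwen-quantum",
    quantization_config=bnb_config,
    device_map="auto",
)
```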
## Training Details
- Base Model: Qwen/Qwen2.5-14B-Instruct
- Training Method: LoRA (Low-Rank Adaptation)
- Dataset: camel-ai/chemistry
- LoRA Rank: 16
- LoRA Alpha: 16
- Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
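For reference, these hyperparameters correspond to a `peft` LoraConfig along the following lines. This is a reconstruction for illustration, assuming the Hugging Face `peft` library was used; the actual training script is not part of this repository.

```python
from peft import LoraConfig

# LoRA configuration matching the hyperparameters listed above
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```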
## Limitations

This model inherits the limitations of the base Qwen2.5-14B-Instruct model. Because it was fine-tuned on the camel-ai/chemistry dataset, it may also have additional domain-specific limitations, and its behavior on tasks outside chemistry may differ from the base model.
## Citation
If you use this model, please cite:
```bibtex
@misc{qwen_quantum,
  author    = {Your Name},
  title     = {qwen-quantum},
  year      = {2025},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/Kylan12/qwen-quantum}
}
```
## License
This model is released under the Apache 2.0 license, consistent with the base Qwen model.