---
license: apache-2.0
base_model: Qwen/Qwen3-8B
tags:
  - physics
  - education
  - qwen3
  - fine-tuned
language:
  - en
pipeline_tag: text-generation
---

# Qwen3-8B Fine-tuned for Physics

This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) trained on physics question-answering tasks.

## Model Details

- **Base Model:** Qwen/Qwen3-8B
- **Fine-tuning Method:** LoRA (low-rank adaptation)
- **Dataset:** Physics Q&A (20,000 samples)
- **Training:** [Your training details]
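LoRA keeps the base weights frozen and learns a small low-rank update: the effective weight becomes `W + B @ A`, where `A` and `B` have rank `r` much smaller than the model's hidden size. A toy pure-Python sketch of that idea (illustration only, not the actual training code):

```python
# Toy illustration of the LoRA update: instead of modifying the full weight
# matrix W, training learns two small factors A (r x d_in) and B (d_out x r)
# whose product B @ A is a rank-r correction added to W at inference time.
def lora_delta(A, B):
    """Compute the low-rank update B @ A as nested lists (pure Python)."""
    r = len(A)        # rank of the update
    d_in = len(A[0])  # input dimension
    d_out = len(B)    # output dimension
    return [
        [sum(B[i][k] * A[k][j] for k in range(r)) for j in range(d_in)]
        for i in range(d_out)
    ]

A = [[1.0, 0.0, 2.0]]   # rank-1 factor, shape (1, 3)
B = [[0.5], [1.0]]      # shape (2, 1)
delta = lora_delta(A, B)  # shape (2, 3) update that would be added to W
```

Because only `A` and `B` are trained, the number of trainable parameters is a small fraction of the full model, which is what makes LoRA fine-tuning of an 8B model practical on modest hardware.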

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("your-username/qwen3-8b-physics")
model = AutoModelForCausalLM.from_pretrained("your-username/qwen3-8b-physics")

# Example usage
prompt = "Solve this quantum mechanics problem:"
inputs = tokenizer(prompt, return_tensors="pt")
# Cap the number of newly generated tokens rather than the total sequence length
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
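Since Qwen3 is a chat-tuned model, raw prompts usually work better when wrapped in its ChatML-style chat template; in practice `tokenizer.apply_chat_template` does this for you. The helper below is a minimal sketch of the format that template produces for Qwen-family models (the exact template shipped with the tokenizer is authoritative):

```python
# Sketch of the ChatML-style prompt format used by Qwen-family chat models.
# In real code, prefer tokenizer.apply_chat_template, which applies the
# template bundled with the model repository.
def build_chatml_prompt(user_message: str) -> str:
    """Wrap a single user message in a ChatML-style prompt, ending with an
    open assistant turn so the model continues as the assistant."""
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("Explain the photoelectric effect in two sentences.")

# Equivalent via the tokenizer (preferred in practice):
# messages = [{"role": "user", "content": "Explain the photoelectric effect in two sentences."}]
# text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```

Passing `add_generation_prompt=True` appends the opening assistant tag, matching the trailing `<|im_start|>assistant` line in the sketch above.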