# 🧠 Phyen: Fine-Tuned Qwen Model for Physics and Engineering Reasoning
**Model ID:** [parani01/phyen](https://huggingface.co/parani01/phyen)
**Base Model:** Qwen (7B or equivalent)
**Type:** Fully fine-tuned and merged model (not LoRA)
**Framework:** PyTorch + Transformers
**Files:** 4 × `.safetensors` shards + tokenizer
---
## 🧩 Model Summary
**Phyen** is a specialized variant of the Qwen model, fine-tuned for **physics**, **engineering**, and **technical scientific reasoning**.
The fine-tuned weights have been merged into the base model to improve performance on domain-specific text such as:
- Thermodynamics
- Fluid mechanics
- Structural analysis
- General physics conceptual reasoning
It retains Qwen's general language ability while prioritizing scientific precision.
---
## 🚀 Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "parani01/phyen"

# Load the tokenizer and the merged model; torch_dtype="auto" picks bfloat16/float16
# from the checkpoint, and device_map="auto" places the weights on available GPUs.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Explain the laws of thermodynamics in simple words."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
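If the fine-tune expects Qwen's chat format (this is not stated on the card, so treat it as an assumption), using the tokenizer's chat template usually works better than a raw prompt. A minimal sketch:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "parani01/phyen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Assumption: the repository ships a Qwen-style chat template with the tokenizer
# and the fine-tune was trained on chat-formatted data.
messages = [
    {"role": "system", "content": "You are a physics and engineering assistant."},
    {"role": "user", "content": "Derive the efficiency of an ideal Carnot engine."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```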
---
## 📘 Intended Use
**Intended for:**
- Physics and engineering question answering (a quick `pipeline` sketch follows this section)
- Scientific writing and conceptual reasoning
- Educational or research assistants
**Not intended for:**
- General conversation
- Legal or medical advice
- Sensitive or factual decision-making outside the training domain
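For quick question-answering experiments within these intended uses, the high-level `pipeline` API is a convenient alternative to calling `generate` directly. A minimal sketch, assuming the same model id as above:

```python
from transformers import pipeline

# Text-generation pipeline; device_map="auto" places the model on available GPUs.
qa = pipeline("text-generation", model="parani01/phyen", device_map="auto")

result = qa(
    "A simply supported beam carries a uniformly distributed load. "
    "Where does the maximum bending moment occur, and why?",
    max_new_tokens=200,
    do_sample=False,  # deterministic output for conceptual questions
)
print(result[0]["generated_text"])
```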
---
## ⚙️ Technical Details
| Field | Description |
|-------|-------------|
| **Architecture** | Qwen-style Transformer |
| **Parameters** | ~7 billion |
| **Precision** | bfloat16 / float16 (auto-detect) |
| **Framework** | PyTorch + safetensors |
| **Tokenizer** | Qwen tokenizer |
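To sanity-check the parameter count and the precision that `torch_dtype="auto"` actually selects on your hardware, a short, purely illustrative sketch:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "parani01/phyen", torch_dtype="auto", device_map="auto"
)

# Total parameter count (~7B expected for a Qwen-7B-class checkpoint).
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e9:.2f}B")

# Precision actually loaded (bfloat16 or float16, depending on checkpoint and GPU).
print(f"dtype: {next(model.parameters()).dtype}")
```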
---
## ⚠️ Limitations
- May hallucinate answers outside scientific domains
- Requires a GPU (≥16 GB VRAM recommended) for efficient inference; a quantized-loading sketch follows this list for smaller GPUs
- Does not guarantee factual correctness in all contexts
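If less than about 16 GB of VRAM is available, 4-bit quantization can reduce the memory footprint at some cost in accuracy. A sketch assuming a CUDA GPU and the `bitsandbytes` package are installed (neither is a requirement stated by this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "parani01/phyen"

# 4-bit NF4 quantization via bitsandbytes, computing in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```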
---
## 🧩 Training Information
- **Base model:** Qwen-7B (open-source base)
- **Fine-tuned on:** a domain-specific corpus of physics and engineering text
- **Merged into:** full model weights (`merged_vlm_physics`)
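The exact fine-tuning and merging pipeline is not documented on this card. One practical consequence of the merge is that the published checkpoint loads like any standalone causal LM; a minimal sketch (the local path is hypothetical):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The fine-tuned weights are already merged, so no PEFT/LoRA adapter step is needed.
model = AutoModelForCausalLM.from_pretrained("parani01/phyen", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("parani01/phyen")

# Optional: keep a local copy in the same sharded .safetensors layout.
# "phyen-local" is a hypothetical path, not part of this repository.
model.save_pretrained("phyen-local", safe_serialization=True)
tokenizer.save_pretrained("phyen-local")
```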
---
## 🏷️ License
Add your license here (for example, Apache-2.0 or MIT).
Ensure you comply with the original Qwen base model's license.
---
## 👨‍💻 Author
Developed and fine-tuned by **Parani Dharan**
Published at: [parani01 on Hugging Face](https://huggingface.co/parani01)
---
## 💬 Citation
If you use this model, please cite it as:
```bibtex
@misc{parani2025phyen,
  title={Phyen: Fine-tuned Qwen Model for Physics and Engineering Reasoning},
  author={Parani Dharan},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/parani01/phyen}
}
```