
# 🧠 Phyen — Fine-Tuned Qwen Model for Physics and Engineering Reasoning

**Model ID:** [parani01/phyen](https://huggingface.co/parani01/phyen)  
**Base Model:** Qwen-7B  
**Type:** Fully fine-tuned and merged model (not LoRA)  
**Framework:** PyTorch + Transformers  
**Files:** 4 × `.safetensors` shards + tokenizer  

---

## 🧩 Model Summary
**Phyen** is a specialized variant of the Qwen model, fine-tuned for **physics**, **engineering**, and **technical scientific reasoning**.  
It was fine-tuned and merged to improve performance on domain-specific text such as:
- Thermodynamics  
- Fluid mechanics  
- Structural analysis  
- General physics conceptual reasoning  

It retains general Qwen language ability but prioritizes scientific precision.

---

## 🚀 Usage Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "parani01/phyen"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# torch_dtype="auto" loads the weights in their stored precision
# (bfloat16/float16); device_map="auto" places them on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

prompt = "Explain the laws of thermodynamics in simple words."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

---

## 📘 Intended Use
**Intended for:**
- Physics and engineering question answering  
- Scientific writing and conceptual reasoning  
- Educational or research assistants  

**Not intended for:**
- General conversation  
- Legal or medical advice  
- Sensitive or factual decision-making outside the training domain  
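For the question-answering use case, a thin prompt wrapper can help steer the model toward step-by-step scientific answers. The helper below is purely illustrative (Phyen documents no special prompt format, so a plain question works just as well), and the instruction wording is an assumption:

```python
def physics_qa_prompt(question: str, audience: str = "an undergraduate student") -> str:
    """Wrap a raw question in a short physics/engineering QA instruction.

    Hypothetical helper: Phyen has no documented prompt template, so this
    merely frames the question and asks for explicit reasoning steps.
    """
    return (
        f"You are a physics and engineering tutor. Answer for {audience}, "
        f"showing the key reasoning steps.\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = physics_qa_prompt("Why does a Carnot engine set the efficiency limit?")
```

The resulting string can be passed to the tokenizer exactly as in the usage example above.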

---

## ⚙️ Technical Details

| Field | Description |
|-------|--------------|
| **Architecture** | Qwen-style Transformer |
| **Parameters** | ~7 billion |
| **Precision** | bfloat16 / float16 (auto-detect) |
| **Framework** | PyTorch + safetensors |
| **Tokenizer** | Qwen tokenizer |

---

## ⚠️ Limitations
- May hallucinate answers outside scientific domains  
- Requires GPU (≥16GB VRAM recommended) for efficient inference  
- Does not guarantee factual correctness in all contexts  
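The 16 GB figure can be sanity-checked from the parameter count alone. A rough back-of-envelope estimate (assuming ~7B parameters stored at 2 bytes each in bfloat16/float16, and ignoring the KV cache and activations):

```python
def estimate_weight_vram_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate GiB needed just to hold the model weights in memory.

    Excludes KV cache, activations, and framework overhead, so actual
    usage during generation is higher.
    """
    return n_params * bytes_per_param / 2**30

# ~7B parameters at 2 bytes each is about 13 GiB for weights alone,
# which is why >=16 GB of VRAM is recommended once runtime overhead
# is added on top.
print(f"{estimate_weight_vram_gib(7e9):.1f} GiB")  # → 13.0 GiB
```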

---

## 🧩 Training Information
- **Base Model:** Qwen-7B (open-source base)  
- **Fine-tuned on:** a domain-specific corpus of physics and engineering text  
- **Merged into:** full model weights (`merged_vlm_physics`)  

---

## 🏷️ License
No license has been specified for this model yet.  
Any use must also comply with the license of the original Qwen base model.

---

## 👨‍💻 Author
Developed and fine-tuned by **Parani Dharan**  
Published at: [Hugging Face — parani01](https://huggingface.co/parani01)

---

## 💬 Citation

If you use this model, please cite it as:

```bibtex
@misc{parani2025phyen,
  title        = {Phyen: Fine-Tuned Qwen Model for Physics and Engineering Reasoning},
  author       = {Parani Dharan},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/parani01/phyen}}
}
```