# 🧠 Phyen — Fine-Tuned Qwen Model for Physics and Engineering Reasoning

**Model ID:** [parani01/phyen](https://huggingface.co/parani01/phyen)
**Base Model:** Qwen (7B or equivalent)
**Type:** Fully fine-tuned and merged model (not LoRA)
**Framework:** PyTorch + Transformers
**Files:** 4 × `.safetensors` shards + tokenizer

---

## 🧩 Model Summary
**Phyen** is a specialized variant of the Qwen model, fine-tuned for **physics**, **engineering**, and **technical scientific reasoning**.
It has been trained and merged to perform better on domain-specific text such as:
- Thermodynamics
- Fluid mechanics
- Structural analysis
- General physics conceptual reasoning

It retains general Qwen language ability but prioritizes scientific precision.

---

## 🚀 Usage Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "parani01/phyen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain the laws of thermodynamics in simple words."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

---

## 📘 Intended Use
**Intended for:**
- Physics and engineering question answering
- Scientific writing and conceptual reasoning
- Educational or research assistants

**Not intended for:**
- General conversation
- Legal or medical advice
- Sensitive or factual decision-making outside the training domain

---

## ⚙️ Technical Details

| Field | Description |
|-------|-------------|
| **Architecture** | Qwen-style Transformer |
| **Parameters** | ~7 billion |
| **Precision** | bfloat16 / float16 (auto-detect) |
| **Framework** | PyTorch + safetensors |
| **Tokenizer** | Qwen tokenizer |
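
The parameter, precision, and shard figures above can be sanity-checked with simple arithmetic. This is a rough sketch: the exact parameter count of this checkpoint is not published here, so ~7e9 parameters and an even split across the 4 `.safetensors` shards are assumptions, not measured values.

```python
# Back-of-the-envelope weight footprint for the merged checkpoint.
# ASSUMPTIONS: ~7e9 parameters, weights split evenly across 4 shards.
PARAM_COUNT = 7_000_000_000
BYTES_PER_PARAM = {"bfloat16": 2, "float16": 2, "float32": 4}
NUM_SHARDS = 4

def footprint_gb(dtype: str, params: int = PARAM_COUNT) -> float:
    """Approximate size of the raw weights in gigabytes (1 GB = 1e9 bytes)."""
    return params * BYTES_PER_PARAM[dtype] / 1e9

for dtype in ("bfloat16", "float16", "float32"):
    total = footprint_gb(dtype)
    print(f"{dtype}: ~{total:.0f} GB total, ~{total / NUM_SHARDS:.1f} GB per shard")
```

Under these assumptions, the bfloat16 weights alone come to roughly 14 GB (~3.5 GB per shard), which lines up with the ≥16 GB VRAM recommendation in the Limitations section.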

---

## ⚠️ Limitations
- May hallucinate answers outside scientific domains
- Requires a GPU (≥16 GB VRAM recommended) for efficient inference
- Does not guarantee factual correctness in all contexts
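
The VRAM recommendation is about more than the weights: the KV cache grows linearly with generated length. The sketch below illustrates that growth. The layer/head geometry used here is a typical 7B-class configuration with full multi-head attention, assumed for illustration only and not a published spec of this checkpoint.

```python
# Rough KV-cache growth per sequence length, to show why long generations
# need VRAM headroom beyond the weights themselves.
# ASSUMPTIONS: 32 layers, 32 KV heads of dimension 128, bfloat16 cache.
NUM_LAYERS = 32
NUM_KV_HEADS = 32
HEAD_DIM = 128
BYTES_PER_VALUE = 2  # bfloat16

def kv_cache_gb(seq_len: int, batch_size: int = 1) -> float:
    """KV cache size in GB: 2 tensors (K and V) per layer per cached token."""
    values = 2 * NUM_LAYERS * NUM_KV_HEADS * HEAD_DIM * seq_len * batch_size
    return values * BYTES_PER_VALUE / 1e9

for seq_len in (2_048, 8_192):
    print(f"{seq_len} tokens: ~{kv_cache_gb(seq_len):.2f} GB of KV cache")
```

Under these assumptions a 2k-token context adds about 1 GB of cache per sequence, so batched or long-context inference eats into a 16 GB budget quickly.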

---

## 🧩 Training Information
- **Base Model:** Qwen-7B (open-source base)
- **Fine-tuned with:** domain-specific corpus of physics and engineering text
- **Merged into:** full model weights (`merged_vlm_physics`)

---

## 🏷️ License
Add your license here (for example, Apache-2.0 or MIT).
Ensure you comply with the original Qwen base model’s license.

---

## 👨‍💻 Author
Developed and fine-tuned by **Parani Dharan**
Published at: [Hugging Face — parani01](https://huggingface.co/parani01)

---

## 💬 Citation

If you use this model, please cite it as:

```bibtex
@misc{parani2025phyen,
  title={Phyen: Fine-tuned Qwen Model for Physics and Engineering Reasoning},
  author={Parani Dharan},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/parani01/phyen}
}
```