Update README.md
README.md CHANGED
@@ -11,7 +11,7 @@ A Vision Transformer model trained on multi-physics simulation data for scientif
 - **Developed by:** PhysicsAlchemists Research Team
 - **Model type:** Vision Transformer (ViT-Huge)
 - **Language(s):** N/A (Computer Vision)
-- **License:**
+- **License:** MIT License
 - **Finetuned from model:** Trained from scratch on physics simulation data
 - **Training Steps:** 195,930 steps
 
@@ -70,13 +70,13 @@ from PIL import Image
 import torch
 
 # Load model and processor
-model = AutoModel.from_pretrained("
-processor = AutoImageProcessor.from_pretrained("
+model = AutoModel.from_pretrained("JessicaE/physics-vit-full")
+processor = AutoImageProcessor.from_pretrained("JessicaE/physics-vit-full")
 
 # Load your physics image
 image = Image.open("physics_simulation.png").convert('RGB')
 
-#
+# Apply custom preprocessing
 image = expand_to_square(image, background_color=(128, 128, 128))
 image = image.resize((224, 224), Image.BILINEAR)
 
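The usage snippet in the hunk above calls `expand_to_square` before the bilinear resize, but the README excerpt never defines that helper. A minimal sketch of what it likely does, assuming (the name and the `background_color` argument suggest, though the model card does not confirm) that it pads the image to a square canvas filled with the given color:

```python
from PIL import Image


def expand_to_square(image, background_color=(128, 128, 128)):
    """Pad a PIL image to a square canvas without distorting it.

    Hypothetical implementation: the README only shows the call site,
    so the exact padding behavior is an assumption.
    """
    width, height = image.size
    if width == height:
        return image
    side = max(width, height)
    # Paste the original image centered on a solid-color square canvas
    result = Image.new('RGB', (side, side), background_color)
    result.paste(image, ((side - width) // 2, (side - height) // 2))
    return result
```

Padding to a square before resizing to 224×224 preserves the aspect ratio of the simulation frame, which a plain resize would distort.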
@@ -167,12 +167,12 @@ pip install transformers torch torchvision pillow
 ## Citation
 
 ```bibtex
-@misc{physics-vit-
-title={Physics
-author={
-year={
+@misc{physics-vit-2025,
+title={PhySiViT: A Physics Simulation Vision Transformer},
+author={Jessica Ezemba and James Afful and Mei-Yu Wang},
+year={2025},
 howpublished={HuggingFace Model Hub},
-url={https://huggingface.co/
+url={https://huggingface.co/JessicaE/physics-vit-full}
 }
 ```
 
@@ -181,3 +181,8 @@ pip install transformers torch torchvision pillow
 - Built using [Cerebras ModelZoo](https://github.com/Cerebras/modelzoo)
 - Trained on Cerebras CS-X systems
 - Based on Vision Transformer architecture
+- This work was made possible by the ByteBoost cybertraining program, funded by National Science Foundation CyberTraining awards 2320990, 2320991, and 2320992, and by the Neocortex project, the ACES platform, and the Ookami cluster.
+- The Neocortex project is supported by National Science Foundation award number 2005597.
+- The ACES (Accelerating Computing for Emerging Sciences) platform was funded by National Science Foundation award number 2112356.
+- The Ookami cluster is supported by National Science Foundation award number 1927880.
+