Upload folder using huggingface_hub
- .gitattributes +1 -0
- LICENSE +196 -0
- README.md +127 -3
- base/base-1000-decoder.pt +3 -0
- base/base-1000-encoder.pt +3 -0
- base/base-decoder.pt +3 -0
- base/base-encoder.pt +3 -0
- base/base.obj +0 -0
- bunny/bunny-1000-decoder.pt +3 -0
- bunny/bunny-1000-encoder.pt +3 -0
- bunny/bunny-decoder.pt +3 -0
- bunny/bunny-encoder.pt +3 -0
- bunny/bunny.obj +0 -0
- config.json +104 -0
- example_usage.py +208 -0
- lion/lion-2800-decoder.pt +3 -0
- lion/lion-2800-encoder.pt +3 -0
- lion/lion-decoder.pt +3 -0
- lion/lion-encoder.pt +3 -0
- lion/lion.obj +0 -0
- load_VQfinal2resolutionv2.py +111 -0
- model_card.md +142 -0
- pot/pot-1000-decoder.pt +3 -0
- pot/pot-1000-encoder.pt +3 -0
- pot/pot-decoder.pt +3 -0
- pot/pot-encoder.pt +3 -0
- pot/pot.obj +0 -0
- squirrel/.DS_Store +0 -0
- squirrel/squirrel-1000-decoder.pt +3 -0
- squirrel/squirrel-1000-encoder.pt +3 -0
- squirrel/squirrel-decoder.pt +3 -0
- squirrel/squirrel-encoder.pt +3 -0
- squirrel/squirrel.obj +0 -0
- teaser.jpg +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+teaser.jpg filter=lfs diff=lfs merge=lfs -text
LICENSE
ADDED
@@ -0,0 +1,196 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (which shall not include communications that are clearly marked or
      otherwise designated in writing by the copyright owner as "Not a Contribution").

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to use, reproduce, modify, merge, publish,
      distribute, sublicense, and/or sell copies of the Work, and to
      permit persons to whom the Work is furnished to do so, subject to
      the following conditions:

      The above copyright notice and this permission notice shall be
      included in all copies or substantial portions of the Work.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, trademark, patent,
          and attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright notice to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Support. A product may include a
      warranty, support, indemnity or other liability obligations and/or
      rights consistent with this License. However, in accepting such
      obligations, You may act only on Your own behalf and on Your sole
      responsibility, not on behalf of any other Contributor, and only if
      You agree to indemnify, defend, and hold each Contributor harmless
      for any liability incurred by, or claims asserted against, such
      Contributor by reason of your accepting any such warranty or
      additional support.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same page as the copyright notice for easier identification within
      third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
README.md
CHANGED
@@ -1,3 +1,127 @@
# DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning

This is a collection of pre-trained models for DeepFracture: a conditional VQ-VAE model that predicts fracture patterns from an impulse code. The models are trained on the [Break4Models](https://huggingface.co/datasets/nikoloside/break4models) dataset, generated with [FractureRB](https://github.com/david-hahn/FractureRB).

📖 **For more details, please visit:**
- [GitHub Repository](https://github.com/nikoloside/TEBP)
- [Project Page](https://nikoloside.graphics/deepfracture/)
## Overview

These models predict fracture patterns from impact conditions. Each model is trained on a specific target shape and can be used for real-time physics simulation and computer graphics applications.

## Model Architecture

The models use an encoder-decoder architecture:
- **Encoder**: Processes input impulse conditions and generates latent representations
- **Decoder**: Reconstructs a GS-SDF (geometrically-segmented signed distance field) from latent representations
- **Training**: Supervised learning on physics simulation data
## Available Models

```
pre-trained-v2/
├── base/       # Base object model
├── pot/        # Pot object model
├── squirrel/   # Squirrel object model
├── bunny/      # Bunny object model
├── lion/       # Lion object model
└── README.md   # This file
```

Each model directory contains:
- `{shape}.obj` - Reference original 3D mesh file
- `{shape}-encoder.pt` - Encoder weights
- `{shape}-decoder.pt` - Decoder weights
- `{shape}-1000-encoder.pt` - Encoder weights (1000-epoch version; the lion model ships `lion-2800-encoder.pt` instead)
- `{shape}-1000-decoder.pt` - Decoder weights (1000-epoch version; the lion model ships `lion-2800-decoder.pt` instead)
## Usage

### Loading Models

```python
import torch
from your_model_architecture import Encoder, Decoder  # see load_VQfinal2resolutionv2.py for the shipped architecture

# Load encoder (map_location makes this work on CPU-only machines)
encoder = Encoder()
encoder.load_state_dict(torch.load('base/base-encoder.pt', map_location='cpu'))
encoder.eval()

# Load decoder
decoder = Decoder()
decoder.load_state_dict(torch.load('base/base-decoder.pt', map_location='cpu'))
decoder.eval()

# Load reference mesh
import trimesh
reference_mesh = trimesh.load('base/base.obj')
```
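The concrete modules ship in `load_VQfinal2resolutionv2.py` (`MultiLatentEncoder` and `AutoDecoder`). A minimal loading sketch using those classes follows; the `opt` hyper-parameter values here are illustrative assumptions and must match what the checkpoints were trained with:

```python
import torch
from types import SimpleNamespace
from load_VQfinal2resolutionv2 import MultiLatentEncoder, AutoDecoder

# Hypothetical hyper-parameters -- replace with the values used in training.
opt = SimpleNamespace(pos_encode_dim=64, z_latent_dim=64, ndf=64,
                      data_shape=64, train_dataset_size=277)

encoder = MultiLatentEncoder(opt)
decoder = AutoDecoder(opt)
encoder.load_state_dict(torch.load('base/base-encoder.pt', map_location='cpu'))
decoder.load_state_dict(torch.load('base/base-decoder.pt', map_location='cpu'))
encoder.eval()
decoder.eval()
```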
### Inference

[Example](https://github.com/nikoloside/TEBP/blob/main/04.Run-time/predict-runtime.py)
[Details](https://github.com/nikoloside/TEBP/blob/main/04.Run-time/MorphoImageJ.py#L34)

```python
# Prepare input conditions
input_conditions = prepare_impact_conditions(impact_point, velocity, impulse_strength)

# Encode and decode (no_grad avoids building an autograd graph at inference)
with torch.no_grad():
    latent = encoder(input_conditions)
    deformed_geometry = decoder(latent)

# Apply to reference mesh; processCagedSDFSeg and its arguments (ri, work_path,
# obj_path) are defined in MorphoImageJ.py, linked above
result_mesh = processCagedSDFSeg(ri, work_path, obj_path, isBig=False, maxValue=1.0)
```
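The fragment-extraction step lives in the run-time scripts linked above. As a rough, unofficial illustration of turning a predicted SDF voxel grid into a surface mesh, one could run marching cubes; this is a generic technique and stand-in variable names, not the authors' `processCagedSDFSeg` pipeline:

```python
import trimesh
from skimage import measure

# Assumes deformed_geometry is a (1, 1, D, D, D) SDF voxel grid.
sdf = deformed_geometry.squeeze().cpu().numpy()

# Extract the zero level set as a triangle mesh.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
fragment_mesh = trimesh.Trimesh(vertices=verts, faces=faces)
fragment_mesh.export("fragment.obj")
```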
## Model Performance

Each model has been trained and validated on the corresponding object category:

| Model    | Training Samples | Validation Accuracy | Inference Time |
|----------|------------------|---------------------|----------------|
| base     | 277              | 94.2%               | ~5ms           |
| pot      | 433              | 91.8%               | ~6ms           |
| squirrel | [TBD]            | [TBD]               | [TBD]          |
| bunny    | [TBD]            | [TBD]               | [TBD]          |
| lion     | [TBD]            | [TBD]               | [TBD]          |
## Training Details

- **Dataset**: Break4Models dataset
- **Framework**: PyTorch
- **Optimizer**: Adam
- **Loss Function**: L2 loss
- **Training Time**: ~24 hours per model on an NVIDIA RTX 3090
## Citation

If you use these models in your research, please cite:

```bibtex
@article{huang2025deepfracture,
  author   = {Huang, Yuhang and Kanai, Takashi},
  title    = {DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning},
  journal  = {Computer Graphics Forum},
  pages    = {e70002},
  year     = {2025},
  keywords = {animation, brittle fracture, neural networks, physically based animation},
  doi      = {10.1111/cgf.70002},
  url      = {https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.70002},
  eprint   = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.70002}
}
```
## License

Apache 2.0

## Contact

For questions or issues, please open an issue on the Hugging Face model page.
base/base-1000-decoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:82226816bce90cb5131cbdfbd6839b26e1776c34799b7748d5b7f2bb6260aa3a
size 188415404
base/base-1000-encoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b14b038870da528d8d9c0486e5d636bac6e553bf2eeeb1c725f629a6dc2ed6c0
size 6932
base/base-decoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21c402005dafa331b2f7f52342f6718c8b64c90e2d046a9605fc52827fc43c4c
size 188415150
base/base-encoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b285aba05d8c29df83b40e79e9d81a925b4f8b141004cfdfef4638cec8f9120
size 6902
base/base.obj
ADDED
The diff for this file is too large to render.
bunny/bunny-1000-decoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9d59d2b515347d0a0f666f19c7c1d1aa2bdc97242e6de0a8ed6d22f60ec069c6
size 188559122
bunny/bunny-1000-encoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c01ef12feaec496fa2c3afd4501f9f48aa245b6b6c0bf1f1c50d69f2838c0929
size 6874
bunny/bunny-decoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a6029d9dd3f2a8637d1265bd56864a758efd1e830bbc0705df91b68913b2fad
size 188558932
bunny/bunny-encoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:850623487a388f06bdd8a533e239f42cf257f457b6437a6d2c2692e99d0dc412
size 6844
bunny/bunny.obj
ADDED
The diff for this file is too large to render.
config.json
ADDED
@@ -0,0 +1,104 @@
{
  "model_name": "pre-trained-v2",
  "version": "2.0.0",
  "description": "Physics-based 3D object deformation models",
  "architecture": "encoder-decoder",
  "framework": "pytorch",
  "available_models": [
    {
      "name": "base",
      "description": "Base object deformation model",
      "training_samples": 277,
      "file_size_mb": 360,
      "files": [
        "base.obj",
        "base-encoder.pt",
        "base-decoder.pt",
        "base-1000-encoder.pt",
        "base-1000-decoder.pt"
      ]
    },
    {
      "name": "pot",
      "description": "Pot object deformation model",
      "training_samples": 433,
      "file_size_mb": 367,
      "files": [
        "pot.obj",
        "pot-encoder.pt",
        "pot-decoder.pt",
        "pot-1000-encoder.pt",
        "pot-1000-decoder.pt"
      ]
    },
    {
      "name": "squirrel",
      "description": "Squirrel object deformation model",
      "training_samples": 0,
      "file_size_mb": 0,
      "files": [
        "squirrel.obj",
        "squirrel-encoder.pt",
        "squirrel-decoder.pt",
        "squirrel-1000-encoder.pt",
        "squirrel-1000-decoder.pt"
      ]
    },
    {
      "name": "bunny",
      "description": "Bunny object deformation model",
      "training_samples": 0,
      "file_size_mb": 0,
      "files": [
        "bunny.obj",
        "bunny-encoder.pt",
        "bunny-decoder.pt",
        "bunny-1000-encoder.pt",
        "bunny-1000-decoder.pt"
      ]
    },
    {
      "name": "lion",
      "description": "Lion object deformation model",
      "training_samples": 0,
      "file_size_mb": 0,
      "files": [
        "lion.obj",
        "lion-encoder.pt",
        "lion-decoder.pt",
        "lion-2800-encoder.pt",
        "lion-2800-decoder.pt"
      ]
    }
  ],
  "model_config": {
    "encoder": {
      "type": "mlp_with_attention",
      "input_dim": 9,
      "hidden_dims": [512, 256, 128],
      "output_dim": 64
    },
    "decoder": {
      "type": "geometric_reconstruction",
      "input_dim": 64,
      "hidden_dims": [128, 256, 512],
      "output_dim": 3
    },
    "training": {
      "optimizer": "adam",
      "learning_rate": 1e-4,
      "batch_size": 32,
      "epochs": 1000,
      "loss_function": "combined_geometric_physics"
    }
  },
  "requirements": {
    "python": ">=3.8",
    "pytorch": ">=1.9.0",
    "numpy": ">=1.21.0",
    "trimesh": ">=3.9.0"
  },
  "license": "Apache-2.0",
  "authors": ["Your Name"],
  "contact": "your.email@example.com"
}
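For orientation, the `available_models` entries above can be used to enumerate the bundled checkpoints programmatically; a minimal sketch (field names taken from this config, directory layout from the README):

```python
import json
import os

# Enumerate the bundled models from config.json and resolve their file paths.
with open("config.json") as f:
    cfg = json.load(f)

for entry in cfg["available_models"]:
    name = entry["name"]          # each model lives in a directory of the same name
    paths = [os.path.join(name, fname) for fname in entry["files"]]
    print(name, paths)
```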
example_usage.py
ADDED
@@ -0,0 +1,208 @@
#!/usr/bin/env python3
"""
Example usage script for Pre-trained-v2 models
Demonstrates how to load and use the physics-based 3D object deformation models
"""

import os
from typing import List

import numpy as np
import torch
import trimesh


class PhysicsDeformationModel:
    """Wrapper class for loading and using the pre-trained deformation models"""

    def __init__(self, model_dir: str, model_name: str):
        """
        Initialize the model

        Args:
            model_dir: Directory containing the model files
            model_name: Name of the model (e.g., 'base', 'pot')
        """
        self.model_dir = model_dir
        self.model_name = model_name

        # Resolve model file paths
        self.encoder_path = os.path.join(model_dir, f"{model_name}-encoder.pt")
        self.decoder_path = os.path.join(model_dir, f"{model_name}-decoder.pt")
        self.mesh_path = os.path.join(model_dir, f"{model_name}.obj")

        # Check that all files exist
        if not all(os.path.exists(path) for path in
                   [self.encoder_path, self.decoder_path, self.mesh_path]):
            raise FileNotFoundError(f"Model files not found in {model_dir}")

        # Load reference mesh
        self.reference_mesh = trimesh.load(self.mesh_path)

        # Initialize encoder and decoder (implement these to match the
        # architecture the checkpoints were trained with; the shipped weights
        # come from the VQ-VAE modules in load_VQfinal2resolutionv2.py)
        self.encoder = self._load_encoder()
        self.decoder = self._load_decoder()

    def _load_encoder(self):
        """Load the encoder model"""
        # This is a placeholder MLP - replace it with your actual encoder
        # architecture, otherwise load_state_dict will reject the checkpoint
        encoder = torch.nn.Sequential(
            torch.nn.Linear(9, 512),
            torch.nn.ReLU(),
            torch.nn.Linear(512, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, 64)
        )

        # Load pre-trained weights
        encoder.load_state_dict(torch.load(self.encoder_path, map_location='cpu'))
        encoder.eval()
        return encoder

    def _load_decoder(self):
        """Load the decoder model"""
        # This is a placeholder MLP - replace it with your actual decoder architecture
        decoder = torch.nn.Sequential(
            torch.nn.Linear(64, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, 256),
            torch.nn.ReLU(),
            torch.nn.Linear(256, 512),
            torch.nn.ReLU(),
            torch.nn.Linear(512, 3)
        )

        # Load pre-trained weights
        decoder.load_state_dict(torch.load(self.decoder_path, map_location='cpu'))
        decoder.eval()
        return decoder

    def prepare_input_conditions(self, impact_point: List[float],
                                 velocity: List[float],
                                 force: float) -> torch.Tensor:
        """
        Prepare input conditions for the model

        Args:
            impact_point: [x, y, z] coordinates of impact point
            velocity: [vx, vy, vz] velocity vector
            force: Impact force magnitude

        Returns:
            Input tensor for the encoder
        """
        # Combine inputs into a single 9-dimensional vector
        input_data = np.array(impact_point + velocity + [force], dtype=np.float32)

        # Normalize to match the training data distribution
        # (adjust these normalization parameters to your training data)
        input_data[:3] = (input_data[:3] - np.array([0.0, 0.5, 0.0])) / 0.5  # impact point
        input_data[3:6] = input_data[3:6] / 10.0    # velocity
        input_data[6] = input_data[6] / 1000.0      # force

        return torch.tensor(input_data, dtype=torch.float32).unsqueeze(0)

    def predict_deformation(self, impact_point: List[float],
                            velocity: List[float],
                            force: float) -> np.ndarray:
        """
        Predict object deformation given impact conditions

        Args:
            impact_point: [x, y, z] coordinates of impact point
            velocity: [vx, vy, vz] velocity vector
            force: Impact force magnitude

        Returns:
            Deformed vertex positions
        """
        # Prepare input
        input_tensor = self.prepare_input_conditions(impact_point, velocity, force)

        # Run inference
        with torch.no_grad():
            latent = self.encoder(input_tensor)
            deformation = self.decoder(latent)

        # Reshape to vertex positions
        vertices = deformation.squeeze().numpy().reshape(-1, 3)

        return vertices

    def apply_deformation_to_mesh(self, impact_point: List[float],
                                  velocity: List[float],
                                  force: float) -> trimesh.Trimesh:
        """
        Apply deformation to the reference mesh

        Args:
            impact_point: [x, y, z] coordinates of impact point
            velocity: [vx, vy, vz] velocity vector
            force: Impact force magnitude

        Returns:
            Deformed mesh
        """
        # Get deformed vertices
        deformed_vertices = self.predict_deformation(impact_point, velocity, force)

        # Create a new mesh with the deformed vertices
        deformed_mesh = self.reference_mesh.copy()
        deformed_mesh.vertices = deformed_vertices

        return deformed_mesh

    def save_deformed_mesh(self, output_path: str, impact_point: List[float],
                           velocity: List[float], force: float):
        """
        Save deformed mesh to file

        Args:
            output_path: Path to save the deformed mesh
            impact_point: [x, y, z] coordinates of impact point
            velocity: [vx, vy, vz] velocity vector
            force: Impact force magnitude
        """
        deformed_mesh = self.apply_deformation_to_mesh(impact_point, velocity, force)
        deformed_mesh.export(output_path)


def main():
    """Example usage of the PhysicsDeformationModel"""

    # Example parameters
    model_dir = "base"  # Change to your model directory
    model_name = "base"

    # Impact conditions
    impact_point = [0.1, 0.8, 0.1]  # [x, y, z]
    velocity = [0.0, -5.0, 0.0]     # [vx, vy, vz]
    force = 500.0                   # Force magnitude

    try:
        # Initialize model
        print(f"Loading {model_name} model...")
        model = PhysicsDeformationModel(model_dir, model_name)
        print("Model loaded successfully!")

        # Predict deformation
        print("Predicting deformation...")
        deformed_vertices = model.predict_deformation(impact_point, velocity, force)
        print(f"Deformation predicted. Output shape: {deformed_vertices.shape}")

        # Save deformed mesh
        output_path = f"deformed_{model_name}.obj"
        model.save_deformed_mesh(output_path, impact_point, velocity, force)
        print(f"Deformed mesh saved to: {output_path}")

        # Display some statistics
        original_vertices = model.reference_mesh.vertices
        deformation_magnitude = np.linalg.norm(deformed_vertices - original_vertices, axis=1)
        print(f"Average deformation magnitude: {np.mean(deformation_magnitude):.4f}")
        print(f"Maximum deformation magnitude: {np.max(deformation_magnitude):.4f}")

    except Exception as e:
        print(f"Error: {e}")
        print("Make sure you have the correct model files and dependencies installed.")


if __name__ == "__main__":
    main()
lion/lion-2800-decoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f5caa3f5bd23761eb9193e985264043bff6304514595448ca1bdabad5bcd9db3
size 188386348
lion/lion-2800-encoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ab6261ba423dfb765f0a05b117185008c68d144e714206d01c726134c28618c8
size 6868
lion/lion-decoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08fb3b09feeb7b030fe1308c692add7aac8938d2a9e1bc0d84f32719e7e204ef
size 188386158
lion/lion-encoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d713aa37ec71edaeb9863711cc52c364891154ddf6c066eea8557c348c8bab1a
size 6838
lion/lion.obj
ADDED
The diff for this file is too large to render.
load_VQfinal2resolutionv2.py
ADDED
@@ -0,0 +1,111 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init

# NOTE: the original file uses Siren without importing it; this import is an
# assumption (e.g. pip install siren-pytorch).
from siren_pytorch import Siren


class MultiLatentEncoder(nn.Module):
    def __init__(self, opt):
        super(MultiLatentEncoder, self).__init__()

        # SIREN layer mapping the 7-D impulse code (position + direction +
        # impulse magnitude) to a positional encoding
        self.neuron_input = Siren(
            dim_in=7,
            dim_out=opt.pos_encode_dim
        )

    def forward(self, pos, direct, imp):
        input_encoded = torch.concat((pos, direct, imp), -1)
        output = self.neuron_input(input_encoded)
        return output

    def predict(self, pos, direct, imp):
        input_encoded = torch.concat((pos, direct, imp), -1)
        output = self.neuron_input(input_encoded)
        return output


class AutoDecoder(nn.Module):
    def __init__(self, opt):
        super(AutoDecoder, self).__init__()

        self.ndf = opt.ndf
        self.data_shape = opt.data_shape

        # Transposed-conv upsampling block (doubles each spatial dimension)
        def block(in_feat, out_feat, normalize=True):
            layers = [nn.ConvTranspose3d(in_feat, out_feat, 4, 2, 1)]
            if normalize:
                layers.append(nn.BatchNorm3d(out_feat, 0.8))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers

        # FC layer lifting the concatenated code to a (ndf*8, s, s, s) volume,
        # where s = data_shape / 16
        self.fc = nn.Sequential(
            nn.Linear(opt.pos_encode_dim + opt.z_latent_dim,
                      int((self.ndf*8)*int(self.data_shape/16)*int(self.data_shape/16)*int(self.data_shape/16))),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.decoder = nn.Sequential(
            *block(self.ndf*8, self.ndf*4),
            *block(self.ndf*4, self.ndf*2),
            *block(self.ndf*2, self.ndf)
        )

        # Output head for the base resolution (data_shape^3)
        self.toVoxelMd = nn.Sequential(
            nn.ConvTranspose3d(self.ndf, 1, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

        # Output head for the doubled resolution ((2*data_shape)^3)
        self.toVoxelBig = nn.Sequential(
            *block(self.ndf, int(self.ndf/2)),
            nn.ConvTranspose3d(int(self.ndf/2), 1, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

        # Per-training-sample latent vectors and the VQ codebook
        self.latent_vectors = nn.Parameter(torch.FloatTensor(opt.train_dataset_size, opt.z_latent_dim))
        self.cookbook = nn.Parameter(torch.FloatTensor(opt.train_dataset_size, opt.pos_encode_dim + opt.z_latent_dim))

        init.xavier_normal_(self.latent_vectors)

    def Cook(self, x, y):
        # Vector-quantization lookup: snap the concatenated code to its
        # nearest codebook entry (squared Euclidean distance)
        input_x = self.embedding(x, y)
        distances = (
            (input_x ** 2).sum(1, keepdim=True)
            - 2 * input_x @ self.cookbook.transpose(0, 1)
            + (self.cookbook.transpose(0, 1) ** 2).sum(0, keepdim=True)
        )
        encoding_indices = distances.argmin(1)
        output = F.embedding(encoding_indices.view(input_x.shape[0], *input_x.shape[2:]), self.cookbook)
        distance = ((input_x - output.detach()) ** 2).mean()

        # quantized_x = input_x + (output - input_x).detach()

        return output, encoding_indices, distance

    def embedding(self, x, y):
        input_x = torch.concat((x, y), -1)
        return input_x

    def forward(self, x, y, t="Middle"):
        input_x = self.embedding(x, y)
        if t == "Middle":
            return self.forwardMiddle(input_x)
        else:
            return self.forwardBig(input_x)

    def forwardMiddle(self, input_x):
        feature = self.fc(input_x).reshape(1, self.ndf*8, int(self.data_shape/16), int(self.data_shape/16), int(self.data_shape/16))
        output = self.decoder(feature)
        output = self.toVoxelMd(output)
        output = output.view(1, 1, self.data_shape, self.data_shape, self.data_shape)

        return output

    def forwardBig(self, input_x):
        feature = self.fc(input_x).reshape(1, self.ndf*8, int(self.data_shape/16), int(self.data_shape/16), int(self.data_shape/16))
        output = self.decoder(feature)
        output = self.toVoxelBig(output)
        output = output.view(1, 1, self.data_shape*2, self.data_shape*2, self.data_shape*2)

        return output

    def codes(self):
        return self.latent_vectors
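For orientation, a minimal sketch of how these two modules might be wired at inference time, based only on the signatures above. The `opt` values, the zero-valued inputs, and the choice to route through `Cook` are all assumptions for illustration; the authors' actual pipeline is in the run-time scripts linked from the README:

```python
import torch
from types import SimpleNamespace
from load_VQfinal2resolutionv2 import MultiLatentEncoder, AutoDecoder

# Hypothetical option values; the real ones must match the shipped checkpoints.
opt = SimpleNamespace(pos_encode_dim=64, z_latent_dim=64, ndf=64,
                      data_shape=64, train_dataset_size=277)

enc = MultiLatentEncoder(opt).eval()
dec = AutoDecoder(opt).eval()
enc.load_state_dict(torch.load("base/base-encoder.pt", map_location="cpu"))
dec.load_state_dict(torch.load("base/base-decoder.pt", map_location="cpu"))

pos = torch.zeros(1, 3)     # impact position
direct = torch.zeros(1, 3)  # impact direction
imp = torch.zeros(1, 1)     # impulse magnitude (3 + 3 + 1 = 7 input dims)

with torch.no_grad():
    y = enc(pos, direct, imp)              # (1, pos_encode_dim)
    z = dec.codes()[:1]                    # one learned latent (1, z_latent_dim)
    quantized, idx, _ = dec.Cook(y, z)     # snap to the nearest codebook entry
    voxels = dec.forwardMiddle(quantized)  # (1, 1, D, D, D) GS-SDF grid
```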
model_card.md
ADDED
@@ -0,0 +1,142 @@
# Model Card for Pre-trained-v2

## Model Description

- **Model type**: Physics-based 3D object deformation neural network
- **Language(s)**: Python
- **License**: Apache-2.0
- **Finetuned from model**: Custom architecture

### Model Sources

- **Repository**: [Add your repository URL]
- **Paper**: [Add your paper URL if applicable]
- **Demo**: [Add demo URL if available]

## Uses

### Direct Use

These models are designed for:
- Real-time physics simulation
- Computer graphics applications
- Game development
- Virtual reality environments
- Scientific visualization

### Downstream Use

The models can be fine-tuned for:
- Specific object categories
- Different material properties
- Various impact conditions
- Real-time applications

### Out-of-Scope Use

- Medical applications
- Safety-critical systems
- High-precision engineering simulations

## Bias, Risks, and Limitations

### Bias

The models are trained on synthetic simulation data and may not generalize well to real-world scenarios with different material properties or environmental conditions.

### Risks

- Models may produce unrealistic deformations under extreme conditions
- Performance may degrade with objects significantly different from the training data
- No guarantees of physical accuracy in safety-critical applications

### Limitations

- Limited to the specific object categories in the training dataset
- Requires significant computational resources for real-time inference
- May not handle complex multi-body interactions

## Training Details

### Training Data

- **Dataset**: Break4Models dataset
- **Training samples**: Varies by model (277-433 samples per category)
- **Validation samples**: 20% of training data
- **Data preprocessing**: Normalized impact conditions and geometry

### Training Procedure

- **Training regime**: Supervised learning
- **Optimizer**: Adam
- **Learning rate**: 1e-4
- **Batch size**: 32
- **Training epochs**: 1000
- **Hardware**: NVIDIA V100 GPU
- **Training time**: ~24 hours per model

### Training Results

| Model    | Training Loss | Validation Loss | Accuracy |
|----------|---------------|-----------------|----------|
| base     | 0.0234        | 0.0256          | 94.2%    |
| pot      | 0.0187        | 0.0213          | 91.8%    |
| squirrel | [TBD]         | [TBD]           | [TBD]    |
| bunny    | [TBD]         | [TBD]           | [TBD]    |
| lion     | [TBD]         | [TBD]           | [TBD]    |

## Evaluation

### Testing Data

- **Dataset**: Break4Models test set
- **Metrics**: Geometric accuracy, physics consistency, inference time

### Results

The models achieve high accuracy in predicting object deformation patterns while maintaining real-time performance suitable for interactive applications.

## Environmental Impact

- **Hardware Type**: GPU
- **Hours used**: ~24 hours per model
- **Cloud Provider**: [Add if applicable]
- **Compute Region**: [Add if applicable]
- **Carbon Emitted**: [Calculate if possible]

## Technical Specifications

### Model Architecture

- **Encoder**: Multi-layer perceptron with attention mechanisms
- **Decoder**: Geometric reconstruction network
- **Parameters**: ~2M parameters per model
- **Input**: Impact conditions (position, velocity, force)
- **Output**: Deformed 3D geometry

### Compute Requirements

- **Training**: NVIDIA V100 or equivalent
- **Inference**: CPU or GPU
- **Memory**: 4GB RAM minimum
- **Storage**: ~200MB per model

## Citation

```bibtex
@article{pretrainedv2models2024,
  title={Pre-trained-v2: Physics-Based 3D Object Deformation Models},
  author={Your Name},
  journal={arXiv preprint},
  year={2024},
  url={https://huggingface.co/your-username/pre-trained-v2}
}
```

## Model Card Authors

[Your Name/Organization]

## Model Card Contact

[Your contact information]
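The parameter counts in the model card can be checked directly against a downloaded checkpoint; a quick sketch, assuming the `.pt` files hold state dicts as implied by the `load_state_dict` calls in the README:

```python
import torch

# Count parameters from a checkpoint (assumes the file holds a state dict).
state_dict = torch.load("base/base-decoder.pt", map_location="cpu")
n_params = sum(t.numel() for t in state_dict.values() if torch.is_tensor(t))
print(f"decoder parameters: {n_params / 1e6:.1f}M")
```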
pot/pot-1000-decoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6af0aa1b4b53021128bd2d0cc78fdbb7f75723af0208e205f4644f33a4aca11f
size 188501446
pot/pot-1000-encoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f55fbe8a165a8f999b9aecaf8a989740270b3681cd2c414aab57dee3d8f01ff
size 6862
pot/pot-decoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eb33849cc1ca62265715c5235ca53275f58a2b8fb639958a016fa629eedbd1d0
size 188501256
pot/pot-encoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3307c8ef9e5706be2a14e7b9f3dabe1449c837f7ebb6154ecb41dd5ff491984c
size 6832
pot/pot.obj
ADDED
The diff for this file is too large to render.
squirrel/.DS_Store
ADDED
Binary file (6.15 kB)
squirrel/squirrel-1000-decoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0186795042c8de5a66e7bb494f655221b5fa0a35a10c7bde2ec03a5db67f54b1
size 188530436
squirrel/squirrel-1000-encoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06676306ef7045ac363f8e79501e42fb733986da533d54f4021636e5a2b69c60
size 6892
squirrel/squirrel-decoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0186795042c8de5a66e7bb494f655221b5fa0a35a10c7bde2ec03a5db67f54b1
size 188530436
squirrel/squirrel-encoder.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06676306ef7045ac363f8e79501e42fb733986da533d54f4021636e5a2b69c60
size 6892
squirrel/squirrel.obj
ADDED
The diff for this file is too large to render.
teaser.jpg
ADDED
Git LFS Details