---
license: mit
datasets:
- nikoloside/break4models
pipeline_tag: other
tags:
- fracture
- vq-vae
- physical-simulation
---
# DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning

This is a collection of pre-trained models for DeepFracture: a conditional VQ-VAE model that predicts fracture patterns from an impulse code. The models are trained on the [Break4Models](https://huggingface.co/datasets/nikoloside/break4models) dataset, generated with [FractureRB](https://github.com/david-hahn/FractureRB).


πŸ“– **For more details, please visit:**
- [GitHub Repository](https://github.com/nikoloside/TEBP)
- [Project Page](https://nikoloside.graphics/deepfracture/)

## Overview

These models are designed to predict fracture patterns based on impact conditions. Each model is trained on a specific target shape and can be used for real-time physics simulation and computer graphics applications.

## Model Architecture

The models use an encoder-decoder architecture:
- **Encoder**: Processes input impulse conditions and generates latent representations
- **Decoder**: Reconstructs a GS-SDF (Geometrically-Segmented Signed Distance Field) from latent representations
- **Training**: Supervised learning on physics simulation data
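The encoder-decoder pipeline above can be sketched as follows. This is a minimal illustrative skeleton, not the released architecture: the layer sizes, the 7-dimensional condition vector, the 128-dimensional latent, and the 32Β³ voxel grid are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: layer widths, condition size, and grid
# resolution are illustrative, not the released architecture.
class Encoder(nn.Module):
    def __init__(self, cond_dim=7, latent_dim=128):
        super().__init__()
        # Maps an impulse-condition vector (e.g. position, direction,
        # strength) to a latent code.
        self.net = nn.Sequential(
            nn.Linear(cond_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, cond):
        return self.net(cond)

class Decoder(nn.Module):
    def __init__(self, latent_dim=128, grid=32):
        super().__init__()
        self.grid = grid
        # Expands the latent code into a dense voxel grid of SDF values.
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, grid ** 3),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.grid, self.grid, self.grid)

encoder, decoder = Encoder(), Decoder()
with torch.no_grad():
    z = encoder(torch.randn(1, 7))   # latent code, shape (1, 128)
    voxels = decoder(z)              # SDF voxel grid, shape (1, 32, 32, 32)
```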

## Available Models

```
pre-trained-v2/
β”œβ”€β”€ base/           # Base object model
β”œβ”€β”€ pot/            # Pot object model
β”œβ”€β”€ squirrel/       # Squirrel object model
β”œβ”€β”€ bunny/          # Bunny object model
β”œβ”€β”€ lion/           # Lion object model
β”œβ”€β”€ objs/           # Different original mesh files
β”œβ”€β”€ csv/            # Initial collision scene
└── README.md       # This file
```

Each model directory contains:
- `{shape}-encoder.pt` - Encoder weights
- `{shape}-decoder.pt` - Decoder weights
- `{shape}-1000-encoder.pt` - Encoder weights (1000 epoch version)
- `{shape}-1000-decoder.pt` - Decoder weights (1000 epoch version)

Other folders:
- `{shape}.obj` - Reference original 3D mesh file
- `{shape}-{csv_num}.obj` - Reference initial collision scene, containing the impact position, direction, and impulse strength.
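A collision-scene record can be read with a few lines of plain Python. This is a hypothetical sketch: the exact file layout is not documented here, so it assumes one comma-separated record of position (x, y, z), direction (dx, dy, dz), and impulse strength.

```python
# Hypothetical sketch: assumes a comma-separated record of
# position (3), direction (3), and impulse strength (1).
def parse_collision_record(line):
    values = [float(v) for v in line.strip().split(',')]
    return {
        'position': tuple(values[0:3]),
        'direction': tuple(values[3:6]),
        'impulse_strength': values[6],
    }

record = parse_collision_record('0.1, 0.2, 0.3, 0.0, -1.0, 0.0, 5.0')
```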

## Usage

### Loading Models

```python
import torch
from your_model_architecture import Encoder, Decoder

# Load encoder (map_location lets the weights load on CPU-only machines)
encoder = Encoder()
encoder.load_state_dict(torch.load('base/base-encoder.pt', map_location='cpu'))
encoder.eval()

# Load decoder
decoder = Decoder()
decoder.load_state_dict(torch.load('base/base-decoder.pt', map_location='cpu'))
decoder.eval()

# Reference mesh and collision-scene paths
reference_mesh = 'objs/base.obj'
init_collision = 'csv/base-261.txt'
work_path = 'result/base-exp-1/'
```

### Inference

- [Example](https://github.com/nikoloside/TEBP/blob/main/04.Run-time/predict-runtime.py)

- [Details](https://github.com/nikoloside/TEBP/blob/main/04.Run-time/MorphoImageJ.py#L34)

```python
# Prepare input conditions
input_conditions = prepare_impact_conditions(impact_point, velocity, impulse_strength)

# Encode
with torch.no_grad():
    latent = encoder(input_conditions)
    
# Decode
latent = decoder.cook(latent)
gssdf_voxel = decoder.predict(latent)

# Apply to reference mesh
result_mesh = processCagedSDFSeg(gssdf_voxel, work_path, reference_mesh, isBig = False, maxValue = 1.0)
```
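The `prepare_impact_conditions` helper used above lives in the TEBP repository; a minimal stand-in might look like the following. The 7-dimensional layout (position, velocity direction, strength) is an assumption for illustration, not the repository's actual encoding.

```python
import torch

# Hypothetical stand-in for prepare_impact_conditions; the real helper
# is in the TEBP repository and may encode conditions differently.
def prepare_impact_conditions(impact_point, velocity, impulse_strength):
    # Concatenate position (3), velocity direction (3), and strength (1)
    # into one conditioning vector with a batch dimension.
    return torch.tensor(
        list(impact_point) + list(velocity) + [impulse_strength],
        dtype=torch.float32,
    ).unsqueeze(0)

cond = prepare_impact_conditions((0.0, 0.5, 0.0), (0.0, -1.0, 0.0), 5.0)
```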

## Model Performance

[Metrics and performance](https://doi.org/10.1111/cgf.70002)


## Training Details

- **Dataset**: Break4Models dataset
- **Framework**: PyTorch
- **Optimizer**: Adam
- **Loss Function**: L2 Loss
- **Training Time**: ~24 hours per model on NVIDIA RTX 3090
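The training setup listed above (Adam optimizer, L2 loss) can be sketched in a few lines. The model, data, and hyperparameters below are placeholders; the actual training code is in the TEBP repository.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the Adam + L2-loss setup; the model, data,
# and learning rate are placeholders, not the released configuration.
model = nn.Linear(7, 32 ** 3)                 # stand-in for encoder + decoder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()                      # L2 reconstruction loss

conditions = torch.randn(8, 7)                # fake impulse conditions
target_sdf = torch.randn(8, 32 ** 3)          # fake ground-truth GS-SDF voxels

for step in range(10):
    optimizer.zero_grad()
    pred = model(conditions)
    loss = criterion(pred, target_sdf)        # mean squared error
    loss.backward()
    optimizer.step()
```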

## Citation

If you use these models in your research, please cite:

```bibtex
@article{huang2025deepfracture,
  author   = {Huang, Yuhang and Kanai, Takashi},
  title    = {DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning},
  journal  = {Computer Graphics Forum},
  pages    = {e70002},
  year     = {2025},
  keywords = {animation, brittle fracture, neural networks, physically based animation},
  doi      = {10.1111/cgf.70002},
  url      = {https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.70002},
  eprint   = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.70002}
}
```

## License

MIT

## Contact

For questions or issues, please open an issue on the Hugging Face model page.