Update README.md
README.md CHANGED

@@ -11,17 +11,4 @@ Inference code is available here: [github.com/jy0205/Pyramid-Flow](https://github.com/jy0205/Pyramid-Flow)
 
 Both 384p and 768p work on 24 GB VRAM. For 16 steps (5-second video), 384p takes a little over a minute on a 3090, and 768p takes about 7 minutes. For 31 steps (10-second video), 384p took about 10 minutes.
 
-
-Change this line:
-```
-self.timesteps_per_stage[i_s] = torch.from_numpy(timesteps[:-1])
-```
-To this:
-```
-self.timesteps_per_stage[i_s] = timesteps[:-1]
-```
-
-This will allow the model to be compatible with newer versions of PyTorch and other libraries than those pinned in the requirements.
-
-Working with torch 2.4.1+cu124.
+I highly recommend using `cpu_offloading=True` when generating, unless you have more than 24 GB VRAM.
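For context on the note removed above: with newer PyTorch/NumPy versions, the scheduler's `timesteps` can arrive as a tensor rather than a NumPy array, and `torch.from_numpy` raises a `TypeError` when given a tensor, while assigning the slice directly works for either type. A minimal sketch of the failure mode (variable names here are illustrative, not the Pyramid-Flow scheduler's):

```python
import torch

# Newer library versions may produce timesteps as a torch tensor
# instead of a NumPy array.
timesteps = torch.linspace(1.0, 0.0, steps=16)

try:
    # Old scheduler code: fails with TypeError when timesteps is a tensor,
    # because torch.from_numpy() only accepts np.ndarray.
    stage = torch.from_numpy(timesteps[:-1])
except TypeError:
    # The direct assignment from the README edit works for both types.
    stage = timesteps[:-1]

print(stage.shape)  # 15 timesteps kept for a 16-step stage
```

`torch.as_tensor(timesteps[:-1])` is another option that accepts both NumPy arrays and tensors, if an explicit conversion is preferred.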