Upload ./README.md with huggingface_hub

README.md CHANGED
@@ -34,7 +34,11 @@ huggingface-cli download Tencent-Hunyuan/TensorRT-libs --local-dir ./ckpts/t2i/m
### 2. Install the TensorRT dependencies.

```shell
sh trt/install.sh
```

### 3. Build the TensorRT engine.
@@ -42,18 +46,22 @@ sh trt/install.sh

#### Method 1: Use the prebuilt engine

-We provide some prebuilt TensorRT engines.

-| Supported GPU |
-|:-------------:|
-| GeForce RTX 3090 |
-| GeForce RTX 4090 |
-| A100 |

-Use the following command to download and place the engine in the specified location.

```shell
-huggingface-cli download Tencent-Hunyuan/TensorRT-engine <Remote Path> --local-d…
```

#### Method 2: Build your own engine
@@ -61,9 +69,6 @@ huggingface-cli download Tencent-Hunyuan/TensorRT-engine <Remote Path> --local-d
If you are using a different GPU, you can build the engine using the following command.

```shell
-# Set the TensorRT build environment variables first. We provide a script to set up the environment.
-source trt/activate.sh
-
# Build the TensorRT engine. By default, it will read the `ckpts` folder in the current directory.
sh trt/build_engine.sh
```
@@ -73,6 +78,9 @@ Finally, if you see the output like `&&&& PASSED TensorRT.trtexec [TensorRT v920
### 4. Run the inference using the TensorRT model.

```shell
# Run the inference using the prompt-enhanced model + HunyuanDiT TensorRT model.
python sample_t2i.py --prompt "渔舟唱晚" --infer-mode trt
```

The same sections as they read after the change:
### 2. Install the TensorRT dependencies.

```shell
+# Extract and install the TensorRT dependencies.
sh trt/install.sh
+
+# Set the TensorRT build environment variables. We provide a script to set up the environment.
+source trt/activate.sh
```
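Once `trt/activate.sh` has been sourced, a quick sanity check can confirm the environment took effect. A minimal sketch, assuming the script puts the TensorRT `trtexec` binary on `PATH` (the README does not say exactly what it exports):

```shell
# Check that the TensorRT CLI is reachable after sourcing trt/activate.sh.
# Assumption: activate.sh adds the TensorRT bin directory to PATH.
if command -v trtexec >/dev/null 2>&1; then
    echo "TensorRT environment active: $(command -v trtexec)"
else
    echo "trtexec not found; re-run 'source trt/activate.sh'" >&2
fi
```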

### 3. Build the TensorRT engine.

#### Method 1: Use the prebuilt engine

+We provide some prebuilt TensorRT engines, which need to be downloaded from Hugging Face.

+| Supported GPU    | Remote Path                       |
+|:----------------:|:---------------------------------:|
+| GeForce RTX 3090 | `engines/RTX3090/model_onnx.plan` |
+| GeForce RTX 4090 | `engines/RTX4090/model_onnx.plan` |
+| A100             | `engines/A100/model_onnx.plan`    |

+Use the following command to download and place the engine in the specified location.

+*Note: Please replace `<Remote Path>` with the corresponding remote path in the table above.*

```shell
+export REMOTE_PATH=<Remote Path>
+huggingface-cli download Tencent-Hunyuan/TensorRT-engine ${REMOTE_PATH} --local-dir ./ckpts/t2i/model_trt/engine/
+ln -s ${REMOTE_PATH} ./ckpts/t2i/model_trt/engine/model_onnx.plan
```
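After the download and `ln -s`, it is worth confirming that the symlink actually resolves before moving on. A small check along these lines (paths taken from the commands above; `test -s` follows symlinks):

```shell
# Verify the engine symlink resolves to a non-empty .plan file.
ENGINE=./ckpts/t2i/model_trt/engine/model_onnx.plan
if [ -s "${ENGINE}" ]; then
    echo "Engine ready: ${ENGINE}"
else
    echo "Engine missing or empty; re-check the download and symlink steps." >&2
fi
```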

#### Method 2: Build your own engine

If you are using a different GPU, you can build the engine using the following command.

```shell
# Build the TensorRT engine. By default, it will read the `ckpts` folder in the current directory.
sh trt/build_engine.sh
```
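The hunk context above notes that a successful build ends with a line like `&&&& PASSED TensorRT.trtexec [TensorRT v920...`. To check for that marker mechanically, one option is to capture the build output to a log first (the log file name here is an assumption, not something the build script produces on its own):

```shell
# Capture the build output and look for trtexec's success marker.
# Assumption: build_engine.sh prints the trtexec summary to stdout/stderr.
sh trt/build_engine.sh 2>&1 | tee build_engine.log
if grep -q '&&&& PASSED TensorRT.trtexec' build_engine.log; then
    echo "Engine build succeeded."
else
    echo "Success marker not found; inspect build_engine.log." >&2
fi
```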

### 4. Run the inference using the TensorRT model.

```shell
+# Important: If you have not activated the environment, please run the following command.
+source trt/activate.sh
+
# Run the inference using the prompt-enhanced model + HunyuanDiT TensorRT model.
python sample_t2i.py --prompt "渔舟唱晚" --infer-mode trt
```
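The entry point above takes one prompt per run; to push several prompts through the TensorRT pipeline, a plain shell loop is enough. Only `--prompt` and `--infer-mode trt` come from the README; the prompt list is illustrative:

```shell
# Run sample_t2i.py once per prompt; stop on the first failure.
for p in "渔舟唱晚" "A snowy mountain village at dusk"; do
    python sample_t2i.py --prompt "$p" --infer-mode trt || break
done
```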