camenduru committed
Commit 68927b8 · 1 Parent(s): 989174e

Delete README.md

README.md DELETED
# Introduction

This demo application ("demoDiffusion") showcases the acceleration of the [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion-v1-4) pipeline using TensorRT plugins.

# Setup

### Clone the TensorRT OSS repository

```bash
git clone git@github.com:NVIDIA/TensorRT.git -b release/8.5 --single-branch
cd TensorRT
git submodule update --init --recursive
```

### Launch TensorRT NGC container

Install nvidia-docker using [these instructions](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker).

```bash
docker run --rm -it --gpus all -v $PWD:/workspace nvcr.io/nvidia/tensorrt:22.10-py3 /bin/bash
```

### (Optional) Install latest TensorRT release

```bash
python3 -m pip install --upgrade pip
python3 -m pip install --upgrade tensorrt
```

> NOTE: Alternatively, you can download and install TensorRT packages from the [NVIDIA TensorRT Developer Zone](https://developer.nvidia.com/tensorrt).

### Build TensorRT plugins library

Build the TensorRT plugin library using the [TensorRT OSS build instructions](https://github.com/NVIDIA/TensorRT/blob/main/README.md#building-tensorrt-oss).

```bash
export TRT_OSSPATH=/workspace

cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_OUT_DIR=$PWD/out
cd plugin
make -j$(nproc)

export PLUGIN_LIBS="$TRT_OSSPATH/build/out/libnvinfer_plugin.so"
```
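
Before preloading the freshly built library, it can save time to confirm that it actually loads. The check below is an illustrative addition (not part of demoDiffusion); the default path is an assumption based on the `PLUGIN_LIBS` value exported above.

```python
# Illustrative sanity check (not part of demoDiffusion): confirm a shared
# library can be dlopen'd before it is handed to LD_PRELOAD.
import ctypes
import os

def can_load(path):
    """Return True if the shared library at `path` loads successfully."""
    try:
        ctypes.CDLL(path)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    lib = os.environ.get("PLUGIN_LIBS", "build/out/libnvinfer_plugin.so")
    print(f"{lib}: {'OK' if can_load(lib) else 'failed to load'}")
```

A failure here usually points at a missing CUDA/TensorRT runtime dependency rather than a bad build; `ldd` on the library shows which one.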

### Install required packages

```bash
cd $TRT_OSSPATH/demo/Diffusion
pip3 install -r requirements.txt

# Create output directories
mkdir -p onnx engine output
```

> NOTE: demoDiffusion has been tested on systems with NVIDIA A100, RTX 3090, and RTX 4090 GPUs, with the following software configuration.
```
cuda-python 11.8.1
diffusers 0.7.2
onnx 1.12.0
onnx-graphsurgeon 0.3.25
onnxruntime 1.13.1
polygraphy 0.43.1
tensorrt 8.5.1.7
tokenizers 0.13.2
torch 1.12.0+cu116
transformers 4.24.0
```
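
Because behavior can differ across package versions, comparing the installed environment against the tested configuration above may save debugging time. The helper below is a sketch of our own (it does not ship with the demo); the version pins are copied from the table above.

```python
# Hypothetical helper (not part of demoDiffusion): report packages whose
# installed version differs from the tested configuration.
from importlib.metadata import version, PackageNotFoundError

TESTED = {
    "diffusers": "0.7.2",
    "onnx": "1.12.0",
    "onnx-graphsurgeon": "0.3.25",
    "onnxruntime": "1.13.1",
    "polygraphy": "0.43.1",
    "tensorrt": "8.5.1.7",
    "tokenizers": "0.13.2",
    "transformers": "4.24.0",
}

def version_mismatches(tested):
    """Map package name -> (installed, tested) for every mismatch.

    An installed value of None means the package is missing entirely.
    """
    report = {}
    for pkg, want in tested.items():
        try:
            have = version(pkg)
        except PackageNotFoundError:
            have = None
        if have != want:
            report[pkg] = (have, want)
    return report

if __name__ == "__main__":
    for pkg, (have, want) in sorted(version_mismatches(TESTED).items()):
        print(f"{pkg}: installed={have}, tested={want}")
```

A mismatch is not necessarily fatal, but it is the first thing to rule out when ONNX export or engine build fails.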

> NOTE: Optionally, install the Hugging Face [accelerate](https://pypi.org/project/accelerate/) package (`pip3 install accelerate`) for faster and less memory-intensive model loading.

# Running demoDiffusion

### Review usage instructions

```bash
python3 demo-diffusion.py --help
```

### HuggingFace user access token

To download the model checkpoints for the Stable Diffusion pipeline, you will need a `read` access token. See [instructions](https://huggingface.co/docs/hub/security-tokens).

```bash
export HF_TOKEN=<your access token>
```

### Generate an image guided by a single text prompt

```bash
LD_PRELOAD=${PLUGIN_LIBS} python3 demo-diffusion.py "a beautiful photograph of Mt. Fuji during cherry blossom" --hf-token=$HF_TOKEN -v
```

# Restrictions

- Up to 16 simultaneous prompts (maximum batch size) per inference.
- To generate images of dynamic shapes without rebuilding the engines, use `--force-dynamic-shape`.
- Supports image sizes between 256x256 and 1024x1024.
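
As a rough sketch, the restrictions above amount to the following input validation. The function and parameter names here are hypothetical illustrations, not taken from `demo-diffusion.py`.

```python
# Hypothetical validation mirroring the documented restrictions; names
# are illustrative and do not come from demo-diffusion.py itself.
MAX_BATCH_SIZE = 16
MIN_IMAGE_DIM = 256
MAX_IMAGE_DIM = 1024

def validate_request(num_prompts, height, width):
    """Raise ValueError if the request violates a documented restriction."""
    if not 1 <= num_prompts <= MAX_BATCH_SIZE:
        raise ValueError(f"batch size must be 1..{MAX_BATCH_SIZE}, got {num_prompts}")
    for name, dim in (("height", height), ("width", width)):
        if not MIN_IMAGE_DIM <= dim <= MAX_IMAGE_DIM:
            raise ValueError(
                f"{name} must be between {MIN_IMAGE_DIM} and {MAX_IMAGE_DIM}, got {dim}"
            )
    return True
```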