### Export to ONNX
The goal of exporting to ONNX is to deploy inference with [TensorRT](https://developer.nvidia.com/tensorrt). Fake quantization will be broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting the static member of **TensorQuantizer** to use PyTorch's own fake-quantization functions, the fake-quantized model can be exported to ONNX by following the instructions in [torch.onnx](https://pytorch.org/docs/stable/onnx.html). Example:
```python
from pytorch_quantization.nn import TensorQuantizer

# Route fake quantization through PyTorch's own functions so that the
# ONNX exporter emits QuantizeLinear/DequantizeLinear pairs.
TensorQuantizer.use_fb_fake_quant = True
```