---
description: TensorRT engine export utilities for converting ONNX models to optimized TensorRT engines. Provides functions for ONNX export from PyTorch models and TensorRT engine generation with support for FP16/INT8 quantization, dynamic shapes, DLA acceleration, and INT8 calibration for NVIDIA GPU inference optimization.
keywords: Ultralytics, TensorRT export, ONNX export, PyTorch to ONNX, quantization, FP16, INT8, dynamic shapes, DLA acceleration, GPU inference, model optimization, calibration, NVIDIA, inference engine, model export
---
# Reference for `ultralytics/utils/export/engine.py`
!!! success "Improvements"

    This page is sourced from [https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/export/engine.py](https://github.com/ultralytics/ultralytics/blob/main/ultralytics/utils/export/engine.py). Have an improvement or example to add? Open a [Pull Request](https://docs.ultralytics.com/help/contributing/) — thank you! 🙏
<br>

## ::: ultralytics.utils.export.engine.torch2onnx

<br><br><hr><br>

## ::: ultralytics.utils.export.engine.onnx2engine

<br><br>