Linux
1. Requirements
- CUDA 12.x and cuDNN 9.x already installed (required by ONNX Runtime 1.23.2)
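One way to confirm the CUDA toolkit major version before running is to parse `nvcc`'s banner. A minimal sketch; the helper name `cuda_major_from_banner` is illustrative, and the banner format shown is an assumption about what your `nvcc --version` prints:

```shell
#!/bin/sh
# Illustrative check that the installed CUDA major version is 12, as
# expected by the ONNX Runtime 1.23.2 GPU build used here.
# nvcc's banner typically contains a line like:
#   "Cuda compilation tools, release 12.4, V12.4.131"
cuda_major_from_banner() {
    # Extract the major version ("12") from a "release 12.4" line.
    sed -n 's/.*release \([0-9]*\)\..*/\1/p'
}

# On a real machine you would feed nvcc's output:
#   nvcc --version | cuda_major_from_banner
# Demo with a sample banner (assumption: your nvcc prints this format).
echo "Cuda compilation tools, release 12.4, V12.4.131" | cuda_major_from_banner
```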
2. Inference
```
LD_LIBRARY_PATH=. ./asr --help
LD_LIBRARY_PATH=. ./asr openai
```
`libonnxruntime_providers_cuda.so` and `libonnxruntime_providers_shared.so` are only needed for CUDA inference.
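Since `libonnxruntime_providers_cuda.so` and `libonnxruntime_providers_shared.so` must be resolvable at runtime (here via `LD_LIBRARY_PATH=.`), a quick pre-flight check can save a confusing loader error. A minimal sketch; the `check_cuda_libs` helper is illustrative, not part of the project:

```shell
#!/bin/sh
# Check that the CUDA execution-provider libraries are present in a given
# directory before attempting CUDA inference (illustrative helper).
check_cuda_libs() {
    dir="$1"
    missing=0
    for lib in libonnxruntime_providers_cuda.so libonnxruntime_providers_shared.so; do
        if [ ! -f "$dir/$lib" ]; then
            echo "missing: $lib"
            missing=1
        fi
    done
    [ "$missing" -eq 0 ] && echo "CUDA provider libraries found"
}

# Demo against a temporary directory with stand-in files; on a real machine
# you would run: check_cuda_libs .
tmp=$(mktemp -d)
touch "$tmp/libonnxruntime_providers_cuda.so" "$tmp/libonnxruntime_providers_shared.so"
check_cuda_libs "$tmp"
rm -r "$tmp"
```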
3. Build on glibc >= 2.28
```
mkdir -p ~/lib/onnxruntime
cd ~/lib/onnxruntime
wget https://github.com/microsoft/onnxruntime/releases/download/v1.23.2/onnxruntime-linux-x64-gpu-1.23.2.tgz
tar -xzf onnxruntime-linux-x64-gpu-1.23.2.tgz
export ORT_STRATEGY=manual
export ORT_LIB_LOCATION=$HOME/lib/onnxruntime/onnxruntime-linux-x64-gpu-1.23.2/lib
export LD_LIBRARY_PATH=$ORT_LIB_LOCATION:$LD_LIBRARY_PATH
cargo clean
cargo build --release -p asr --features cuda
```
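Since this build path targets glibc >= 2.28, it can be worth checking the host's glibc before starting. A minimal sketch; the version extraction assumes the GNU `ldd (GNU libc) X.Y` banner format, and the version comparison relies on `sort -V`:

```shell
#!/bin/sh
# Report the host glibc version and compare it against the 2.28 minimum
# assumed by this build (version extraction assumes GNU ldd's banner).
required=2.28
have=$(ldd --version | head -n1 | awk '{print $NF}')
# sort -V orders version strings numerically; if the required version sorts
# first (or equal), the installed glibc is new enough.
lowest=$(printf '%s\n%s\n' "$required" "$have" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    echo "glibc $have: OK (>= $required)"
else
    echo "glibc $have: too old (< $required)"
fi
```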