# inference-client
<a href="https://universe.roboflow.com/roboflow-jvuqo/football-players-detection-3zvbc">
    <img src="https://app.roboflow.com/images/download-dataset-badge.svg" />
</a>
<a href="https://universe.roboflow.com/roboflow-jvuqo/football-players-detection-3zvbc/model/">
    <img src="https://app.roboflow.com/images/try-model-badge.svg" />
</a>
## 👋 hello
This repository contains examples of image and video inference via [Roboflow Inference (HTTP)](https://github.com/roboflow/inference) and stream inference via [Roboflow Inference (UDP)](https://github.com/roboflow/inference).
The HTTP examples take an image or video as input and run inference on it, while the UDP example listens for predictions broadcast over a UDP stream and processes them.
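Under the hood, the HTTP scripts send images to a running inference server. As a rough sketch (not the scripts' actual code), the request can be assembled as below, assuming the server listens on `localhost:9001` and exposes a hosted-API-style `/{dataset_id}/{version_id}` route; `build_inference_request` is a hypothetical helper, and you should check the inference server docs for the exact routes your version exposes:

```python
import base64


def build_inference_request(image_bytes: bytes, dataset_id: str, version_id: int,
                            api_key: str, confidence: float = 0.5,
                            host: str = "http://localhost:9001"):
    """Build the URL and base64-encoded body for a detection request.

    The `/{dataset_id}/{version_id}` route mirrors the hosted Roboflow API;
    this is a sketch of the request shape, not the client's actual code.
    """
    url = (f"{host}/{dataset_id}/{version_id}"
           f"?api_key={api_key}&confidence={confidence}")
    body = base64.b64encode(image_bytes).decode("utf-8")
    return url, body
```

You would then POST `body` to `url` with an HTTP client such as `requests` and parse the JSON predictions in the response.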
## 💻 install client environment
```bash
# clone repository and navigate to root directory
git clone https://github.com/roboflow/inference-client.git
cd inference-client
# setup python environment and activate it
python3 -m venv venv
source venv/bin/activate
# headless install
pip install -r requirements.txt
```
## 🐋 docker
You can learn more about building, pulling, and running the Roboflow Inference Docker images in our [documentation](https://roboflow.github.io/inference/quickstart/docker/).
### HTTP
- Run on x86 CPU:
```bash
docker run --net=host roboflow/roboflow-inference-server-cpu:latest
```
- Run on Nvidia GPU:
```bash
docker run --network=host --gpus=all roboflow/roboflow-inference-server-gpu:latest
```
### UDP
- Run on Nvidia GPU:
```bash
docker run --gpus=all --net=host -e STREAM_ID=0 -e MODEL_ID=<> -e API_KEY=<> roboflow/roboflow-inference-server-udp-gpu:latest
```
<details>
<summary>👉 more docker run options</summary>
### HTTP
- Run on arm64 CPU:
```bash
docker run -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu:latest
```
- Run on Nvidia GPU with TensorRT Runtime:
```bash
docker run --network=host --gpus=all roboflow/roboflow-inference-server-trt:latest
```
- Run on Nvidia Jetson with JetPack `4.x`:
```bash
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson:latest
```
- Run on Nvidia Jetson with JetPack `5.x`:
```bash
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson-5.1.1:latest
```
### UDP
Only one UDP container is supported at the moment. Use the UDP command shown above to set it up.
</details>
## 🔑 keys
Before running the inference script, ensure that the `API_KEY` is set as an environment variable. This key provides access to the inference API.
- For Unix/Linux:
```bash
export API_KEY=your_api_key_here
```
- For Windows (Command Prompt):
```cmd
set API_KEY=your_api_key_here
```
Replace `your_api_key_here` with your actual API key.
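To fail fast with a clear message when the key is missing, a script can guard on the environment variable before making any requests. A minimal sketch (`require_api_key` is a hypothetical helper, not part of this repository's scripts):

```python
import os


def require_api_key() -> str:
    """Return the Roboflow API key from the environment, or fail with a clear message."""
    key = os.environ.get("API_KEY")
    if not key:
        raise RuntimeError("API_KEY is not set; export it before running the scripts.")
    return key
```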
## 📷 image inference example (HTTP)
To run the image inference script:
```bash
python image.py \
--image_path data/a9f16c_8_9.png \
--class_list "ball" "goalkeeper" "player" "referee" \
--dataset_id "football-players-detection-3zvbc" \
--version_id 2 \
--confidence 0.5
```
## 🎬 video inference example (HTTP)
To run the video inference script:
```bash
python video.py \
--video_path "data/40cd38_5.mp4" \
--class_list "ball" "goalkeeper" "player" "referee" \
--dataset_id "football-players-detection-3zvbc" \
--version_id 2 \
--confidence 0.5
```
## 📺 stream inference example (UDP)
To start the UDP receiver:
```bash
python udp.py --port=12345
```
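The core of a UDP receiver like this can be sketched as follows, assuming each datagram carries a single UTF-8 JSON document of predictions (`receive_predictions` is a hypothetical illustration, not the actual `udp.py` code):

```python
import json
import socket


def receive_predictions(port: int, max_messages: int = 1, timeout: float = 5.0):
    """Listen on `port` for JSON-encoded prediction datagrams and yield parsed dicts.

    Assumes each datagram is one UTF-8 JSON document; adjust to match how your
    UDP inference container actually publishes predictions.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    sock.settimeout(timeout)
    try:
        for _ in range(max_messages):
            data, _addr = sock.recvfrom(65535)  # generous buffer for one datagram
            yield json.loads(data.decode("utf-8"))
    finally:
        sock.close()
```

Each yielded dict can then be passed to whatever post-processing or visualization you need.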