[Roboflow](https://roboflow.com) Inference enables you to deploy computer vision models faster than ever.
With a `pip install inference` and `inference server start`, you can start a server to run a fine-tuned model on images, videos, and streams.
Inference supports running object detection, classification, instance segmentation, and foundation models (e.g. SAM, CLIP).
You can [train and deploy your own custom model](https://github.com/roboflow/notebooks) or use one of the 50,000+
[fine-tuned models shared by the Roboflow Universe community](https://universe.roboflow.com).
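Concretely, that quickstart is just two commands:

```bash
# Install the Python package, then start a local inference server.
pip install inference
inference server start
```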
<a href="https://inference.roboflow.com/quickstart/run_a_model/" class="button">Get started with our "Run your first model" guide</a>
<style>
.button {
background-color: var(--md-primary-fg-color);
display: block;
padding: 10px;
color: white !important;
border-radius: 5px;
text-align: center;
}
</style>
Here is an example of a model running on a video using Inference:
<video width="100%" autoplay loop muted>
<source src="https://media.roboflow.com/football-video.mp4" type="video/mp4">
</video>
## 💻 Features
Inference provides a scalable way to deploy and use computer vision models.
Inference is backed by:
- A server, so you don't have to reinvent the wheel when it comes to serving your model to disparate parts of your application.
- Standard APIs for computer vision tasks, so switching out the model weights and architecture can be done independently of your application code (see the request sketch after this list).
- Model architecture implementations, which implement the tensor parsing glue between images and predictions for supervised models that you've fine-tuned to perform custom tasks.
- A model registry, so your code can be independent from your model weights & you don't have to re-build and re-deploy every time you want to iterate on your model weights.
- Data management integrations, so you can collect more images of edge cases to improve your dataset and model as it encounters more data in the wild.
And more!
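To make the "standard APIs" point concrete, here is a sketch of a request against a locally running server. It assumes the default port 9001 and a hosted-API-compatible route of the form `/{project_id}/{model_version}`; substitute your own values, as in the CLI examples later on this page.

```bash
# A sketch, not a definitive reference: POST an image to a local
# Inference server, assuming the default port 9001 and a route of
# the form /{project_id}/{model_version}.
curl -X POST \
  "http://localhost:9001/{project_id}/{model_version}?api_key={api_key}&image={image_url}"
```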
### Install pip vs Docker:
- **pip**: Installs `inference` into your Python environment. Lightweight, good for Python-centric projects.
- **Docker**: Packages `inference` with its environment. Ensures consistency across setups; ideal for scalable deployments.
## 💻 install
### With ONNX CPU Runtime:
For CPU-powered inference:
```bash
pip install inference
```
or
```bash
pip install inference-cpu
```
### With ONNX GPU Runtime:
If you have an NVIDIA GPU, you can accelerate your inference with:
```bash
pip install inference-gpu
```
### Without ONNX Runtime:
Roboflow Inference uses ONNX Runtime as its core inference engine. ONNX Runtime provides an array of different [execution providers](https://onnxruntime.ai/docs/execution-providers/) that can optimize inference on different target devices. If you decide to install ONNX Runtime on your own, install Inference with:
```bash
pip install inference-core
```
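As a sketch of this bring-your-own-runtime path, you might pair `inference-core` with one of ONNX Runtime's published execution-provider builds (here `onnxruntime-openvino`, chosen only as an example; pick the build that matches your hardware):

```bash
# inference-core ships without ONNX Runtime; install the build that
# matches your target device (onnxruntime-openvino is one example).
pip install inference-core
pip install onnxruntime-openvino
```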
Alternatively, you can take advantage of some advanced execution providers using one of our published docker images.
### Extras:
Some functionality requires extra dependencies. These can be installed by specifying the desired extras during installation of Roboflow Inference, e.g. `pip install inference[extra]`.
| extra | description |
|:-------|:-------------------------------------------------|
| `clip` | Ability to use the core `CLIP` model (by OpenAI) |
| `gaze` | Ability to use the core `Gaze` model |
| `http` | Ability to run the http interface |
| `sam` | Ability to run the core `Segment Anything` model (by Meta AI) |
| `doctr` | Ability to use the core `doctr` model (by [Mindee](https://github.com/mindee/doctr)) |
**_Note:_** Both CLIP and Segment Anything require PyTorch to run. These are included in their respective dependencies; however, PyTorch installs can be highly environment dependent. See the [official PyTorch install page](https://pytorch.org/get-started/locally/) for instructions specific to your environment.
Example install with CLIP dependencies:
```bash
pip install inference[clip]
```
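Extras can also be combined in a single install. Quoting the requirement keeps shells like zsh from interpreting the square brackets:

```bash
# Install several extras at once; the quotes prevent zsh from
# treating the brackets as a glob pattern.
pip install "inference[clip,http]"
```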
## 🐋 docker
You can learn more about Roboflow Inference Docker Image build, pull and run in our [documentation](https://roboflow.github.io/inference/quickstart/docker/).
- Run on x86 CPU:
```bash
docker run --net=host roboflow/roboflow-inference-server-cpu:latest
```
- Run on NVIDIA GPU:
```bash
docker run --network=host --gpus=all roboflow/roboflow-inference-server-gpu:latest
```
<details close>
<summary>👉 more docker run options</summary>
- Run on arm64 CPU:
```bash
docker run -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu:latest
```
- Run on NVIDIA GPU with TensorRT Runtime:
```bash
docker run --network=host --gpus=all roboflow/roboflow-inference-server-trt:latest
```
- Run on NVIDIA Jetson with JetPack `4.x`:
```bash
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson:latest
```
- Run on NVIDIA Jetson with JetPack `5.x`:
```bash
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson-5.1.1:latest
```
</details>
<br/>
## 📟 CLI
To use the CLI, you will need Python 3.7 or higher. To check your Python version, run `python --version` in your terminal. To install Python, follow the instructions [here](https://www.python.org/downloads/).
Once Python is installed, install the PyPI package `inference-cli` or `inference`:
```bash
pip install inference-cli
```
From there you can run the inference server. See [Docker quickstart via CLI](./quickstart/docker.md/#via-cli) for more information.
```bash
inference server start
```
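As a quick sanity check that the server came up (assuming it is listening on the default port 9001):

```bash
# If the server is running, this should return a response from
# localhost:9001 rather than a connection error.
curl http://localhost:9001
```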
To use the CLI to make inferences, first [find your project ID and model version number in Roboflow](https://docs.roboflow.com/api-reference/workspace-and-project-ids).
See more detailed documentation on [HTTP Inference quickstart via CLI](./quickstart/http_inference.md/#via-cli).
```bash
inference infer {image_path} \
--project-id {project_id} \
--model-version {model_version} \
--api-key {api_key}
```
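For example, with a hypothetical project `soccer-players` at version `1` (placeholders only; substitute your own project ID, model version, and API key):

```bash
# Hypothetical values for illustration; use your own IDs and key.
inference infer ./image.jpg \
    --project-id soccer-players \
    --model-version 1 \
    --api-key $ROBOFLOW_API_KEY
```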
## Enterprise License
With a Roboflow Inference Enterprise License, you can access additional Inference features, including:
- Server cluster deployment
- Device management
- Active learning
- YOLOv5 and YOLOv8 model sub-license
To learn more, [contact the Roboflow team](https://roboflow.com/sales).
## More Roboflow Open Source Projects
|Project | Description|
|:---|:---|
|[supervision](https://roboflow.com/supervision) | General-purpose utilities for use in computer vision projects, from predictions filtering and display to object tracking to model evaluation. |
|[Autodistill](https://github.com/autodistill/autodistill) | Automatically label images for use in training computer vision models. |
|[Inference](https://github.com/roboflow/inference) (this project) | An easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models. |
|[Notebooks](https://roboflow.com/notebooks) | Tutorials for computer vision tasks, from training state-of-the-art models to tracking objects to counting objects in a zone. |
|[Collect](https://github.com/roboflow/roboflow-collect) | Automated, intelligent data collection powered by CLIP. |