Sankie005 committed on
Commit 067aea7 · 1 Parent(s): 393a1e5

Update README.md

Files changed (1)
  1. README.md +9 -388
README.md CHANGED
@@ -1,388 +1,9 @@
- ![Roboflow Inference banner](https://github.com/roboflow/inference/blob/main/banner.png?raw=true)
-
- ## 🎬 pip install inference
-
- [Roboflow](https://roboflow.com) Inference is the easiest way to use and deploy computer vision models.
- Inference supports running object detection, classification, instance segmentation, and even foundation models (like CLIP and SAM).
- You can [train and deploy your own custom model](https://github.com/roboflow/notebooks) or use one of the 50,000+
- [fine-tuned models shared by the community](https://universe.roboflow.com).
-
- There are three primary `inference` interfaces:
- * A Python-native package (`pip install inference`)
- * A self-hosted inference server (`inference server start`)
- * A [fully-managed, auto-scaling API](https://docs.roboflow.com).
-
- ## 🏃 Getting Started
-
- Get up and running with `inference` on your local machine in 3 minutes.
-
- ```sh
- pip install inference # or inference-gpu if you have CUDA
- ```
-
- Set up [your Roboflow Private API Key](https://app.roboflow.com/settings/api)
- by exporting a `ROBOFLOW_API_KEY` environment variable or
- adding it to a `.env` file.
-
- ```sh
- export ROBOFLOW_API_KEY=your_key_here
- ```
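-
- Or, equivalently, put the key in a `.env` file in the directory you run your code from (a minimal sketch; `your_key_here` is a placeholder, not a real key):
-
- ```sh
- # .env
- ROBOFLOW_API_KEY=your_key_here
- ```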
-
- Run [an open-source Rock, Paper, Scissors model](https://universe.roboflow.com/roboflow-58fyf/rock-paper-scissors-sxsw)
- on your webcam stream:
-
- ```python
- import inference
-
- inference.Stream(
-     source="webcam", # or rtsp stream or camera id
-     model="rock-paper-scissors-sxsw/11", # from Universe
-
-     on_prediction=lambda predictions, image: (
-         print(predictions) # now hold up your hand: 🪨 📄 ✂️
-     )
- )
- ```
-
- > [!NOTE]
- > Currently, the stream interface only supports object detection
-
- Now let's extend the example to use [Supervision](https://roboflow.com/supervision)
- to visualize the predictions and display them on screen with OpenCV:
-
- ```python
- import cv2
- import inference
- import supervision as sv
-
- annotator = sv.BoxAnnotator()
-
- inference.Stream(
-     source="webcam", # or rtsp stream or camera id
-     model="rock-paper-scissors-sxsw/11", # from Universe
-
-     output_channel_order="BGR",
-     use_main_thread=True, # for opencv display
-
-     on_prediction=lambda predictions, image: (
-         print(predictions), # now hold up your hand: 🪨 📄 ✂️
-
-         cv2.imshow(
-             "Prediction",
-             annotator.annotate(
-                 scene=image,
-                 detections=sv.Detections.from_roboflow(predictions)
-             )
-         ),
-         cv2.waitKey(1)
-     )
- )
-
- ```
-
- ## 👩‍🏫 More Examples
-
- The [`/examples`](https://github.com/roboflow/inference/tree/main/examples/) directory contains code samples for working with and extending `inference`, including using foundation models like CLIP, HTTP and UDP clients, and an insights dashboard, along with community examples (PRs welcome)!
-
- ## 🎥 Inference in action
-
- Check out Inference running on a video of a football game:
-
- https://github.com/roboflow/inference/assets/37276661/121ab5f4-5970-4e78-8052-4b40f2eec173
-
- ## 💻 Why Inference?
-
- Inference provides a scalable way to manage inference for your vision projects.
-
- Inference is composed of:
-
- - Thousands of [pre-trained community models](https://universe.roboflow.com) that you can use as a starting point.
-
- - Foundation models like CLIP, SAM, and OCR.
-
- - A tight integration with [Supervision](https://roboflow.com/supervision).
-
- - An HTTP server, so you don’t have to reimplement things like image processing and prediction visualization on every project, you can scale your GPU infrastructure independently of your application code, and you can access your model from whatever language your app is written in.
-
- - Standardized APIs for computer vision tasks, so switching out the model weights and architecture can be done independently of your application code.
-
- - A model registry, so your code can be independent of your model weights & you don't have to re-build and re-deploy every time you want to iterate on them.
-
- - Active Learning integrations, so the more your model sees in the wild, the more edge-case images you can collect to improve your dataset & model.
-
- - Seamless interoperability with [Roboflow](https://roboflow.com) for creating datasets, training & deploying custom models.
-
- And more!
-
- ### 📌 Use the Inference Server
-
- You can learn more about how to build, pull, and run the Roboflow Inference Docker images in our [documentation](https://inference.roboflow.com/quickstart/docker/).
-
- - Run on x86 CPU:
-
- ```bash
- docker run --net=host roboflow/roboflow-inference-server-cpu:latest
- ```
-
- - Run on NVIDIA GPU:
-
- ```bash
- docker run --network=host --gpus=all roboflow/roboflow-inference-server-gpu:latest
- ```
-
- <details close>
- <summary>👉 more docker run options</summary>
-
- - Run on arm64 CPU:
-
- ```bash
- docker run -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu:latest
- ```
-
- - Run on NVIDIA GPU with TensorRT Runtime:
-
- ```bash
- docker run --network=host --gpus=all roboflow/roboflow-inference-server-trt:latest
- ```
-
- - Run on NVIDIA Jetson with JetPack `4.x`:
-
- ```bash
- docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson:latest
- ```
-
- - Run on NVIDIA Jetson with JetPack `5.x`:
-
- ```bash
- docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson-5.1.1:latest
- ```
-
- </details>
-
- ### Extras
-
- Some functionality requires extra dependencies. These can be installed by specifying the desired extras during installation of Roboflow Inference.
-
- | extra | description |
- |:-------|:-------------------------------------------------|
- | `clip` | Ability to use the core `CLIP` model (by OpenAI) |
- | `gaze` | Ability to use the core `Gaze` model |
- | `http` | Ability to run the http interface |
- | `sam` | Ability to run the core `Segment Anything` model (by Meta AI) |
-
- **_Note:_** Both CLIP and Segment Anything require PyTorch to run. PyTorch is included in their respective extras; however, PyTorch installs can be highly environment dependent. See the [official PyTorch install page](https://pytorch.org/get-started/locally/) for instructions specific to your environment.
-
- Example install with CLIP dependencies:
-
- ```bash
- pip install "inference[clip]"
- ```
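-
- Multiple extras can be combined in a single install using standard pip extras syntax, for example:
-
- ```bash
- pip install "inference[clip,sam]"
- ```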
-
- ## Inference Client
-
- To consume predictions from the inference server in Python, you can
- use the `inference-sdk` package.
-
- ```bash
- pip install inference-sdk
- ```
-
- ```python
- from inference_sdk import InferenceHTTPClient
-
- image_url = "https://media.roboflow.com/inference/soccer.jpg"
-
- # Replace ROBOFLOW_API_KEY with your Roboflow API Key
- client = InferenceHTTPClient(
-     api_url="http://localhost:9001", # or https://detect.roboflow.com for Hosted API
-     api_key="ROBOFLOW_API_KEY"
- )
- with client.use_model("soccer-players-5fuqs/1"):
-     predictions = client.infer(image_url)
-
- print(predictions)
- ```
-
- Visit our [documentation](https://inference.roboflow.com/) to discover the capabilities of the `inference-sdk` library.
-
- ## Single Image Inference
-
- After installing `inference` via pip, you can run a simple inference
- on a single image (vs. the video stream example above) by instantiating
- a `model` and using the `infer` method (don't forget to set up your
- `ROBOFLOW_API_KEY` environment variable or `.env` file):
-
- ```python
- from inference.models.utils import get_roboflow_model
-
- model = get_roboflow_model(
-     model_id="soccer-players-5fuqs/1"
- )
-
- # you can also infer on local images by passing a file path,
- # a PIL image, or a numpy array
- results = model.infer(
-     image="https://media.roboflow.com/inference/soccer.jpg",
-     confidence=0.5,
-     iou_threshold=0.5
- )
-
- print(results)
- ```
-
- ## Getting CLIP Embeddings
-
- You can run inference with [OpenAI's CLIP model](https://blog.roboflow.com/openai-clip) using:
-
- ```python
- from inference.models import Clip
-
- image_url = "https://media.roboflow.com/inference/soccer.jpg"
-
- model = Clip()
- embeddings = model.embed_image(image_url)
-
- print(embeddings)
- ```
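-
- Because CLIP embeds images and text into the same space, you can also score how well a caption describes an image. Below is a minimal sketch; it assumes the `Clip` model exposes an `embed_text` method returning an array-like vector (check the documentation for the exact signature):
-
- ```python
- import numpy as np
- from inference.models import Clip
-
- model = Clip()
-
- # embed an image and a candidate caption (embed_text is an assumption here)
- image_vector = np.asarray(model.embed_image("https://media.roboflow.com/inference/soccer.jpg")).flatten()
- text_vector = np.asarray(model.embed_text("a soccer match")).flatten()
-
- # cosine similarity: values closer to 1 mean the caption matches the image better
- similarity = np.dot(image_vector, text_vector) / (np.linalg.norm(image_vector) * np.linalg.norm(text_vector))
- print(similarity)
- ```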
-
- ## Using SAM
-
- You can run inference with [Meta's Segment Anything model](https://blog.roboflow.com/segment-anything-breakdown/) using:
-
- ```python
- from inference.models import SegmentAnything
-
- image_url = "https://media.roboflow.com/inference/soccer.jpg"
-
- model = SegmentAnything()
- embeddings = model.embed_image(image_url)
-
- print(embeddings)
- ```
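-
- SAM can also produce segmentation masks, not just embeddings. The sketch below is an assumption about the interface: it presumes a `segment_image` method exists on the model; consult the documentation for the exact method name and return format.
-
- ```python
- from inference.models import SegmentAnything
-
- model = SegmentAnything()
-
- # segment_image and its return format are assumptions; check the docs
- masks = model.segment_image("https://media.roboflow.com/inference/soccer.jpg")
-
- print(masks)
- ```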
-
- ## 🏗️ inference Process
-
- To standardize the inference process throughout all our models, Roboflow Inference has a structure for processing inference requests. The specifics can be found on each model's respective page, but overall it works like this for most models:
-
- <img width="900" alt="inference structure" src="https://github.com/stellasphere/inference/assets/29011058/abf69717-f852-4655-9e6e-dae19fc263dc">
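-
- In code, the flow in the diagram can be pictured roughly as below. This is an illustrative sketch only; the stage names are assumptions used to describe the diagram, not the exact internal API:
-
- ```python
- # illustrative only: a rough picture of how a request moves through a model
- def run_inference(model, image, **kwargs):
-     inputs = model.preprocess(image, **kwargs)       # e.g. resize, normalize, batch
-     raw_output = model.predict(inputs)               # forward pass through the model weights
-     return model.postprocess(raw_output, **kwargs)   # e.g. NMS, thresholding, response formatting
- ```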
-
- ## ✅ Supported Models
-
- ### Load from Roboflow
-
- You can use models hosted on Roboflow with the following architectures through Inference:
-
- - YOLOv5 Object Detection
- - YOLOv5 Instance Segmentation
- - YOLOv8 Object Detection
- - YOLOv8 Classification
- - YOLOv8 Segmentation
- - YOLACT Segmentation
- - ViT Classification
-
- ### Core Models
-
- Core Models are foundation models and models that have not been fine-tuned on a specific dataset.
-
- The following core models are supported:
-
- 1. CLIP
- 2. L2CS (Gaze Detection)
- 3. Segment Anything (SAM)
-
- ## 📝 License
-
- The Roboflow Inference code is distributed under an [Apache 2.0 license](https://github.com/roboflow/inference/blob/master/LICENSE.md). The models supported by Roboflow Inference have their own licenses. View the licenses for supported models below.
-
- | model | license |
- | :------------------------ | :---: |
- | `inference/models/clip` | [MIT](https://github.com/openai/CLIP/blob/main/LICENSE) |
- | `inference/models/gaze` | [MIT](https://github.com/Ahmednull/L2CS-Net/blob/main/LICENSE), [Apache 2.0](https://github.com/google/mediapipe/blob/master/LICENSE) |
- | `inference/models/sam` | [Apache 2.0](https://github.com/facebookresearch/segment-anything/blob/main/LICENSE) |
- | `inference/models/vit` | [Apache 2.0](https://github.com/roboflow/inference/main/inference/models/vit/LICENSE) |
- | `inference/models/yolact` | [MIT](https://github.com/dbolya/yolact/blob/master/README.md) |
- | `inference/models/yolov5` | [AGPL-3.0](https://github.com/ultralytics/yolov5/blob/master/LICENSE) |
- | `inference/models/yolov7` | [GPL-3.0](https://github.com/WongKinYiu/yolov7/blob/main/README.md) |
- | `inference/models/yolov8` | [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/master/LICENSE) |
-
- ## Inference CLI
- We've created a CLI tool with useful commands to make the `inference` usage easier. Check out [docs](./inference_cli/README.md).
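-
- For example, the self-hosted server interface mentioned at the top of this README can be managed from the command line. A minimal sketch (assuming the CLI is installed as the `inference-cli` package; see the linked docs for the full command set):
-
- ```bash
- # install the CLI
- pip install inference-cli
-
- # start a local inference server
- inference server start
- ```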
-
- ## 🚀 Enterprise
-
- With a Roboflow Inference Enterprise License, you can access additional Inference features, including:
-
- - Server cluster deployment
- - Device management
- - Active learning
- - YOLOv5 and YOLOv8 commercial license
-
- To learn more, [contact the Roboflow team](https://roboflow.com/sales).
-
- ## 📚 documentation
-
- Visit our [documentation](https://inference.roboflow.com) for usage examples and reference for Roboflow Inference.
-
- ## 🏆 contribution
-
- We would love your input to improve Roboflow Inference! Please see our [contributing guide](https://github.com/roboflow/inference/blob/master/CONTRIBUTING.md) to get started. Thank you to all of our contributors! 🙏
-
- ## 💻 explore more Roboflow open source projects
-
- | Project | Description |
- | :--- | :--- |
- | [supervision](https://roboflow.com/supervision) | General-purpose utilities for use in computer vision projects, from predictions filtering and display to object tracking to model evaluation. |
- | [Autodistill](https://github.com/autodistill/autodistill) | Automatically label images for use in training computer vision models. |
- | [Inference](https://github.com/roboflow/inference) (this project) | An easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models. |
- | [Notebooks](https://roboflow.com/notebooks) | Tutorials for computer vision tasks, from training state-of-the-art models to tracking objects to counting objects in a zone. |
- | [Collect](https://github.com/roboflow/roboflow-collect) | Automated, intelligent data collection powered by CLIP. |
-
- <br>
-
- <div align="center">
-
- <div align="center">
- <a href="https://youtube.com/roboflow">
- <img
- src="https://media.roboflow.com/notebooks/template/icons/purple/youtube.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949634652"
- width="3%"
- />
- </a>
- <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
- <a href="https://roboflow.com">
- <img
- src="https://media.roboflow.com/notebooks/template/icons/purple/roboflow-app.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949746649"
- width="3%"
- />
- </a>
- <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
- <a href="https://www.linkedin.com/company/roboflow-ai/">
- <img
- src="https://media.roboflow.com/notebooks/template/icons/purple/linkedin.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633691"
- width="3%"
- />
- </a>
- <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
- <a href="https://docs.roboflow.com">
- <img
- src="https://media.roboflow.com/notebooks/template/icons/purple/knowledge.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949634511"
- width="3%"
- />
- </a>
- <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
- <a href="https://discuss.roboflow.com">
- <img
- src="https://media.roboflow.com/notebooks/template/icons/purple/forum.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633584"
- width="3%"
- />
- </a>
- <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%"/>
- <a href="https://blog.roboflow.com">
- <img
- src="https://media.roboflow.com/notebooks/template/icons/purple/blog.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633605"
- width="3%"
- />
- </a>
- </div>
-
- </div>
 
+ ---
+ title: DOCKER ML
+ emoji: 💻
+ colorFrom: purple
+ colorTo: gray
+ sdk: docker
+ pinned: false
+ license: apache-2.0
+ ---