Sankie005 committed
Commit 393a1e5 · 1 Parent(s): 7430fa0

Upload 11 files

Files changed (11)
  1. .actrc +1 -0
  2. .dockerignore +34 -0
  3. .gitignore +155 -0
  4. CITATION.cff +18 -0
  5. CONTRIBUTING.md +69 -0
  6. LICENSE +3 -0
  7. LICENSE.core +201 -0
  8. Makefile +42 -0
  9. README.md +388 -11
  10. banner.png +0 -0
  11. mkdocs.yml +103 -0
.actrc ADDED
@@ -0,0 +1 @@
+ -e .github/act_event.json
.dockerignore ADDED
@@ -0,0 +1,34 @@
+ inference/landing/*
+ !inference/landing/out/
+
+ # NodeJS (landing page)
+ **/node_modules/
+ **/dist
+ .git
+ npm-debug.log
+ .coverage
+ .coverage.*
+ .env
+ .aws
+
+ # Python (inference)
+ __pycache__
+ *.pyc
+ *.pyo
+ *.pyd
+ .Python
+ env
+ pip-log.txt
+ pip-delete-this-directory.txt
+ .tox
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.log
+ .git
+ .mypy_cache
+ .pytest_cache
+ .hypothesis
.gitignore ADDED
@@ -0,0 +1,155 @@
+ # IDE
+ .idea/
+ .vscode/
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ pip-wheel-metadata/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+ _classes.csv
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ deps/
+ profile/
+ *.onnx
+ *.jpg
+ *.pstats
+ *.jpeg
+ *.png
+ !icon.png
+ *.mp4
+ *.roboflow.txt
+ dev_tools/
+ *.tmp
+ ASL-*
+ README.dataset*
+ _annotations*
+
+ export_api_keys.sh
+ inference_cli/version.py
+ inference_sdk/version.py
+
+ **/.DS_Store
CITATION.cff ADDED
@@ -0,0 +1,18 @@
+ cff-version: 1.2.0
+ title: Roboflow Inference Server
+ message: >-
+   If you use this software, please cite it using the
+   metadata from this file.
+ type: software
+ authors:
+   - given-names: Roboflow
+     email: support@roboflow.com
+ repository-code: 'https://github.com/roboflow/inference-server'
+ url: 'https://roboflow.com'
+ abstract: >-
+   An opinionated, easy-to-use inference server for use with
+   computer vision models.
+ keywords:
+   - computer vision
+   - inference
+ license: Apache-2.0
CONTRIBUTING.md ADDED
@@ -0,0 +1,69 @@
+ # Contributing to the Roboflow Inference Server 🛠️
+
+ Thank you for your interest in contributing to the Roboflow Inference Server!
+
+ We welcome any contributions to help us improve the quality of `inference-server` and expand the range of supported models.
+
+ ## Contribution Guidelines
+
+ We welcome contributions to:
+
+ 1. Add support for running inference on a new model.
+ 2. Report bugs and issues in the project.
+ 3. Submit a request for a new task or feature.
+ 4. Improve our test coverage.
+
+ ### Contributing Features
+
+ The Inference Server provides a standard interface through which you can work with computer vision models. With the Inference Server, you can use state-of-the-art models with your own weights without having to spend time installing dependencies, configuring environments, and writing inference code.
+
+ We welcome contributions that add support for new models to the project. Before you begin, please make sure that another contributor has not already begun work on the model you want to add. You can check the [project README](https://github.com/roboflow/inference-server/blob/main/README.md) for our roadmap on adding more models.
+
+ You will need to add documentation for your model and link to it from the `inference-server` README. You can add a new page to the `docs/models` directory that describes your model and how to use it. You can use the existing model documentation as a guide for how to structure your documentation.
+
+ ## How to Contribute Changes
+
+ First, fork this repository to your own GitHub account. Create a new branch that describes your changes (e.g., `line-counter-docs`). Push your changes to the branch on your fork and then submit a pull request to this repository.
+
+ When creating new functions, please ensure you have the following:
+
+ 1. Docstrings for the function and all parameters.
+ 2. Examples in the documentation for the function.
+ 3. An entry in our docs to autogenerate the documentation for the function.
+
+ All pull requests will be reviewed by the maintainers of the project. We will provide feedback and ask for changes if necessary.
+
+ PRs must pass all tests and linting requirements before they can be merged.
+
+ ## 🧹 Code quality
+
+ We provide two handy commands inside the `Makefile`, namely:
+
+ - `make style` to format the code
+ - `make check_code_quality` to check code quality (PEP 8, basically)
+
+ ## 🧪 Tests
+
+ [`pytest`](https://docs.pytest.org/en/7.1.x/) is used to run our tests.
+
+ ## 📚 Documentation
+
+ Roboflow Inference uses mkdocs and mike to offer versioned documentation. The project documentation is hosted on [GitHub Pages](https://inference.roboflow.com).
+
+ To build the Inference documentation, first install the project development dependencies:
+
+ ```bash
+ pip install -r requirements/requirements.docs.txt
+ ```
+
+ To run the latest version of the documentation, run:
+
+ ```bash
+ mike serve
+ ```
+
+ Before a new release is published, a new version of the documentation should be built. To create a new version, run:
+
+ ```bash
+ mike deploy <version-number>
+ ```
LICENSE ADDED
@@ -0,0 +1,3 @@
+ LICENSE.core (Apache 2.0) applies to all files in this repository
+ except for files in or under any directory that contains a superseding
+ license file (such as the models located in `inference/models/`).
LICENSE.core ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright 2023, Roboflow
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
Makefile ADDED
@@ -0,0 +1,42 @@
+ .PHONY: style check_code_quality
+
+ export PYTHONPATH = .
+ check_dirs := inference inference_sdk
+
+ style:
+ 	black $(check_dirs)
+ 	isort --profile black $(check_dirs)
+
+ check_code_quality:
+ 	black --check $(check_dirs)
+ 	isort --check-only --profile black $(check_dirs)
+ 	# stop the build if there are Python syntax errors or undefined names
+ 	flake8 $(check_dirs) --count --select=E9,F63,F7,F82 --show-source --statistics
+ 	# exit-zero treats all errors as warnings. E203 for black, E501 for docstrings, W503 for line breaks before logical operators
+ 	flake8 $(check_dirs) --count --max-line-length=88 --exit-zero --ignore=D --extend-ignore=E203,E501,W503 --statistics
+
+ start_test_docker_cpu:
+ 	docker run -d --rm -p $(PORT):$(PORT) -e PORT=$(PORT) -e MAX_BATCH_SIZE=17 --name inference-test roboflow/${INFERENCE_SERVER_REPO}:test
+
+ start_test_docker_gpu:
+ 	docker run -d --rm -p $(PORT):$(PORT) -e PORT=$(PORT) -e MAX_BATCH_SIZE=17 --gpus=all --name inference-test roboflow/${INFERENCE_SERVER_REPO}:test
+
+ start_test_docker_jetson:
+ 	docker run -d --rm -p $(PORT):$(PORT) -e PORT=$(PORT) -e MAX_ACTIVE_MODELS=1 -e MAX_BATCH_SIZE=17 --runtime=nvidia --name inference-test roboflow/${INFERENCE_SERVER_REPO}:test
+
+ stop_test_docker:
+ 	docker rm -f inference-test
+
+ create_wheels:
+ 	python -m pip install --upgrade pip
+ 	python -m pip install wheel twine requests -r requirements/_requirements.txt
+ 	rm -f dist/*
+ 	python .release/pypi/inference.core.setup.py bdist_wheel
+ 	python .release/pypi/inference.cpu.setup.py bdist_wheel
+ 	python .release/pypi/inference.gpu.setup.py bdist_wheel
+ 	python .release/pypi/inference.setup.py bdist_wheel
+ 	python .release/pypi/inference.sdk.setup.py bdist_wheel
+ 	python .release/pypi/inference.cli.setup.py bdist_wheel
+
+ upload_wheels:
+ 	twine upload dist/*.whl
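The `start_test_docker_*` targets above expect `PORT` and `INFERENCE_SERVER_REPO` to be provided by the caller; they are interpolated into the `docker run` command and image tag. A minimal sketch of how those variables compose (values here are illustrative assumptions, not part of this commit):

```shell
# Hypothetical example of the variables the test-server targets consume.
# The Makefile interpolates PORT into -p/-e flags and
# INFERENCE_SERVER_REPO into the image name, tagged :test.
PORT=9001
INFERENCE_SERVER_REPO=roboflow-inference-server-cpu
IMAGE="roboflow/${INFERENCE_SERVER_REPO}:test"
echo "make start_test_docker_cpu PORT=${PORT} would run image ${IMAGE}"
```

In practice you would invoke the target directly, e.g. `make start_test_docker_cpu PORT=9001 INFERENCE_SERVER_REPO=roboflow-inference-server-cpu`.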
README.md CHANGED
@@ -1,11 +1,388 @@
- ---
- title: Docker Ml
- emoji: 🐢
- colorFrom: indigo
- colorTo: gray
- sdk: docker
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ![Roboflow Inference banner](https://github.com/roboflow/inference/blob/main/banner.png?raw=true)
+
+ ## 🎬 pip install inference
+
+ [Roboflow](https://roboflow.com) Inference is the easiest way to use and deploy computer vision models.
+ Inference supports running object detection, classification, instance segmentation, and even foundation models (like CLIP and SAM).
+ You can [train and deploy your own custom model](https://github.com/roboflow/notebooks) or use one of the 50,000+
+ [fine-tuned models shared by the community](https://universe.roboflow.com).
+
+ There are three primary `inference` interfaces:
+ * A Python-native package (`pip install inference`)
+ * A self-hosted inference server (`inference server start`)
+ * A [fully-managed, auto-scaling API](https://docs.roboflow.com)
+
+ ## 🏃 Getting Started
+
+ Get up and running with `inference` on your local machine in 3 minutes.
+
+ ```sh
+ pip install inference # or inference-gpu if you have CUDA
+ ```
+
+ Set up [your Roboflow Private API Key](https://app.roboflow.com/settings/api)
+ by exporting a `ROBOFLOW_API_KEY` environment variable or
+ adding it to a `.env` file.
+
+ ```sh
+ export ROBOFLOW_API_KEY=your_key_here
+ ```
+
+ Run [an open-source Rock, Paper, Scissors model](https://universe.roboflow.com/roboflow-58fyf/rock-paper-scissors-sxsw)
+ on your webcam stream:
+
+ ```python
+ import inference
+
+ inference.Stream(
+     source="webcam", # or rtsp stream or camera id
+     model="rock-paper-scissors-sxsw/11", # from Universe
+
+     on_prediction=lambda predictions, image: (
+         print(predictions) # now hold up your hand: 🪨 📄 ✂️
+     )
+ )
+ ```
+
+ > [!NOTE]
+ > Currently, the stream interface only supports object detection.
+
+ Now let's extend the example to use [Supervision](https://roboflow.com/supervision)
+ to visualize the predictions and display them on screen with OpenCV:
+
+ ```python
+ import cv2
+ import inference
+ import supervision as sv
+
+ annotator = sv.BoxAnnotator()
+
+ inference.Stream(
+     source="webcam", # or rtsp stream or camera id
+     model="rock-paper-scissors-sxsw/11", # from Universe
+
+     output_channel_order="BGR",
+     use_main_thread=True, # for opencv display
+
+     on_prediction=lambda predictions, image: (
+         print(predictions), # now hold up your hand: 🪨 📄 ✂️
+
+         cv2.imshow(
+             "Prediction",
+             annotator.annotate(
+                 scene=image,
+                 detections=sv.Detections.from_roboflow(predictions)
+             )
+         ),
+         cv2.waitKey(1)
+     )
+ )
+ ```
+
+ ## 👩‍🏫 More Examples
+
+ The [`/examples`](https://github.com/roboflow/inference/tree/main/examples/) directory contains code samples for working with and extending `inference`, including using foundation models like CLIP, HTTP and UDP clients, and an insights dashboard, along with community examples (PRs welcome)!
+
+ ## 🎥 Inference in action
+
+ Check out Inference running on a video of a football game:
+
+ https://github.com/roboflow/inference/assets/37276661/121ab5f4-5970-4e78-8052-4b40f2eec173
+
+ ## 💻 Why Inference?
+
+ Inference provides a scalable way to manage inference for your vision projects.
+
+ Inference is composed of:
+
+ - Thousands of [pre-trained community models](https://universe.roboflow.com) that you can use as a starting point.
+
+ - Foundation models like CLIP, SAM, and OCR.
+
+ - A tight integration with [Supervision](https://roboflow.com/supervision).
+
+ - An HTTP server, so you don't have to reimplement things like image processing and prediction visualization on every project, you can scale your GPU infrastructure independently of your application code, and you can access your model from whatever language your app is written in.
+
+ - Standardized APIs for computer vision tasks, so switching out the model weights and architecture can be done independently of your application code.
+
+ - A model registry, so your code can be independent from your model weights & you don't have to re-build and re-deploy every time you want to iterate on your model weights.
+
+ - Active Learning integrations, so you can collect more images of edge cases to improve your dataset & model the more it sees in the wild.
+
+ - Seamless interoperability with [Roboflow](https://roboflow.com) for creating datasets, training & deploying custom models.
+
+ And more!
+
+ ### 📌 Use the Inference Server
+
+ You can learn more about how to build, pull, and run the Roboflow Inference Docker images in our [documentation](https://inference.roboflow.com/quickstart/docker/).
+
+ - Run on x86 CPU:
+
+ ```bash
+ docker run --net=host roboflow/roboflow-inference-server-cpu:latest
+ ```
+
+ - Run on NVIDIA GPU:
+
+ ```bash
+ docker run --network=host --gpus=all roboflow/roboflow-inference-server-gpu:latest
+ ```
+
+ <details close>
+ <summary>👉 more docker run options</summary>
+
+ - Run on arm64 CPU:
+
+ ```bash
+ docker run -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu:latest
+ ```
+
+ - Run on NVIDIA GPU with TensorRT Runtime:
+
+ ```bash
+ docker run --network=host --gpus=all roboflow/roboflow-inference-server-trt:latest
+ ```
+
+ - Run on NVIDIA Jetson with JetPack `4.x`:
+
+ ```bash
+ docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson:latest
+ ```
+
+ - Run on NVIDIA Jetson with JetPack `5.x`:
+
+ ```bash
+ docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson-5.1.1:latest
+ ```
+
+ </details>
+
+ ### Extras
+
+ Some functionality requires extra dependencies. These can be installed by specifying the desired extras during installation of Roboflow Inference.
+
+ | extra | description |
+ |:-------|:-------------------------------------------------|
+ | `clip` | Ability to use the core `CLIP` model (by OpenAI) |
+ | `gaze` | Ability to use the core `Gaze` model |
+ | `http` | Ability to run the http interface |
+ | `sam` | Ability to run the core `Segment Anything` model (by Meta AI) |
+
+ **_Note:_** Both CLIP and Segment Anything require PyTorch to run. These are included in their respective dependencies; however, PyTorch installs can be highly environment dependent. See the [official PyTorch install page](https://pytorch.org/get-started/locally/) for instructions specific to your environment.
+
+ Example install with CLIP dependencies:
+
+ ```bash
+ pip install "inference[clip]"
+ ```
+
+ ## Inference Client
+
+ To consume predictions from an inference server in Python, you can
+ use the `inference-sdk` package.
+
+ ```bash
+ pip install inference-sdk
+ ```
+
+ ```python
+ from inference_sdk import InferenceHTTPClient
+
+ image_url = "https://media.roboflow.com/inference/soccer.jpg"
+
+ # Replace ROBOFLOW_API_KEY with your Roboflow API Key
+ client = InferenceHTTPClient(
+     api_url="http://localhost:9001", # or https://detect.roboflow.com for Hosted API
+     api_key="ROBOFLOW_API_KEY"
+ )
+ with client.use_model("soccer-players-5fuqs/1"):
+     predictions = client.infer(image_url)
+
+ print(predictions)
+ ```
+
+ Visit our [documentation](https://inference.roboflow.com/) to discover the capabilities of the `inference-sdk` library.
+
+ ## Single Image Inference
+
+ After installing `inference` via pip, you can run a simple inference
+ on a single image (vs the video stream example above) by instantiating
+ a `model` and using the `infer` method (don't forget to set up your
+ `ROBOFLOW_API_KEY` environment variable or `.env` file):
+
+ ```python
+ from inference.models.utils import get_roboflow_model
+
+ model = get_roboflow_model(
+     model_id="soccer-players-5fuqs/1"
+ )
+
+ # you can also infer on local images by passing a file path,
+ # a PIL image, or a numpy array
+ results = model.infer(
+     image="https://media.roboflow.com/inference/soccer.jpg",
+     confidence=0.5,
+     iou_threshold=0.5
+ )
+
+ print(results)
+ ```
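Detection results in Roboflow's response format carry a `predictions` list whose items include `class` and `confidence` fields, which makes post-processing straightforward. A minimal, self-contained sketch of filtering such a payload by confidence (the sample payload below is made up for illustration; real values come from the inference call):

```python
# Hypothetical payload in the shape of a Roboflow detection response;
# in practice this would come from model.infer(...) or the HTTP client.
payload = {
    "predictions": [
        {"class": "player", "confidence": 0.91, "x": 100, "y": 50, "width": 40, "height": 80},
        {"class": "ball", "confidence": 0.42, "x": 300, "y": 200, "width": 20, "height": 20},
    ]
}

def filter_predictions(payload, min_confidence):
    # Keep only detections at or above the confidence threshold.
    return [p for p in payload["predictions"] if p["confidence"] >= min_confidence]

kept = filter_predictions(payload, 0.5)
print([p["class"] for p in kept])  # only the high-confidence detection survives
```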

## Getting CLIP Embeddings

You can run inference with [OpenAI's CLIP model](https://blog.roboflow.com/openai-clip) using:

```python
from inference.models import Clip

image_url = "https://media.roboflow.com/inference/soccer.jpg"

model = Clip()
embeddings = model.embed_image(image_url)

print(embeddings)
```
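Image embeddings like these are typically compared with cosine similarity (e.g. for search or deduplication). A small NumPy sketch; the vectors below are stand-ins for real `embed_image` outputs:

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    a, b = np.ravel(a), np.ravel(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors; in practice these would come from model.embed_image(...)
emb_a = np.array([0.1, 0.3, 0.5])
emb_b = np.array([0.2, 0.1, 0.4])
print(round(cosine_similarity(emb_a, emb_b), 4))  # → 0.9221
```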

## Using SAM

You can run inference with [Meta's Segment Anything model](https://blog.roboflow.com/segment-anything-breakdown/) using:

```python
from inference.models import SegmentAnything

image_url = "https://media.roboflow.com/inference/soccer.jpg"

model = SegmentAnything()
embeddings = model.embed_image(image_url)

print(embeddings)
```

## 🏗️ inference Process

To standardize the inference process throughout all our models, Roboflow Inference has a structure for processing inference requests. The specifics can be found on each model's respective page, but overall it works like this for most models:

<img width="900" alt="inference structure" src="https://github.com/stellasphere/inference/assets/29011058/abf69717-f852-4655-9e6e-dae19fc263dc">
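Conceptually, a request flows through preprocessing, a model forward pass, and postprocessing before a response is returned. The sketch below illustrates that pipeline with hypothetical stand-in functions (these are not the actual Roboflow classes or signatures):

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float

def preprocess(image):
    # e.g. resize and normalize pixel values to [0, 1]
    return [px / 255.0 for row in image for px in row]

def predict(tensor):
    # stand-in for the model forward pass
    score = sum(tensor) / len(tensor)
    return [Prediction(label="object", confidence=score)]

def postprocess(preds, confidence=0.5):
    # filter low-confidence detections before building the response
    return [p for p in preds if p.confidence >= confidence]

image = [[255, 128], [64, 255]]  # toy 2x2 grayscale "image"
results = postprocess(predict(preprocess(image)))
print(results)
```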

## ✅ Supported Models

### Load from Roboflow

You can use models hosted on Roboflow with the following architectures through Inference:

- YOLOv5 Object Detection
- YOLOv5 Instance Segmentation
- YOLOv8 Object Detection
- YOLOv8 Classification
- YOLOv8 Segmentation
- YOLACT Segmentation
- ViT Classification

### Core Models

Core Models are foundation models and models that have not been fine-tuned on a specific dataset.

The following core models are supported:

1. CLIP
2. L2CS (Gaze Detection)
3. Segment Anything (SAM)

## 📝 License

The Roboflow Inference code is distributed under an [Apache 2.0 license](https://github.com/roboflow/inference/blob/master/LICENSE.md). The models supported by Roboflow Inference have their own licenses. View the licenses for supported models below.

| model                     | license |
| :------------------------ | :-----: |
| `inference/models/clip`   | [MIT](https://github.com/openai/CLIP/blob/main/LICENSE) |
| `inference/models/gaze`   | [MIT](https://github.com/Ahmednull/L2CS-Net/blob/main/LICENSE), [Apache 2.0](https://github.com/google/mediapipe/blob/master/LICENSE) |
| `inference/models/sam`    | [Apache 2.0](https://github.com/facebookresearch/segment-anything/blob/main/LICENSE) |
| `inference/models/vit`    | [Apache 2.0](https://github.com/roboflow/inference/main/inference/models/vit/LICENSE) |
| `inference/models/yolact` | [MIT](https://github.com/dbolya/yolact/blob/master/README.md) |
| `inference/models/yolov5` | [AGPL-3.0](https://github.com/ultralytics/yolov5/blob/master/LICENSE) |
| `inference/models/yolov7` | [GPL-3.0](https://github.com/WongKinYiu/yolov7/blob/main/README.md) |
| `inference/models/yolov8` | [AGPL-3.0](https://github.com/ultralytics/ultralytics/blob/master/LICENSE) |

## Inference CLI

We've created a CLI tool with useful commands to make using `inference` easier. Check out the [docs](./inference_cli/README.md).

## 🚀 Enterprise

With a Roboflow Inference Enterprise License, you can access additional Inference features, including:

- Server cluster deployment
- Device management
- Active learning
- YOLOv5 and YOLOv8 commercial license

To learn more, [contact the Roboflow team](https://roboflow.com/sales).

## 📚 documentation

Visit our [documentation](https://inference.roboflow.com) for usage examples and reference for Roboflow Inference.

## 🏆 contribution

We would love your input to improve Roboflow Inference! Please see our [contributing guide](https://github.com/roboflow/inference/blob/master/CONTRIBUTING.md) to get started. Thank you to all of our contributors! 🙏

## 💻 explore more Roboflow open source projects

| Project | Description |
| :--- | :--- |
| [supervision](https://roboflow.com/supervision) | General-purpose utilities for use in computer vision projects, from predictions filtering and display to object tracking to model evaluation. |
| [Autodistill](https://github.com/autodistill/autodistill) | Automatically label images for use in training computer vision models. |
| [Inference](https://github.com/roboflow/inference) (this project) | An easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models. |
| [Notebooks](https://roboflow.com/notebooks) | Tutorials for computer vision tasks, from training state-of-the-art models to tracking objects to counting objects in a zone. |
| [Collect](https://github.com/roboflow/roboflow-collect) | Automated, intelligent data collection powered by CLIP. |

<br>

<div align="center">
  <a href="https://youtube.com/roboflow">
    <img
      src="https://media.roboflow.com/notebooks/template/icons/purple/youtube.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949634652"
      width="3%"
    />
  </a>
  <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%" />
  <a href="https://roboflow.com">
    <img
      src="https://media.roboflow.com/notebooks/template/icons/purple/roboflow-app.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949746649"
      width="3%"
    />
  </a>
  <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%" />
  <a href="https://www.linkedin.com/company/roboflow-ai/">
    <img
      src="https://media.roboflow.com/notebooks/template/icons/purple/linkedin.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633691"
      width="3%"
    />
  </a>
  <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%" />
  <a href="https://docs.roboflow.com">
    <img
      src="https://media.roboflow.com/notebooks/template/icons/purple/knowledge.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949634511"
      width="3%"
    />
  </a>
  <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%" />
  <a href="https://discuss.roboflow.com">
    <img
      src="https://media.roboflow.com/notebooks/template/icons/purple/forum.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633584"
      width="3%"
    />
  </a>
  <img src="https://raw.githubusercontent.com/ultralytics/assets/main/social/logo-transparent.png" width="3%" />
  <a href="https://blog.roboflow.com">
    <img
      src="https://media.roboflow.com/notebooks/template/icons/purple/blog.png?ik-sdk-version=javascript-1.4.3&updatedAt=1672949633605"
      width="3%"
    />
  </a>
</div>
banner.png ADDED

mkdocs.yml ADDED
site_name: Roboflow Inference
site_url: https://inference.roboflow.com/
site_author: Roboflow
site_description: With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using the Roboflow Inference Server.
repo_name: roboflow/inference
repo_url: https://github.com/roboflow/inference
edit_uri: https://github.com/roboflow/inference/tree/main/docs
copyright: Roboflow 2023. All rights reserved.

extra:
  social:
    - icon: fontawesome/brands/github
      link: https://github.com/roboflow
    - icon: fontawesome/brands/youtube
      link: https://www.youtube.com/roboflow
    - icon: fontawesome/brands/linkedin
      link: https://www.linkedin.com/company/roboflow-ai/mycompany/
    - icon: fontawesome/brands/twitter
      link: https://twitter.com/roboflow
  analytics:
    provider: google
    property: G-T0CED2YY8K
  version:
    default: 1.0

extra_css:
  - styles.css

nav:
  - Home:
      - Home: index.md
  - Get Started:
      - What is Inference?: quickstart/what_is_inference.md
      - What Devices Can I Use?: quickstart/devices.md
      - Run Your First Model: quickstart/run_a_model.md
      - Run a Fine-Tuned Model: quickstart/explore_models.md
  - Run a Model:
      - On an Image (Using HTTP): quickstart/http_inference.md
      - On an Image (Using SDK): quickstart/run_model_on_image.md
      - On a Video, Webcam or RTSP Stream: quickstart/run_model_on_rtsp_webcam.md
      - Over UDP: quickstart/run_model_over_udp.md
  - Use a Foundation Model:
      - What is a Foundation Model?: foundation/about.md
      - CLIP (Classification, Embeddings): foundation/clip.md
      - DocTR (OCR): foundation/doctr.md
      - Grounding DINO (Object Detection): foundation/grounding_dino.md
      - L2CS-Net (Gaze Detection): foundation/gaze.md
      - Segment Anything (Segmentation): foundation/sam.md
  - Integrate with Inference:
      - SDK Reference: inference_sdk/http_client.md
      - Collecting data without model: quickstart/stubs.md
  - Reference:
      - Model Licensing: quickstart/licensing.md
      - Model Device Compatibility: quickstart/compatability_matrix.md
      - Install Inference with Docker: quickstart/docker.md
      - Docker Configuration Options: quickstart/docker_configuration_options.md
      - HTTP Inference: quickstart/http_inference.md
  - HTTP API Reference:
      - API Reference: api.md
      - Back to Quickstart: /quickstart/what_is_inference/
  - Contribute:
      - Contribute to Inference: contributing.md
  - Changelog: https://github.com/roboflow/inference/releases

theme:
  name: 'material'
  logo: https://media.roboflow.com/inference-icon.png
  favicon: https://media.roboflow.com/inference-icon.png
  font:
    text: Roboto
    code: Roboto Mono
  custom_dir: 'custom_theme'
  features:
    - navigation.top
    - navigation.tabs
    - navigation.tabs.sticky
    - navigation.prune
    - navigation.footer
    - navigation.tracking
    - navigation.instant
    - navigation.instant.progress
    - navigation.indexes
    - navigation.sections

plugins:
  - mkdocstrings
  - search
  - swagger-ui-tag

markdown_extensions:
  - admonition
  - pymdownx.details
  - pymdownx.superfences
  - attr_list
  - md_in_html
  - pymdownx.tabbed:
      alternate_style: true
  - toc:
      permalink: true

extra_javascript:
  - "https://widget.kapa.ai/kapa-widget.bundle.js"
  - "javascript/init_kapa_widget.js"