# Video Inference Dashboard Example
This project uses Roboflow's inference server to analyze video streams. It extracts insights from video frames at defined intervals and generates informative visualizations and CSV outputs.
## Use Case: Smart Inventory Monitoring
Factories and stores can:
- Count items at regular intervals to avoid stockouts.
- Restock efficiently using the collected data.
- Save time and streamline operations.
## Result
The example counts products on a shelf every 5 minutes, both per category and in total.
<a href="https://universe.roboflow.com/roboflow-ngkro/shelf-product">
<img src="https://app.roboflow.com/images/download-dataset-badge.svg" />
</a>
<a href="https://universe.roboflow.com/roboflow-ngkro/shelf-product/model/">
<img src="https://app.roboflow.com/images/try-model-badge.svg" />
</a>

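The per-interval counting described above can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual code: the `detections_per_interval` structure, the function name, and the CSV layout are all assumptions.

```python
import csv
from collections import Counter

def aggregate_counts(detections_per_interval, csv_path):
    """Write one CSV row per interval: minute mark, per-class counts, total.

    `detections_per_interval` is assumed to be a list of
    (minute_mark, [class_name, ...]) pairs -- a simplified stand-in
    for the predictions returned by the inference server.
    """
    # Collect every class seen across all intervals so the header is stable.
    classes = sorted({c for _, names in detections_per_interval for c in names})
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["minute", *classes, "total"])
        for minute, names in detections_per_interval:
            counts = Counter(names)
            writer.writerow(
                [minute, *(counts[c] for c in classes), sum(counts.values())]
            )

# Example: counts at 5-minute marks for two (hypothetical) classes.
data = [(0, ["bottle", "bottle", "can"]), (5, ["bottle", "can", "can"])]
aggregate_counts(data, "shelf_counts.csv")
```

Each row then maps directly to one bar group in a dashboard chart: one column per class plus a running total.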
## Requirements
Make sure you have Docker installed. Learn more about building, pulling, and running the Roboflow Inference Docker Image in our [documentation](https://roboflow.github.io/inference/quickstart/docker/).
## Installation
### **1. Start the inference server**
x86 CPU:
```bash
docker run --net=host roboflow/roboflow-inference-server-cpu:latest
```
NVIDIA GPU:
```bash
docker run --network=host --gpus=all roboflow/roboflow-inference-server-gpu:latest
```
### **2. Set up and run**
```bash
git clone https://github.com/roboflow/inference-dashboard-example.git
cd inference-dashboard-example
pip install -r requirements.txt
```
```bash
python main.py --dataset_id [YOUR_DATASET_ID] --api_key [YOUR_API_KEY] --video_path [PATH_TO_VIDEO] --interval_minutes [INTERVAL_IN_MINUTES]
```
Arguments:
- `--dataset_id`: Your dataset name on Roboflow.
- `--version_id`: The model version to use for inference (default: 1).
- `--api_key`: Your Roboflow API key.
- `--video_path`: Path to the video file to analyze.
- `--interval_minutes`: Interval in minutes at which to extract predictions (default: 1).
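The idea behind `--interval_minutes` can be illustrated with a small helper: rather than running inference on every frame, pick one frame per interval. This is a sketch under assumed names; the actual script's sampling logic may differ.

```python
def frames_to_sample(fps, total_frames, interval_minutes):
    """Return the frame indices at each interval boundary.

    One frame is selected every `interval_minutes` of video time,
    starting at frame 0 and stopping at the end of the video.
    """
    step = int(fps * 60 * interval_minutes)  # frames per interval
    return list(range(0, total_frames, step))

# A 30 fps, 10-minute video (18,000 frames) sampled every 5 minutes
# yields frames 0 and 9000.
frames_to_sample(30, 18000, 5)  # -> [0, 9000]
```

Each sampled frame is then sent to the inference server, and the returned predictions are aggregated into the dashboard's CSV output.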
## π¦Ύ Feedback & Contributions
Feel free to open an issue, submit a PR, or share your feedback. All contributions are welcome!