# Example Chute for Turbovision
This repository demonstrates how to deploy a Chute via the Turbovision CLI from a repository hosted on the Hugging Face Hub. It serves as a minimal example of the structure and workflow required to package machine learning models, preprocessing, and orchestration into a reproducible Chute environment.
## Repository Structure
The following two files must be present (in their current locations) for a successful deployment; their content can be modified as needed:
| File | Purpose |
|------|---------|
| `miner.py` | Defines the ML model type(s), orchestration, and all pre/postprocessing logic. |
| `config.yml` | Specifies machine configuration (e.g., GPU type, memory, environment variables). |
Other files (e.g., model weights, utility scripts, or dependencies) are optional and can be included as needed for your model.
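For illustration, a `config.yml` might look like the following sketch. All field names here are hypothetical; consult the Turbovision documentation for the actual schema:

```yaml
# Hypothetical sketch only; the real schema comes from the Turbovision docs.
gpu: a100            # GPU type requested for the chute
gpu_count: 1
memory_gb: 32
env:
  LOG_LEVEL: info    # environment variables visible inside the chute
```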
> **Note**: Any required assets must be defined or contained within this repo, which is fully open-source, since all network-related operations (downloading challenge data, weights, etc.) are disabled inside the Chute.
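As a purely illustrative sketch of what `miner.py` might contain: the function names and interface below are hypothetical, and the real contract is defined by the Turbovision chute template. It only shows the shape of model, preprocessing, and orchestration logic living together in one file, with no network access:

```python
# Illustrative only: these names are hypothetical; the real miner.py
# interface is defined by the Turbovision chute template.

def preprocess(meta: dict) -> dict:
    """Normalise request metadata before inference (placeholder logic)."""
    return {"frame_stride": meta.get("frame_stride", 1)}

def predict(url: str, meta: dict) -> dict:
    """Run the model on a video and return annotations.

    Since network access is disabled inside the chute, a real
    implementation must load weights and assets bundled in this repo.
    Here we return an empty annotation set as a stub.
    """
    options = preprocess(meta)
    return {
        "url": url,
        "frame_stride": options["frame_stride"],
        "annotations": [],  # a real model would emit predictions here
    }
```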
## Overview
Below is a high-level diagram showing the interaction between Hugging Face, Chutes, and Turbovision:
```
┌─────────────┐      ┌───────────┐      ┌──────────────┐
│ HuggingFace │ ───> │ Chutes.ai │ ───> │ Turbovision  │
│     Hub     │      │           │      │  Validator   │
└─────────────┘      └───────────┘      └──────────────┘
```
## Local Testing
After editing `config.yml` and `miner.py` and saving them to your Hugging Face repo, you will want to test that everything works locally.
1. **Copy the template file** `scorevision/chute_template/turbovision_chute.py.j2` to a Python file called `my_chute.py` and fill in the missing variables:
```python
HF_REPO_NAME = "{{ huggingface_repository_name }}"
HF_REPO_REVISION = "{{ huggingface_repository_revision }}"
CHUTES_USERNAME = "{{ chute_username }}"
CHUTE_NAME = "{{ chute_name }}"
```
2. **Run the following command to build the chute locally** (Caution: there are known issues with the docker location when running this on a mac):
```bash
chutes build my_chute:chute --local --public
```
3. **Run the Docker image just built** (its name is `CHUTE_NAME`) and open a shell inside it:
```bash
docker run -p 8000:8000 -e CHUTES_EXECUTION_CONTEXT=REMOTE -it <image-name> /bin/bash
```
4. **Run the file from within the container**:
```bash
chutes run my_chute:chute --dev --debug
```
5. **In another terminal, test the local endpoints** to ensure there are no bugs:
```bash
# Health check
curl -X POST http://localhost:8000/health -d '{}'
# Prediction test
curl -X POST http://localhost:8000/predict -d '{"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4","meta": {}}'
```
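The curl checks above can also be scripted. Below is a minimal sketch using only the Python standard library, assuming the dev server from step 4 is listening on port 8000 (the helper name is illustrative):

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # dev server started by `chutes run` in step 4

def local_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a POST request with a JSON body for the local chute server."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (with the dev server running):
#   with urllib.request.urlopen(local_request("/health", {})) as resp:
#       print(json.loads(resp.read()))
#   with urllib.request.urlopen(local_request("/predict", {
#       "url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4",
#       "meta": {},
#   })) as resp:
#       print(json.loads(resp.read()))
```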
## Live Testing
If you have a chute with the same name (e.g. from a previous deployment), delete it first, or you will get an error when trying to build.
1. **List existing chutes**:
```bash
chutes chutes list
```
Take note of the chute id that you wish to delete (if any):
```bash
chutes chutes delete <chute-id>
```
2. **You should also delete its associated image**:
```bash
chutes images list
```
Take note of the chute image id:
```bash
chutes images delete <chute-image-id>
```
3. **Use Turbovision's CLI to build, deploy and commit on-chain**:
```bash
sv -vv push
```
> **Note**: You can skip the on-chain commit using `--no-commit`. You can also point to a past Hugging Face revision using `--revision` and/or specify the local files to upload to your Hugging Face repo using `--model-path`.
4. **When completed, warm up the chute** (if it's cold):
```bash
chutes warmup <chute-id>
```
You can confirm its status using `chutes chutes list` or `chutes chutes get <chute-id>` if you already know its id.
> **Note**: Warming up can sometimes take a while, but if the chute runs without errors (it should, if you've tested locally first) and there are sufficient nodes (i.e. machines) available matching the `config.yml` you specified, the chute should become hot!
5. **Test the chute's endpoints**:
```bash
# Health check
curl -X POST https://<YOUR-CHUTE-SLUG>.chutes.ai/health -d '{}' -H "Authorization: Bearer $CHUTES_API_KEY"
# Prediction
curl -X POST https://<YOUR-CHUTE-SLUG>.chutes.ai/predict -d '{"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4","meta": {}}' -H "Authorization: Bearer $CHUTES_API_KEY"
```
6. **Test what your chute would get on a validator**:
This also applies any validation/integrity checks, which may fail if you did not use the Turbovision CLI above to deploy the chute:
```bash
sv -vv run-once
```
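The authenticated endpoint checks from step 5 can likewise be scripted. A minimal sketch using only the Python standard library, assuming `CHUTES_API_KEY` is exported in your environment and you know your chute's slug (the helper name is illustrative):

```python
import json
import os
import urllib.request

def chute_request(slug: str, path: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request for a deployed chute endpoint."""
    return urllib.request.Request(
        f"https://{slug}.chutes.ai{path}",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['CHUTES_API_KEY']}",
        },
        method="POST",
    )

# Usage (against the deployed chute):
#   with urllib.request.urlopen(chute_request("<your-chute-slug>", "/health", {})) as resp:
#       print(json.loads(resp.read()))
```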