aitask1024 committed
Commit 107fd34 · verified · 1 Parent(s): 87b81fd

Upload folder using huggingface_hub

Files changed (3):
  1. README.md +92 -0
  2. config.yml +28 -0
  3. miner.py +139 -0
README.md ADDED
@@ -0,0 +1,92 @@
# 🚀 Example Chute for Turbovision 🪂

This repository demonstrates how to deploy a **Chute** via the **Turbovision CLI**, hosted on **Hugging Face Hub**.
It serves as a minimal example showcasing the required structure and workflow for integrating machine learning models, preprocessing, and orchestration into a reproducible Chute environment.

## Repository Structure
The following two files **must be present** (in their current locations) for a successful deployment — their content can be modified as needed:

| File | Purpose |
|------|---------|
| `miner.py` | Defines the ML model type(s), orchestration, and all pre/postprocessing logic. |
| `config.yml` | Specifies machine configuration (e.g., GPU type, memory, environment variables). |

Other files — e.g., model weights, utility scripts, or dependencies — are **optional** and can be included as needed for your model. Note: any required assets must be defined or contained **within this repo**, which is fully open-source, since all network-related operations (downloading challenge data, weights, etc.) are disabled **inside the Chute**.
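As a minimal sketch of that contract (class and method names come from the requirements above; the stub prediction logic here is purely hypothetical), `miner.py` must expose at least the following shape:

```python
# Minimal sketch of the interface miner.py must expose.
# The stub logic is hypothetical; a real miner loads models from path_hf_repo.
from pathlib import Path


class TVFrameResult:
    """Simplified stand-in for the per-frame result type."""

    def __init__(self, frame_id: int, boxes: list, keypoints: list) -> None:
        self.frame_id = frame_id
        self.boxes = boxes
        self.keypoints = keypoints


class Miner:
    def __init__(self, path_hf_repo: Path) -> None:
        # Load weights from the local repo path only -- network access
        # is disabled inside the Chute.
        self.repo = Path(path_hf_repo)

    def predict_batch(
        self, batch_images: list, offset: int, n_keypoints: int
    ) -> list:
        # Return one result per image, with frame ids indexed from `offset`.
        return [
            TVFrameResult(offset + i, [], [(0, 0)] * n_keypoints)
            for i, _ in enumerate(batch_images)
        ]
```

The full reference implementation in this repo's `miner.py` replaces the stub bodies with real YOLO models and output parsing.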
## Overview

Below is a high-level diagram showing the interaction between Hugging Face, Chutes, and Turbovision:

![](../images/miner.png)

## Local Testing
After editing `config.yml` and `miner.py` and saving them into your Hugging Face repo, you will want to test that everything works locally.

1. Copy the file `scorevision/chute_tmeplate/turbovision_chute.py.j2` as a Python file called `my_chute.py` and fill in the missing variables:
```python
HF_REPO_NAME = "{{ huggingface_repository_name }}"
HF_REPO_REVISION = "{{ huggingface_repository_revision }}"
CHUTES_USERNAME = "{{ chute_username }}"
CHUTE_NAME = "{{ chute_name }}"
```

2. Run the following command to build the chute locally (caution: there are known issues with the Docker location when running this on a Mac):
```bash
chutes build my_chute:chute --local --public
```

3. Run the Docker image just built (its name is the `CHUTE_NAME` you set above) and enter it:
```bash
docker run -p 8000:8000 -e CHUTES_EXECUTION_CONTEXT=REMOTE -it <image-name> /bin/bash
```

4. Run the file from within the container:
```bash
chutes run my_chute:chute --dev --debug
```

5. In another terminal, test the local endpoints to ensure there are no bugs:
```bash
curl -X POST http://localhost:8000/health -d '{}'
curl -X POST http://localhost:8000/predict -d '{"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4","meta": {}}'
```
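The same `/predict` call can also be scripted from Python using only the standard library (a sketch only — `predict_request` is a hypothetical helper, not part of the Turbovision CLI):

```python
import json
from urllib import request


def predict_request(base_url: str, video_url: str) -> request.Request:
    # Build the same POST body the curl command above sends to /predict.
    payload = json.dumps({"url": video_url, "meta": {}}).encode()
    return request.Request(
        f"{base_url}/predict",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = predict_request(
    "http://localhost:8000",
    "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4",
)
# urllib.request.urlopen(req) would send it once the chute is running locally.
```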
## Live Testing
1. If you have any chute with the same name (i.e. from a previous deployment), ensure you delete it first (or you will get an error when trying to build):
```bash
chutes chutes list
```
Take note of the chute id you wish to delete (if any):
```bash
chutes chutes delete <chute-id>
```

You should also delete its associated image:
```bash
chutes images list
```
Take note of the chute image id:
```bash
chutes images delete <chute-image-id>
```

2. Use Turbovision's CLI to build, deploy, and commit on-chain. (Note: you can skip the on-chain commit using `--no-commit`. You can also specify a past Hugging Face revision to point to using `--revision`, and/or the local files you want to upload to your Hugging Face repo using `--model-path`.)
```bash
sv -vv push
```

3. When completed, warm up the chute (if it's cold 🧊). (You can confirm its status using `chutes chutes list`, or `chutes chutes get <chute-id>` if you already know its id.) Note: warming up can sometimes take a while, but if the chute runs without errors (it should, if you've tested locally first) and there are sufficient nodes (i.e. machines) available matching the `config.yml` you specified, the chute should become hot 🔥!
```bash
chutes warmup <chute-id>
```

4. Test the chute's endpoints:
```bash
curl -X POST https://<YOUR-CHUTE-SLUG>.chutes.ai/health -d '{}' -H "Authorization: Bearer $CHUTES_API_KEY"
curl -X POST https://<YOUR-CHUTE-SLUG>.chutes.ai/predict -d '{"url": "https://scoredata.me/2025_03_14/35ae7a/h1_0f2ca0.mp4","meta": {}}' -H "Authorization: Bearer $CHUTES_API_KEY"
```

5. Test what your chute would get on a validator (this also applies any validation/integrity checks, which may fail if you did not use the Turbovision CLI above to deploy the chute):
```bash
sv -vv run-once
```
config.yml ADDED
@@ -0,0 +1,28 @@
Image:
  from_base: parachutes/python:3.12
  run_command:
    - pip install --upgrade setuptools wheel
    - pip install huggingface_hub==0.19.4 ultralytics==8.2.40 'torch<2.6' opencv-python-headless
  set_workdir: /app

NodeSelector:
  gpu_count: 1
  min_vram_gb_per_gpu: 16
  include:
    - a100
    - a100_40gb
    - "3090"
    - a40
    - a6000
  exclude:
    - "5090"
    - b200
    - h200
    - h20
    - mi300x

Chute:
  timeout_seconds: 300
  concurrency: 4
  max_instances: 5
  scaling_threshold: 0.5
miner.py ADDED
@@ -0,0 +1,139 @@
from pathlib import Path

from ultralytics import YOLO
from numpy import ndarray
from pydantic import BaseModel


class BoundingBox(BaseModel):
    x1: int
    y1: int
    x2: int
    y2: int
    cls_id: int
    conf: float


class TVFrameResult(BaseModel):
    frame_id: int
    boxes: list[BoundingBox]
    keypoints: list[tuple[int, int]]


class Miner:
    """
    This class is responsible for:
    - Loading ML models.
    - Running batched predictions on images.
    - Parsing ML model outputs into structured results (TVFrameResult).

    This class can be modified, but it must have the following to be compatible with the chute:
    - be named `Miner`
    - have a `predict_batch` function with the inputs and outputs specified
    - be stored in a file called `miner.py` which lives in the root of the HF Hub repo
    """

    def __init__(self, path_hf_repo: Path) -> None:
        """
        Loads all ML models from the repository.
        -----(Adjust as needed)----

        Args:
            path_hf_repo (Path):
                Path to the downloaded Hugging Face Hub repository

        Returns:
            None
        """
        self.bbox_model = YOLO(path_hf_repo / "football-player-detection.pt")
        print("✅ BBox Model Loaded")
        self.keypoints_model = YOLO(path_hf_repo / "football-pitch-detection.pt")
        print("✅ Keypoints Model Loaded")

    def __repr__(self) -> str:
        """
        Information about the miner, returned in the health endpoint
        to inspect the loaded ML models (and their types).
        -----(Adjust as needed)----
        """
        return (
            f"BBox Model: {type(self.bbox_model).__name__}\n"
            f"Keypoints Model: {type(self.keypoints_model).__name__}"
        )

    def predict_batch(
        self,
        batch_images: list[ndarray],
        offset: int,
        n_keypoints: int,
    ) -> list[TVFrameResult]:
        """
        Miner prediction for a batch of images.
        Handles the orchestration of ML models and any preprocessing and postprocessing.
        -----(Adjust as needed)----

        Args:
            batch_images (list[np.ndarray]):
                A list of images (as NumPy arrays) to process in this batch.
            offset (int):
                The frame number corresponding to the first image in the batch.
                Used to correctly index frames in the output results.
            n_keypoints (int):
                The number of keypoints expected for each frame in this challenge type.

        Returns:
            list[TVFrameResult]:
                A list of predictions for each image in the batch
        """
        bboxes: dict[int, list[BoundingBox]] = {}
        bbox_model_results = self.bbox_model.predict(batch_images)
        if bbox_model_results is not None:
            for frame_number_in_batch, detection in enumerate(bbox_model_results):
                if not hasattr(detection, "boxes") or detection.boxes is None:
                    continue
                boxes = []
                for box in detection.boxes.data:
                    x1, y1, x2, y2, conf, cls_id = box.tolist()
                    boxes.append(
                        BoundingBox(
                            x1=int(x1),
                            y1=int(y1),
                            x2=int(x2),
                            y2=int(y2),
                            cls_id=int(cls_id),
                            conf=float(conf),
                        )
                    )
                bboxes[offset + frame_number_in_batch] = boxes
        print("✅ BBoxes predicted")

        keypoints: dict[int, list[tuple[int, int]]] = {}
        keypoints_model_results = self.keypoints_model.predict(batch_images)
        if keypoints_model_results is not None:
            for frame_number_in_batch, detection in enumerate(keypoints_model_results):
                if not hasattr(detection, "keypoints") or detection.keypoints is None:
                    continue
                frame_keypoints: list[tuple[int, int]] = []
                for part_points in detection.keypoints.data:
                    for x, y, _ in part_points:
                        frame_keypoints.append((int(x), int(y)))
                if len(frame_keypoints) < n_keypoints:
                    frame_keypoints.extend(
                        [(0, 0)] * (n_keypoints - len(frame_keypoints))
                    )
                else:
                    frame_keypoints = frame_keypoints[:n_keypoints]
                keypoints[offset + frame_number_in_batch] = frame_keypoints
        print("✅ Keypoints predicted")

        results: list[TVFrameResult] = []
        for frame_number in range(offset, offset + len(batch_images)):
            results.append(
                TVFrameResult(
                    frame_id=frame_number,
                    boxes=bboxes.get(frame_number, []),
                    keypoints=keypoints.get(
                        frame_number, [(0, 0) for _ in range(n_keypoints)]
                    ),
                )
            )
        print("✅ Combined results as TVFrameResult")
        return results
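The pad-or-truncate step inside `predict_batch` — every frame must report exactly `n_keypoints` points — can be isolated as a small helper (a sketch only; `pad_keypoints` is not part of this repo):

```python
def pad_keypoints(
    points: list[tuple[int, int]], n_keypoints: int
) -> list[tuple[int, int]]:
    # Mirrors the post-processing in predict_batch: pad missing keypoints
    # with (0, 0), or truncate extras, so the length is always n_keypoints.
    if len(points) < n_keypoints:
        return points + [(0, 0)] * (n_keypoints - len(points))
    return points[:n_keypoints]
```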