Update README.md

README.md CHANGED
@@ -1,27 +1,3 @@
-
-
-
-
-## Directory Structure
-- `videos/` — Place your input videos here.
-- `frames/` — Extracted frames will be saved here.
-- `cursors/` — Cursor templates for detection.
-- `annotations/` — Output JSON annotation files.
-- `scripts/` — All processing scripts (frame extraction, cursor tracking, annotation, API, etc.).
-
-## Usage
-1. Upload your video(s) to the `videos/` directory.
-2. Run the pipeline (see below) to extract frames, track the cursor, and generate annotations.
-3. All outputs will be saved in the appropriate folders.
-
-## Automation
-The pipeline is orchestrated by `scripts/pipeline.py`, which runs all steps in order.
-
-## HuggingFace Space Notes
-- The Space is fully writeable; all outputs are saved in the workspace.
-- The Docker container is configured for all dependencies and write permissions.
-
-## API
-A FastAPI server is provided for vision model inference.
-
----
+---
+colorFrom: yellow
+---
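The removed README text says `scripts/pipeline.py` runs all steps in order. The commit does not show that script, so the following is only a minimal sketch of what such an orchestrator might look like; the step script names are assumptions, not the repository's actual filenames.

```python
"""Hypothetical sketch of a step-by-step pipeline runner like scripts/pipeline.py."""
import subprocess
import sys

# Assumed step scripts; the real names under scripts/ may differ.
DEFAULT_STEPS = [
    [sys.executable, "scripts/extract_frames.py"],
    [sys.executable, "scripts/track_cursor.py"],
    [sys.executable, "scripts/annotate.py"],
]


def run_pipeline(steps=DEFAULT_STEPS):
    """Run each step command in order, stopping at the first failure."""
    completed = []
    for cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            raise RuntimeError(f"pipeline step failed: {cmd}")
        completed.append(cmd)
    return completed
```

Running the steps as subprocesses keeps each stage independent, so a failed stage (e.g. cursor tracking) stops the run before annotations are written.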