Commit 36ddfc8
Parent(s): 84c468c
Rearrange README since some content is included in the main org page now.
README.md CHANGED
@@ -6,7 +6,8 @@ license: apache-2.0

This project provides a starting point for implementing a submission to the [SAFE: Image Edit Detection and Localization Challenge 2025](https://app.dyff.io/challenges/dc509a8c771b492b90c43012fde9a04f). You do not need to use this code to participate in the challenge.

+# Quick Start

To use the code and tools in this repository, [clone it](https://huggingface.co/docs/hub/en/repositories-getting-started#cloning-repositories) with `git`:

@@ -14,6 +15,84 @@ To use the code and tools in this repository, [clone it](https://huggingface.co/
git clone https://huggingface.co/safe-challenge-2025/example-submission
```

+## Install `uv`
+
+https://docs.astral.sh/uv/getting-started/installation/
+
+## Local development
+
+```bash
+# Install dependencies
+make setup
+source venv/bin/activate
+
+# Download the example model
+make download
+
+# Run it
+make serve
+```
+
+In a second terminal:
+```bash
+# Process an example input
+./prompt.sh cat.json
+```
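For orientation, `./prompt.sh cat.json` presumably just posts the prepared request file to the running server; a rough guess at the equivalent raw call is below (this is an assumption about the script's behavior, so check `prompt.sh` itself):

```bash
# Assumed equivalent of ./prompt.sh cat.json (a guess, not the script's actual contents)
curl -s -X POST http://127.0.0.1:8000/predict \
  -H "Content-Type: application/json" \
  -d @cat.json
```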
+
+The server runs on `http://127.0.0.1:8000`. Check `/docs` for the interactive API documentation.
+
+## Docker
+```bash
+# Build
+make docker-build
+
+# Run
+make docker-run
+```
+
+The Docker container also runs the server at `http://127.0.0.1:8000`.
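For reference, `make docker-run` presumably wraps a plain `docker run` along these lines; the image name and tag are placeholders, not values taken from the Makefile:

```bash
# Assumed shape of what `make docker-run` does (image name and tag are illustrative)
docker run --rm -p 8000:8000 example-submission:latest
```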
+
+## What Happens When You Start the Server
+
+```
+INFO: Starting ML Inference Service...
+INFO: Initializing ResNet service: models/microsoft/resnet-18
+INFO: Loading model from models/microsoft/resnet-18
+INFO: Model loaded: 1000 classes
+INFO: Startup completed successfully
+INFO: Uvicorn running on http://0.0.0.0:8000
+```
+
+If you see "Model directory not found", check that your model files exist at the expected path with the full org/model structure.
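The `models/microsoft/resnet-18` path in the log above shows the expected `<org>/<model>` layout; a quick sanity check before starting the server (the exact files present will depend on the model you downloaded):

```bash
# Confirm the model directory uses the full org/model structure
find models -maxdepth 2 -type d
ls models/microsoft/resnet-18
```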
+
+## Testing the API
+
+By default, the server serves the inference API at `/predict`:
+
+```bash
+# Using curl
+curl -X POST http://localhost:8000/predict \
+  -H "Content-Type: application/json" \
+  -d '{
+    "image": {
+      "mediaType": "image/jpeg",
+      "data": "<base64-encoded-image-data>"
+    }
+  }'
+```
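To try the endpoint with a real file instead of the placeholder, the base64 payload can be generated inline; `cat.jpg` is just an illustrative file name:

```bash
# Send a local JPEG to /predict (cat.jpg is a placeholder file name)
DATA=$(base64 -w0 cat.jpg 2>/dev/null || base64 cat.jpg)  # -w0 is GNU coreutils; the fallback covers macOS
curl -s -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d "{\"image\": {\"mediaType\": \"image/jpeg\", \"data\": \"$DATA\"}}"
```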
+
+Example response:
+```json
+{
+  "logprobs": [-0.859380304813385,-1.2701971530914307,-2.1918208599090576,-1.69235098361969],
+  "localizationMask": {
+    "mediaType":"image/png",
+    "data":"iVBORw0KGgoAAAANSUhEUgAAA8AAAAKDAQAAAAD9Fl5AAAAAu0lEQVR4nO3NsREAMAgDMWD/nZMVKEwn1T5/FQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMCl3g5f+HC24TRhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAj70gwKsTlmdBwAAAABJRU5ErkJggg=="
+  }
+}
+```
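The `localizationMask` is a base64-encoded PNG; a small sketch for writing it out to a file, assuming the response above was saved as `response.json` (both file names are illustrative):

```bash
# Decode the localization mask from a saved /predict response
python3 -c "
import base64, json
resp = json.load(open('response.json'))
open('mask.png', 'wb').write(base64.b64decode(resp['localizationMask']['data']))
print('wrote mask.png')
"
```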

# How to participate

@@ -32,10 +111,6 @@ If you're comfortable building a Docker image yourself, the [preferred way](#sub

Alternatively, you can create a Docker [HuggingFace Space](https://huggingface.co/new-space?sdk=docker) and [create submissions from the space](#submitting-a-docker-huggingface-space) using a [webform](https://challenge.dyff.io/submit). The advantage of using an HF Space is that it builds the Docker image for you. However, HF Spaces also have some limitations that you'll need to account for.

-## General considerations
-
-* Your submission will run **without Internet access** during evaluation. All of the files required to run your submission must be packaged along with it. You can either include files in the Docker image, or upload the files as a separate package and mount them in your application container during execution.
-

# Submitting using the Dyff API

@@ -174,82 +249,11 @@ To implement a new detector that you can submit to the challenge, you need to im

You are also free to build detectors with any other technologies and software stacks that you want, but you may have to figure out packaging on your own. All that's required of your submission is that it runs in a Docker container and that it supports the [required inference API](#api-reference).

-## Quick Start
-
-**Install `uv`:**
-https://docs.astral.sh/uv/getting-started/installation/
-
-**Local development:**
-```bash
-# Install dependencies
-make setup
-source venv/bin/activate
-
-# Download the example model
-make download
-
-# Run it
-make serve
-```
-
-In a second terminal:
-```bash
-# Process an example input
-./prompt.sh cat.json
-```
-
-The server runs on `http://127.0.0.1:8000`. Check `/docs` for the interactive API documentation.
-
-**Docker:**
-```bash
-# Build
-make docker-build
-
-# Run
-make docker-run
-```
-
-The Docker container also runs the server at `http://127.0.0.1:8000`.
-
-## What Happens When You Start the Server
-
-INFO: Starting ML Inference Service...
-INFO: Initializing ResNet service: models/microsoft/resnet-18
-INFO: Loading model from models/microsoft/resnet-18
-INFO: Model loaded: 1000 classes
-INFO: Startup completed successfully
-INFO: Uvicorn running on http://0.0.0.0:8000
-```
-
-If you see "Model directory not found", check that your model files exist at the expected path with the full org/model structure.
-
-## Testing the API
-
-By default, the server serves the inference API at `/predict`:
-
-# Using curl
-curl -X POST http://localhost:8000/predict \
-  -H "Content-Type: application/json" \
-  -d '{
-    "image": {
-      "mediaType": "image/jpeg",
-      "data": "<base64-encoded-image-data>"
-    }
-  }'
-```
-
-Example response:
-```json
-{
-  "logprobs": [-0.859380304813385,-1.2701971530914307,-2.1918208599090576,-1.69235098361969],
-  "localizationMask": {
-    "mediaType":"image/png",
-    "data":"iVBORw0KGgoAAAANSUhEUgAAA8AAAAKDAQAAAAD9Fl5AAAAAu0lEQVR4nO3NsREAMAgDMWD/nZMVKEwn1T5/FQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMCl3g5f+HC24TRhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAj70gwKsTlmdBwAAAABJRU5ErkJggg=="
-  }
-}
-```

+## General considerations
+
+* Your submission will run **without Internet access** during evaluation. All of the files required to run your submission must be packaged along with it. You can either include files in the Docker image, or upload the files as a separate package and mount them in your application container during execution.

## Project Structure
