
# Per-task encoder scripts

Each complex prediction type has its own Python script that reads predictions in that task's recorded format (bbox, polygon, keypoint, etc.) and writes one image or text file per encoding variant.

| Task | Script | Prediction format | Outputs |
| --- | --- | --- | --- |
| depth_estimation | `src/encoders/encode_depth.py` | `predictions[].url` (3 colormap URLs) | 3 images per sample: plasma, turbo, gray |
| object_detection | `src/encoders/encode_object_detection.py` | `predictions_type: "bbox"`, `predictions[].bbox` `[x1,y1,x2,y2]`, `label`, `color_hex` | original, box_only, box_label, text_xyxy.txt, text_xywh.txt |
| instance_segmentation | `src/encoders/encode_instance_segmentation.py` | Polygon: `predictions[].polygon` `[[x,y],...]`; RLE: `predictions[].rle` (needs pycocotools to decode) | original, enc1..enc6 (overlay images), enc7_json.txt |
| semantic_segmentation | `src/encoders/encode_semantic_segmentation.py` | `predictions[].polygon` `[[[x,y],...]]` or `[[x,y],...]`, `label` | same 7 as instance (enc1–7) |
| referring_segmentation | `src/encoders/encode_referring_segmentation.py` | `predictions[].polygon`, `label` (referring expression) | original, fill, contour, fill_contour, json.txt |
| keypoint | `src/encoders/encode_keypoint.py` | `predictions[].keypoint` (17×3: x, y, conf per COCO keypoint) | original, enc_a (same color), enc_b (per-person), enc_c (per-limb), enc_d_json.txt |
| generation_* / lowlevel-* | `src/encoders/encode_generation_lowlevel.py` | `predictions[].url` or `predictions[].image` (prediction image URL) | 1 image per sample |
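To illustrate the object_detection text variants, here is a minimal sketch of how `[x1,y1,x2,y2]` boxes could be turned into `text_xyxy.txt` / `text_xywh.txt` lines. The helper names and the exact line layout are assumptions, not the real encoder's code:

```python
def bbox_xyxy_to_xywh(bbox):
    """Convert a corner box [x1, y1, x2, y2] to [x, y, w, h]."""
    x1, y1, x2, y2 = bbox
    return [x1, y1, x2 - x1, y2 - y1]

def format_text_lines(predictions, fmt="xyxy"):
    """Render one 'label v v v v' line per prediction, mimicking the
    text_xyxy.txt / text_xywh.txt variants (layout is an assumption)."""
    lines = []
    for p in predictions:
        box = p["bbox"] if fmt == "xyxy" else bbox_xyxy_to_xywh(p["bbox"])
        lines.append(p["label"] + " " + " ".join(str(v) for v in box))
    return "\n".join(lines)
```

The two text variants differ only in the box parameterization; the label and ordering stay the same.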

Shared helpers: `src/common.py` provides `load_image_id_to_url()` (built from `images/*.json`) and `fetch_image_cv2(url)`, used by the overlay tasks.
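A minimal sketch of what these two helpers might look like; the actual `src/common.py` may differ in details (file layout of `images/*.json` is assumed to be a list of `{"id": ..., "url": ...}` records):

```python
import glob
import json
import urllib.request

def load_image_id_to_url(images_dir="images"):
    """Build an {image_id: url} map from every JSON file in images_dir.
    Assumes each file holds a list of {"id": ..., "url": ...} records."""
    mapping = {}
    for path in sorted(glob.glob(f"{images_dir}/*.json")):
        with open(path) as f:
            for entry in json.load(f):
                mapping[entry["id"]] = entry["url"]
    return mapping

def fetch_image_cv2(url):
    """Download an image and decode it into a BGR NumPy array."""
    import cv2          # imported lazily so the JSON helper above
    import numpy as np  # works even without OpenCV installed
    data = urllib.request.urlopen(url).read()
    return cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)
```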

Requirement for overlay tasks: `images/*.json` (or a root `images.json`) must list each image with an `id` and `url`, so the `image_id` in an annotation can be resolved to the original image URL.
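The resolution step amounts to a simple dictionary lookup. The ids and URLs below are made up for illustration:

```python
# Hypothetical images/*.json content (ids and URLs are invented):
images = [
    {"id": "000001", "url": "https://example.com/000001.jpg"},
    {"id": "000002", "url": "https://example.com/000002.jpg"},
]
id_to_url = {img["id"]: img["url"] for img in images}

# An annotation references its image by id only:
annotation = {"image_id": "000002", "predictions_type": "bbox"}
original_url = id_to_url[annotation["image_id"]]
```

If an `image_id` is missing from the map, the overlay encoders have no original image to draw on, which is why the listing is a hard requirement.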

Run everything with `./run_encoding.sh` (24 threads, 2 samples per task).