
# SA-Co/Gold benchmark

SA-Co/Gold is a benchmark for promptable concept segmentation (PCS) in images. The benchmark contains images paired with text labels, also referred to as Noun Phrases (NPs), each annotated exhaustively with masks on all object instances that match the label. SA-Co/Gold comprises 7 subsets, each targeting a different annotation domain: MetaCLIP captioner NPs, SA-1B captioner NPs, Attributes, Crowded Scenes, Wiki-Common1K, Wiki-Food/Drink, and Wiki-Sports Equipment. The images originally come from the MetaCLIP and SA-1B datasets.

For each subset, every datapoint is annotated by 3 independent human annotators. Each row in the figure shows an image and noun phrase pair from one of the domains, along with masks from the 3 annotators. Dashed borders indicate special group masks that cover more than a single instance, used when separating the region into instances is deemed too difficult. Annotators sometimes disagree on precise mask borders, the number of instances, and whether the phrase is present at all. Having 3 independent annotations allows us to measure human agreement on the task, which serves as an upper bound for model performance.

# Preparation

## Download annotations

The GT annotations can be downloaded from Hugging Face or Roboflow.

## Download images

There are two image sources for the evaluation dataset: MetaCLIP and SA-1B.

  1. The MetaCLIP images are referenced in 6 of the 7 subsets (MetaCLIP captioner NPs, Attributes, Crowded Scenes, Wiki-Common1K, Wiki-Food/Drink, Wiki-Sports Equipment) and can be downloaded from Roboflow.

  2. The SA-1B images are referenced in 1 of the 7 subsets (SA-1B captioner NPs) and can be downloaded from Roboflow. Alternatively, they can be downloaded from here: follow the link for sa_co_gold.tar among the dynamic links available under "Download text file" to download the SA-1B images referenced in SA-Co/Gold.

# Usage

## Visualization

## Run evaluation

The official metric for SA-Co/Gold is cgF1; please refer to the SAM3 paper for details. Our evaluator inherits from the official COCO evaluator, with some modifications. Recall that in the Gold subset there are three annotations for each datapoint: we evaluate against each of them and pick the most favorable (the oracle setting). The evaluator has minimal dependencies (pycocotools, numpy and scipy) to ease reuse in other projects. In this section we provide several pointers for running evaluation of SAM3 or third-party models.
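
As a concrete illustration of the oracle setting, the sketch below scores one prediction file against each annotator's GT file and keeps the best result. `score_fn` is a hypothetical stand-in for the repo's actual cgF1 computation; only the max-over-annotators logic is shown.

```python
# Sketch of the oracle protocol, assuming a scoring function is available.
# `score_fn(pred_file, gt_file)` is a hypothetical stand-in for the actual
# cgF1 evaluator in the repo.

def oracle_score(score_fn, pred_file, gt_files):
    """Score against each annotator's GT and keep the most favorable result."""
    return max(score_fn(pred_file, gt) for gt in gt_files)
```

With three GT files (annotators a, b, c), `oracle_score` simply evaluates three times and takes the maximum.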

### Evaluate SAM3

We provide inference configurations to reproduce the evaluation of SAM3. First, edit the file eval_base.yaml with the paths where you downloaded the images and annotations above.
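
As a rough illustration only (the actual key names are defined in eval_base.yaml itself and may differ), the fields to edit look like:

```yaml
# Illustrative fragment -- use the real key names from eval_base.yaml.
image_root: /data/sa_co_gold/images           # where you extracted the images
annotation_root: /data/sa_co_gold/annotations # where you placed the GT files
output_dir: /data/sa_co_gold/predictions      # where predictions are dumped
```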

There are 7 subsets and as many configurations to run. Let's take the first subset as an example. Inference can be run locally using the following command (you can adjust the number of GPUs):

```shell
python sam3/train/train.py -c configs/gold_image_evals/sam3_gold_image_metaclip_nps.yaml --use-cluster 0 --num-gpus 1
```

The predictions will be dumped in the folder specified in eval_base.yaml.

We also support SLURM-based cluster inference. Edit the eval_base.yaml file to reflect your SLURM configuration (partition, qos, ...), then run:

```shell
python sam3/train/train.py -c configs/gold_image_evals/sam3_gold_image_metaclip_nps.yaml --use-cluster 1
```

We provide the commands for all subsets below.

#### MetaCLIP captioner NPs

```shell
python sam3/train/train.py -c configs/gold_image_evals/sam3_gold_image_metaclip_nps.yaml --use-cluster 1
```

#### SA-1B captioner NPs

This subset uses the SA-1B images; the other 6 subsets use the MetaCLIP images.

```shell
python sam3/train/train.py -c configs/gold_image_evals/sam3_gold_image_sa1b_nps.yaml --use-cluster 1
```

#### Attributes

```shell
python sam3/train/train.py -c configs/gold_image_evals/sam3_gold_image_attributes.yaml --use-cluster 1
```

#### Crowded Scenes

```shell
python sam3/train/train.py -c configs/gold_image_evals/sam3_gold_image_crowded.yaml --use-cluster 1
```

#### Wiki-Common1K

```shell
python sam3/train/train.py -c configs/gold_image_evals/sam3_gold_image_wiki_common.yaml --use-cluster 1
```

#### Wiki-Food/Drink

```shell
python sam3/train/train.py -c configs/gold_image_evals/sam3_gold_image_fg_food.yaml --use-cluster 1
```

#### Wiki-Sports Equipment

```shell
python sam3/train/train.py -c configs/gold_image_evals/sam3_gold_image_fg_sports.yaml --use-cluster 1
```

### Offline evaluation

If you have predictions in the COCO result format (see here), we provide scripts to run the evaluation easily.
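
For reference, a segmentation prediction file in the COCO result format is a flat JSON list with one entry per predicted mask. The snippet below writes a minimal dummy example (the RLE string is elided):

```python
import json

# Minimal example of the COCO result format for segmentation predictions.
# image_id must match an `id` from the GT `images` list (an image-NP pair),
# category_id is always 1 in this benchmark, and segmentation is a COCO RLE.
predictions = [
    {
        "image_id": 10000000,
        "category_id": 1,
        "segmentation": {"size": [600, 600], "counts": "<RLE string>"},
        "score": 0.97,
    },
]

with open("coco_predictions_segm.json", "w") as f:
    json.dump(predictions, f)
```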

For an example of how to run the evaluator on all subsets and aggregate the results, see the notebook saco_gold_silver_eval_example.ipynb. Alternatively, you can run:

```shell
python scripts/eval/gold/eval_sam3.py
```

If you have a prediction file for a given subset, you can run the evaluator for that subset alone using the standalone script. Example:

```shell
python scripts/eval/standalone_cgf1.py --pred_file /path/to/coco_predictions_segm.json --gt_files /path/to/annotations/gold_metaclip_merged_a_release_test.json /path/to/annotations/gold_metaclip_merged_b_release_test.json /path/to/annotations/gold_metaclip_merged_c_release_test.json
```

# Results

Here we collect segmentation results for SAM3 and several baselines. Note that baselines that do not produce masks are evaluated by converting their boxes to masks with SAM2.

Each cell reports cgF1 / IL_MCC / positive_micro_F1.

| Model | Average | Captioner metaclip | Captioner sa1b | Crowded | FG food | FG sport | Attributes | Wiki common |
|---|---|---|---|---|---|---|---|---|
| gDino-T | 3.25 / 0.15 / 16.2 | 2.89 / 0.21 / 13.88 | 3.07 / 0.2 / 15.35 | 0.28 / 0.08 / 3.37 | 0.96 / 0.1 / 9.83 | 1.12 / 0.1 / 11.2 | 13.75 / 0.29 / 47.3 | 0.7 / 0.06 / 12.14 |
| OWLv2* | 24.59 / 0.57 / 42 | 17.69 / 0.52 / 34.27 | 13.32 / 0.5 / 26.83 | 15.8 / 0.51 / 30.74 | 31.96 / 0.65 / 49.35 | 36.01 / 0.64 / 56.19 | 35.61 / 0.63 / 56.23 | 21.73 / 0.54 / 40.25 |
| OWLv2 | 17.27 / 0.46 / 36.8 | 12.21 / 0.39 / 31.33 | 9.76 / 0.45 / 21.65 | 8.87 / 0.36 / 24.77 | 24.36 / 0.51 / 47.85 | 24.44 / 0.52 / 46.97 | 25.85 / 0.54 / 48.22 | 15.4 / 0.42 / 36.64 |
| LLMDet-L | 6.5 / 0.21 / 27.3 | 4.49 / 0.23 / 19.36 | 5.32 / 0.23 / 22.81 | 2.42 / 0.18 / 13.74 | 5.5 / 0.19 / 29.12 | 4.39 / 0.17 / 25.34 | 22.17 / 0.39 / 57.13 | 1.18 / 0.05 / 23.3 |
| APE | 16.41 / 0.4 / 36.9 | 12.6 / 0.42 / 30.11 | 2.23 / 0.22 / 10.01 | 7.15 / 0.35 / 20.3 | 22.74 / 0.51 / 45.01 | 31.79 / 0.56 / 56.45 | 26.74 / 0.47 / 57.27 | 11.59 / 0.29 / 39.46 |
| DINO-X | 21.26 / 0.38 / 55.2 | 17.21 / 0.35 / 49.17 | 19.66 / 0.48 / 40.93 | 12.86 / 0.34 / 37.48 | 30.07 / 0.49 / 61.72 | 28.36 / 0.41 / 69.4 | 30.97 / 0.42 / 74.04 | 9.72 / 0.18 / 53.52 |
| Gemini 2.5 | 13.03 / 0.29 / 46.1 | 9.9 / 0.29 / 33.79 | 13.1 / 0.41 / 32.1 | 8.15 / 0.27 / 30.34 | 19.63 / 0.33 / 59.52 | 15.07 / 0.28 / 53.5 | 18.84 / 0.3 / 63.14 | 6.5 / 0.13 / 50.32 |
| SAM 3 | 54.06 / 0.82 / 66.11 | 47.26 / 0.81 / 58.58 | 53.69 / 0.86 / 62.55 | 61.08 / 0.9 / 67.73 | 53.41 / 0.79 / 67.28 | 65.52 / 0.89 / 73.75 | 54.93 / 0.76 / 72 | 42.53 / 0.7 / 60.85 |

# Annotation format

The annotation format is derived from the COCO format. Notable data fields are:

  • images: a list of dicts, one per image-NP pair. Each entry has the following fields.

    • id: an int feature, unique identifier for the image-NP pair
    • text_input: a string feature, the noun phrase for the image-NP pair
    • file_name: a string feature, the relative image path in the corresponding data folder
    • height/width: dimensions of the image
    • is_instance_exhaustive: Boolean (0 or 1). If 1, then all instances are correctly annotated; we only use these datapoints for instance segmentation. Otherwise, there may be missing instances or crowd segments (a segment covering multiple instances)
    • is_pixel_exhaustive: Boolean (0 or 1). If 1, then the union of all masks covers all pixels corresponding to the prompt. This is weaker than is_instance_exhaustive since it allows crowd segments. It can be used for semantic segmentation evaluations
  • annotations: a list of dicts containing all annotations, including bounding box, segmentation mask, area, etc.

    • image_id: an int feature, maps to the identifier of the image-NP pair in images
    • bbox: a list of float features containing the bounding box in [x,y,w,h] format, normalized by the image dimensions
    • segmentation: a dict feature containing the segmentation mask in RLE format
    • category_id: for compatibility with the COCO format; always 1 and unused
    • iscrowd: Boolean (0 or 1). If 1, then the segment overlaps several instances (used when instances are not separable, e.g. due to poor image quality)
  • categories: a list of dicts containing all categories. We provide the category key for compatibility with the COCO format, but in open-vocabulary detection we do not use it; instead, the text prompt is stored directly in each image (text_input in images). Note that in our setting, a unique image (id in images) actually corresponds to an (image, text prompt) combination.

We refer to an id in images that has corresponding annotations (i.e. appears as an image_id in annotations) as a "positive" NP, and to an id without any annotations as a "negative" NP.
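
The positive/negative split can be computed directly from an annotation file; a minimal sketch:

```python
import json

def split_positive_negative(ann_file):
    """Split image-NP pair ids into "positive" NPs (at least one annotation)
    and "negative" NPs (no annotations)."""
    with open(ann_file) as f:
        data = json.load(f)
    # ids of image-NP pairs that appear as image_id in the annotations
    annotated = {ann["image_id"] for ann in data["annotations"]}
    positives = [img["id"] for img in data["images"] if img["id"] in annotated]
    negatives = [img["id"] for img in data["images"] if img["id"] not in annotated]
    return positives, negatives
```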

A sample annotation from the Wiki-Food/Drink domain looks as follows:

## images

```json
[
  {
    "id": 10000000,
    "file_name": "1/1001/metaclip_1_1001_c122868928880ae52b33fae1.jpeg",
    "text_input": "chili",
    "width": 600,
    "height": 600,
    "queried_category": "0",
    "is_instance_exhaustive": 1,
    "is_pixel_exhaustive": 1
  },
  {
    "id": 10000001,
    "file_name": "1/1001/metaclip_1_1001_c122868928880ae52b33fae1.jpeg",
    "text_input": "the fish ball",
    "width": 600,
    "height": 600,
    "queried_category": "2001",
    "is_instance_exhaustive": 1,
    "is_pixel_exhaustive": 1
  }
]
```

## annotations

```json
[
  {
    "id": 1,
    "image_id": 10000000,
    "source": "manual",
    "area": 0.002477777777777778,
    "bbox": [
      0.44333332777023315,
      0.0,
      0.10833333432674408,
      0.05833333358168602
    ],
    "segmentation": {
      "counts": "`kk42fb01O1O1O1O001O1O1O001O1O00001O1O001O001O0000000000O1001000O010O02O001N10001N0100000O10O1000O10O010O100O1O1O1O1O0000001O0O2O1N2N2Nobm4",
      "size": [
        600,
        600
      ]
    },
    "category_id": 1,
    "iscrowd": 0
  },
  {
    "id": 2,
    "image_id": 10000000,
    "source": "manual",
    "area": 0.001275,
    "bbox": [
      0.5116666555404663,
      0.5716666579246521,
      0.061666667461395264,
      0.036666665226221085
    ],
    "segmentation": {
      "counts": "aWd51db05M1O2N100O1O1O1O1O1O010O100O10O10O010O010O01O100O100O1O00100O1O100O1O2MZee4",
      "size": [
        600,
        600
      ]
    },
    "category_id": 1,
    "iscrowd": 0
  }
]
```

# Data Stats

Here are the stats for the 7 annotation domains. The # Image-NPs column gives the total number of unique image-NP pairs, including both "positive" and "negative" NPs.

| Domain | Media | # Image-NPs | # Image-NP-Masks |
|---|---|---|---|
| MetaCLIP captioner NPs | MetaCLIP | 33393 | 20144 |
| SA-1B captioner NPs | SA-1B | 13258 | 30306 |
| Attributes | MetaCLIP | 9245 | 3663 |
| Crowded Scenes | MetaCLIP | 20687 | 50417 |
| Wiki-Common1K | MetaCLIP | 65502 | 6448 |
| Wiki-Food&Drink | MetaCLIP | 13951 | 9825 |
| Wiki-Sports Equipment | MetaCLIP | 12166 | 5075 |