---
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the [Meta Privacy
  Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- en
license: other
pretty_name: SACo-VEval
configs:
- config_name: SACo-VEval SA-V
  data_files:
  - split: test
    path: annotation/saco_veval_sav_test.json
  - split: val
    path: annotation/saco_veval_sav_val.json
- config_name: SACo-VEval YT-Temporal-1B
  data_files:
  - split: test
    path: annotation/saco_veval_yt1b_test.json
  - split: val
    path: annotation/saco_veval_yt1b_val.json
- config_name: SACo-VEval SmartGlasses
  data_files:
  - split: test
    path: annotation/saco_veval_smartglasses_test.json
  - split: val
    path: annotation/saco_veval_smartglasses_val.json
---

# SA-Co/VEval Dataset
**License**: each domain has its own license.
* SA-Co/VEval - SA-V: CC-BY-NC 4.0
* SA-Co/VEval - YT-Temporal-1B: CC-BY-NC 4.0
* SA-Co/VEval - SmartGlasses: CC-BY 4.0

**SA-Co/VEval** is an evaluation dataset comprising 3 domains; each domain has a val and a test split.
* SA-Co/VEval - SA-V: videos are from the [SA-V dataset](https://ai.meta.com/datasets/segment-anything-video/)
* SA-Co/VEval - YT-Temporal-1B: videos are from the [YT-Temporal-1B dataset](https://cove.thecvf.com/datasets/704)
* SA-Co/VEval - SmartGlasses: egocentric videos from [Smart Glasses](https://huggingface.co/datasets/facebook/SACo-VEval/blob/main/media/saco_sg.tar.gz)

This Hugging Face dataset repo contains the following contents:
```
datasets/facebook/SACo-VEval/tree/main/
├── annotation/
│   ├── saco_veval_sav_test.json
│   ├── saco_veval_sav_val.json
│   ├── saco_veval_smartglasses_test.json
│   ├── saco_veval_smartglasses_val.json
│   ├── saco_veval_yt1b_test.json
│   └── saco_veval_yt1b_val.json
└── media/
    ├── saco_sg.tar.gz
    └── yt1b_start_end_time.json
```
* annotation
  * all the ground-truth (GT) JSON files
* media
  * `saco_sg.tar.gz`: the preprocessed JPEGImages for SA-Co/VEval - SmartGlasses
  * `yt1b_start_end_time.json`: the YouTube video ids and the start and end times used in SA-Co/VEval - YT-Temporal-1B
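The SmartGlasses frames can be unpacked with Python's standard `tarfile` module. The sketch below builds a tiny stand-in archive so it is self-contained and runnable; with the real data you would open `media/saco_sg.tar.gz` directly, and the directory names inside the archive (`JPEGImages/sg_video_000/…`) are hypothetical here:

```python
import tarfile
import tempfile
from pathlib import Path

# Stand-in: create a tiny archive mimicking media/saco_sg.tar.gz
# (one JPEG frame in one video directory). With the real data, skip
# this step and point archive_path at the downloaded file.
workdir = Path(tempfile.mkdtemp())
archive_path = workdir / "saco_sg.tar.gz"
frame = workdir / "00000.jpg"
frame.write_bytes(b"\xff\xd8\xff\xd9")  # minimal JPEG start/end markers
with tarfile.open(archive_path, "w:gz") as tar:
    # Hypothetical layout: JPEGImages/<video_name>/<frame>.jpg
    tar.add(frame, arcname="JPEGImages/sg_video_000/00000.jpg")

# Extraction step, exactly as it would be done with the real archive.
output_dir = workdir / "extracted"
with tarfile.open(archive_path, "r:gz") as tar:
    tar.extractall(output_dir)

extracted = sorted(
    p.relative_to(output_dir).as_posix() for p in output_dir.rglob("*.jpg")
)
print(extracted)  # ['JPEGImages/sg_video_000/00000.jpg']
```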

More details on preparing the complete SA-Co/VEval dataset can be found in the [SAM 3 GitHub repo](https://github.com/facebookresearch/sam3/tree/main/scripts/eval/veval).

## Annotation Format
The format is similar to the [YTVIS](https://youtube-vos.org/dataset/vis/) format.

In an annotation JSON, e.g. `saco_veval_sav_test.json`, there are 5 fields:
* info
  * A dict containing the dataset info
  * E.g. `{'version': 'v1', 'date': '2025-09-24', 'description': 'SA-Co/VEval SA-V Test'}`
* videos
  * A list of the videos used in the current annotation JSON
  * Each entry contains {id, video_name, file_names, height, width, length}
* annotations
  * A list of **positive** masklets and their related info
  * Each entry contains {id, segmentations, bboxes, areas, iscrowd, video_id, height, width, category_id, noun_phrase}
  * video_id should match the `videos - id` field above
  * category_id should match the `categories - id` field below
  * segmentations is a list of [RLE](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/mask.py)-encoded masks
* categories
  * A **globally** used noun phrase id map, shared across all 3 domains
  * Each entry contains {id, name}
  * name is the noun phrase
* video_np_pairs
  * A list of video-noun phrase (video-np) pairs, both **positive** and **negative**, used in the current annotation JSON
  * Each entry contains {id, video_id, category_id, noun_phrase, num_masklets}
  * video_id should match the `videos - id` field above
  * category_id should match the `categories - id` field above
  * when `num_masklets > 0`, it is a positive video-np pair, and the corresponding masklets can be found in the annotations field
  * when `num_masklets = 0`, it is a negative video-np pair, meaning no masklet is present at all
```
data {
    "info": info
    "videos": [video]
    "annotations": [annotation]
    "categories": [category]
    "video_np_pairs": [video_np_pair]
}
video {
    "id": int
    "video_name": str  # e.g. sav_000000
    "file_names": List[str]
    "height": int
    "width": int
    "length": int
}
annotation {
    "id": int
    "segmentations": List[RLE]
    "bboxes": List[List[int, int, int, int]]
    "areas": List[int]
    "iscrowd": int
    "video_id": str
    "height": int
    "width": int
    "category_id": int
    "noun_phrase": str
}
category {
    "id": int
    "name": str
}
video_np_pair {
    "id": int
    "video_id": str
    "category_id": int
    "noun_phrase": str
    "num_masklets": int
}
```
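The cross-reference rules above can be checked with a short sketch on a toy annotation dict. The values (and the use of int ids throughout) are illustrative, not taken from the real files:

```python
# Toy annotation following the schema above, with one positive and one
# negative video-np pair.
data = {
    "info": {"version": "v1", "date": "2025-09-24",
             "description": "SA-Co/VEval SA-V Test"},
    "videos": [{"id": 1, "video_name": "sav_000000",
                "file_names": ["00000.jpg", "00001.jpg"],
                "height": 480, "width": 854, "length": 2}],
    "annotations": [{"id": 10, "segmentations": [None, None],
                     "bboxes": [[0, 0, 10, 10], [1, 1, 10, 10]],
                     "areas": [100, 100], "iscrowd": 0, "video_id": 1,
                     "height": 480, "width": 854,
                     "category_id": 7, "noun_phrase": "a dog"}],
    "categories": [{"id": 7, "name": "a dog"},
                   {"id": 8, "name": "a red ball"}],
    "video_np_pairs": [
        {"id": 100, "video_id": 1, "category_id": 7,
         "noun_phrase": "a dog", "num_masklets": 1},
        {"id": 101, "video_id": 1, "category_id": 8,
         "noun_phrase": "a red ball", "num_masklets": 0},
    ],
}

# Index videos and categories by id so pair references can be resolved.
videos = {v["id"]: v for v in data["videos"]}
categories = {c["id"]: c for c in data["categories"]}

# Split pairs by num_masklets: > 0 is positive, == 0 is negative.
positive = [p for p in data["video_np_pairs"] if p["num_masklets"] > 0]
negative = [p for p in data["video_np_pairs"] if p["num_masklets"] == 0]

for pair in data["video_np_pairs"]:
    assert pair["video_id"] in videos          # matches `videos - id`
    assert pair["category_id"] in categories   # matches `categories - id`

# Positive pairs have masklets in `annotations`; negative pairs have none.
masklets = {(a["video_id"], a["category_id"]) for a in data["annotations"]}
assert all((p["video_id"], p["category_id"]) in masklets for p in positive)
assert all((p["video_id"], p["category_id"]) not in masklets for p in negative)
print(len(positive), len(negative))  # prints: 1 1
```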
The SAM 3 GitHub notebook [sam3/examples/saco_veval_vis_example.ipynb](https://github.com/facebookresearch/sam3/blob/main/examples/saco_veval_vis_example.ipynb) shows examples of the data format and data visualization.