Paper: [SAM 2: Segment Anything in Images and Videos](https://arxiv.org/abs/2408.00714)
(Dataset viewer preview omitted: an `image` column of 512 px images and a `label` class column with 2 classes, 0 = images and 1 = masks.)
Dataset partitions for Federated SAM2-LoRA medical image segmentation across multiple Data Owners.
This dataset contains partitioned chest CT scans designed for federated learning experiments with heterogeneous clients. Each partition represents a different hospital/data owner with varying data availability and training capabilities.
Original data from Chest CT Segmentation on Kaggle.
| Data Owner | Type | Training Method | Contributes to FedAvg |
|---|---|---|---|
| DO1 | Zero-shot | CLIP text prompts | No |
| DO2 | Few-shot | Memory bank | No |
| DO3 | LoRA | Gradient training | Yes |
| DO4 | LoRA | Gradient training | Yes |
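Per the table, only DO3 and DO4 produce gradient updates that are aggregated server-side. A minimal FedAvg sketch of that aggregation step, assuming sample-count weighting and illustrative parameter names (this is not the repo's actual training code):

```python
# Minimal FedAvg sketch: weighted average of LoRA parameter updates from
# the gradient-training clients (DO3, DO4). DO1/DO2 do not contribute.
# Parameter names and the sample-count weighting are illustrative assumptions.

def fedavg(client_updates, client_sizes):
    """client_updates: {client: {param_name: list of floats}}
    client_sizes:   {client: number of local training samples}"""
    total = sum(client_sizes[c] for c in client_updates)
    first = next(iter(client_updates.values()))
    merged = {}
    for name, vec in first.items():
        merged[name] = [
            sum(client_updates[c][name][i] * client_sizes[c]
                for c in client_updates) / total
            for i in range(len(vec))
        ]
    return merged

# DO3 (30 train images) and DO4 (28) contribute, per the partition table below.
updates = {
    "do3": {"lora_A": [1.0, 2.0]},
    "do4": {"lora_A": [3.0, 4.0]},
}
merged = fedavg(updates, {"do3": 30, "do4": 28})  # weighted slightly toward DO3
```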
```
├── do2_fewshot/
│   ├── mock/          # Sample data for testing
│   │   ├── train/
│   │   └── test/
│   ├── private/       # Full training data
│   │   ├── train/
│   │   └── test/
│   └── README.md
├── do3_lora/
│   ├── mock/
│   ├── private/
│   └── README.md
├── do4_lora/
│   ├── mock/
│   ├── private/
│   └── README.md
└── README.md
```
Each split contains:

- `images/` - RGB JPEG chest CT slices
- `masks/` - Binary segmentation masks
- `train.csv` or `test.csv` - Image-mask mapping

| Partition | Private (train/test) | Mock (train/test) | Unique Patients |
|---|---|---|---|
| do2_fewshot | 4 / 2 | 2 / 1 | 6 |
| do3_lora | 30 / 8 | 8 / 2 | 38 |
| do4_lora | 28 / 8 | 7 / 2 | 36 |
| Total | 62 / 18 | 17 / 5 | 80 |
Download the dataset with `huggingface_hub`:

```python
from huggingface_hub import snapshot_download

# Download all partitions
snapshot_download(
    repo_id="khoaguin/chest-ct-segmentation",
    repo_type="dataset",
    local_dir="./dataset",
)

# Download a specific partition
snapshot_download(
    repo_id="khoaguin/chest-ct-segmentation",
    repo_type="dataset",
    allow_patterns="do3_lora/**",
    local_dir="./dataset",
)

# Download mock data only
snapshot_download(
    repo_id="khoaguin/chest-ct-segmentation",
    repo_type="dataset",
    allow_patterns="*/mock/**",
    local_dir="./dataset",
)
```
Load a partition with `pandas` and `PIL`:

```python
import pandas as pd
from PIL import Image
from pathlib import Path

# Load DO3 training data
data_path = Path("dataset/do3_lora/private/train")
df = pd.read_csv(data_path / "train.csv")

for _, row in df.iterrows():
    image = Image.open(data_path / "images" / row["ImageId"])
    mask = Image.open(data_path / "masks" / row["MaskId"])
```
The CSV files map each image to its mask:

```csv
ImageId,MaskId
ID00131637202220424084844_30.jpg,ID00131637202220424084844_mask_30.jpg
```
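The filenames appear to encode a patient identifier followed by a slice index, which is presumably how the "Unique Patients" column above is derived. A sketch of patient-level grouping, useful for keeping all slices of a patient in one split (the helper name and the naming-convention assumption are ours):

```python
from collections import defaultdict

def group_by_patient(image_ids):
    """Group ImageId filenames by patient.

    Assumes the convention <patientID>_<sliceIndex>.jpg seen in the CSVs.
    """
    groups = defaultdict(list)
    for image_id in image_ids:
        patient, _, _slice = image_id.rpartition("_")
        groups[patient].append(image_id)
    return dict(groups)

ids = [
    "ID00131637202220424084844_30.jpg",
    "ID00131637202220424084844_31.jpg",
]
groups = group_by_patient(ids)  # one patient, two slices
```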
If you use this dataset, please cite:

```bibtex
@misc{chest-ct-segmentation-fl,
  title={Chest CT Segmentation for Federated Learning},
  author={OpenMined},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/khoaguin/chest-ct-segmentation}
}
```
This dataset is released under CC BY 4.0.