---
title: OncoSeg Inference API
emoji: 🏥
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: cc-by-nc-4.0
---

# OncoSeg Medical Image Segmentation API

GPU-accelerated segmentation for CT and MRI volumes using the OncoSeg/MedSAM3 model.

## Features

- **Text-prompted segmentation**: describe what to find (e.g., `"tumor"`, `"lesion"`)
- **Multiple organ checkpoints**: brain, liver, breast, lung, kidney, spine
- **NIfTI support**: upload `.nii` or `.nii.gz` files
- **API-first design**: programmatic access for integration with viewers

## API Endpoints

### `POST /api/segment_slice_api`

Segment a single slice from a volume.

**Request:**

```json
{
    "nifti_b64": "<base64-encoded NIfTI file>",
    "slice_idx": 77,
    "text_prompt": "tumor",
    "checkpoint": "brain"
}
```

**Response:**

```json
{
    "success": true,
    "mask_b64": "<base64-encoded mask>",
    "mask_shape": [240, 240],
    "contours": [[[y1, x1], [y2, x2], ...]],
    "slice_idx": 77,
    "inference_time_ms": 1234
}
```

### `POST /api/segment_volume_api`

Segment an entire volume and return contours for all slices with detections.

**Request:**

```json
{
    "nifti_b64": "<base64-encoded NIfTI file>",
    "text_prompt": "tumor",
    "checkpoint": "brain",
    "skip_empty": true,
    "min_area": 50
}
```

**Response:**

```json
{
    "success": true,
    "contours": {
        "32": [[[y, x], ...]],
        "33": [[[y, x], ...]],
        ...
    },
    "num_slices": 155,
    "slices_with_tumor": ["32", "33", ...],
    "inference_time_ms": 45000
}
```
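Note that the volume response keys its contour map by slice index as *strings* (a JSON constraint). As a minimal client-side sketch of handling this (the helper name below is our own, not part of the API):

```python
def contours_by_slice(response: dict) -> dict:
    """Convert the string-keyed contour map returned by segment_volume_api
    into an int-keyed dict, ordered by slice index."""
    if not response.get("success"):
        return {}
    return {
        int(idx): pts
        for idx, pts in sorted(response["contours"].items(), key=lambda kv: int(kv[0]))
    }

# Mock response shaped like the schema above (values are illustrative)
mock = {
    "success": True,
    "contours": {"33": [[[10, 12], [11, 12]]], "32": [[[9, 12]]]},
    "num_slices": 155,
    "slices_with_tumor": ["32", "33"],
    "inference_time_ms": 45000,
}
by_slice = contours_by_slice(mock)
print(sorted(by_slice))  # [32, 33]
```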

## Available Checkpoints

| Checkpoint | Organ/Task | Best For |
|------------|------------|----------|
| `brain`  | Glioblastoma (BraTS)   | Brain MRI (FLAIR) |
| `liver`  | Liver lesions          | Abdominal CT |
| `breast` | Breast tumor (DCE-MRI) | Breast MRI |
| `lung`   | Lung cancer (NSCLC)    | Chest CT |
| `kidney` | Kidney tumor (KiTS)    | Abdominal CT |
| `spine`  | Spine structures       | CT |

## Usage Example (Python)

```python
import base64

import numpy as np
import requests

# Read and base64-encode the NIfTI file
with open("brain_mri.nii.gz", "rb") as f:
    nifti_b64 = base64.b64encode(f.read()).decode()

# Call the API
response = requests.post(
    "https://tp53-oncoseg-api.hf.space/api/segment_slice_api",
    json={
        "nifti_b64": nifti_b64,
        "slice_idx": 77,
        "text_prompt": "tumor",
        "checkpoint": "brain",
    },
    timeout=120,  # allow time for a cold start
)

result = response.json()

if result["success"]:
    # Decode the flat byte buffer back into a 2D mask
    mask_bytes = base64.b64decode(result["mask_b64"])
    mask = np.frombuffer(mask_bytes, dtype=np.uint8).reshape(result["mask_shape"])

    # Use the contours for visualization
    contours = result["contours"]
    print(f"Found {len(contours)} contours in {result['inference_time_ms']}ms")
```
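Once the mask is decoded, per-slice statistics such as lesion area and bounding box need nothing beyond NumPy. A small sketch (the helper name and the synthetic mask are ours, purely for illustration):

```python
import numpy as np

def mask_stats(mask: np.ndarray):
    """Return (pixel_count, (y0, x0, y1, x1)) for the nonzero region of a
    2D binary mask, or (0, None) if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0, None
    bbox = (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))
    return int(ys.size), bbox

# Synthetic 240x240 mask with a 10x15 rectangular "lesion"
mask = np.zeros((240, 240), dtype=np.uint8)
mask[100:110, 120:135] = 1
area, bbox = mask_stats(mask)
print(area, bbox)  # 150 (100, 120, 109, 134)
```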

## Integration with OncoSeg Viewer

This Space is designed to work with OncoSeg Viewer, a browser-based medical image viewer.

Set these environment variables in the viewer:

```shell
export INFERENCE_MODE=hf
export HF_SPACE_URL=https://tp53-oncoseg-api.hf.space
```

## Performance

| Metric | Value |
|--------|-------|
| Cold start | 10-30 s (model loading) |
| Warm inference | 1-3 s per slice |
| Full volume (155 slices) | 3-5 minutes |

## License

CC BY-NC 4.0. For research and non-commercial use only.

## Citation

If you use OncoSeg in your research, please cite:

```bibtex
@software{oncoseg2025,
  title = {OncoSeg: Medical Image Segmentation with MedSAM3},
  year = {2025},
  url = {https://huggingface.co/spaces/tp53/oncoseg-api}
}
```