AI-Based Intelligent Extraction of Diseased Areas from Medical Radiography Images

1. Overview

This dataset is a refined and preprocessed version of the Brain Stroke CT Dataset, specifically optimized for semantic segmentation tasks using deep learning architectures like U-Net. This work is part of a research project focused on automating the extraction of pathological regions (Bleeding and Ischemia) from head CT scans to assist in rapid clinical diagnosis.

2. Source & Attribution

The raw data was originally sourced from the Brain Stroke CT Dataset on Kaggle, which was prepared by radiologists and supported by the Turkish Health Institutes (TUSEB).

3. Preprocessing & Methodology

The original dataset provided raw CT images and "Overlay" images where specialists manually highlighted diseased areas using specific colors (Red and Green). To make this data compatible with high-precision segmentation models, a custom Python pipeline was developed to:

  • Binary Mask Extraction: Used HSV color masking to isolate the clinicians' annotations from the overlays, converting them into pure binary masks (White: Diseased, Black: Healthy).
  • Normal Case Synthesis: Generated corresponding empty (all-black) masks for healthy brain images to maintain consistency during model training.
  • Structural Standardization: Organized the data into a clean images/ and masks/ hierarchy for each category (Normal, Bleeding, Ischemia); a quick consistency check of this layout is sketched below.
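
As a quick way to verify the resulting structure, the following minimal sketch counts image/mask pairs per category. The root path is illustrative; adjust it to wherever the processed dataset is extracted:

import os

root = "/content/brain-stroke"  # illustrative path to the processed dataset

for category in ["Normal", "Bleeding", "Ischemia"]:
    images = set(os.listdir(os.path.join(root, category, "images")))
    masks = set(os.listdir(os.path.join(root, category, "masks")))
    # Every image should have a mask with the same filename
    print(f"{category}: {len(images)} images, {len(masks)} masks, "
          f"{len(images - masks)} images without a mask")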

4. Why This Custom Pipeline?

The raw Kaggle dataset is excellent for classification, but lacks ready-to-use binary masks for pixel-wise segmentation. By applying this custom script:

  1. We standardized the labels for U-Net ingestion (see the loading sketch after this list).
  2. We ensured that the model learns the exact morphology of the stroke rather than just its general location.
  3. We reduced data noise by filtering out non-essential metadata and raw DICOM formats, focusing on standardized PNG outputs.
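
To illustrate point 1, here is a minimal sketch of how the resulting image/mask pairs could be fed to a U-Net in PyTorch. The class name, resolution, and normalization are illustrative assumptions, not part of the pipeline itself:

import os
import cv2
import numpy as np
import torch
from torch.utils.data import Dataset

class StrokeSegDataset(Dataset):
    """Pairs each CT image with its binary mask (illustrative sketch)."""

    def __init__(self, root, category, size=512):
        self.img_dir = os.path.join(root, category, "images")
        self.mask_dir = os.path.join(root, category, "masks")
        # Keep only images that have a mask with the same filename
        self.names = sorted(
            n for n in os.listdir(self.img_dir)
            if os.path.exists(os.path.join(self.mask_dir, n))
        )
        self.size = size

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        img = cv2.imread(os.path.join(self.img_dir, name), cv2.IMREAD_GRAYSCALE)
        mask = cv2.imread(os.path.join(self.mask_dir, name), cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (self.size, self.size)) / 255.0
        mask = cv2.resize(mask, (self.size, self.size), interpolation=cv2.INTER_NEAREST)
        # (1, H, W) float tensors: grayscale input and a 0/1 target for U-Net
        x = torch.from_numpy(img).float().unsqueeze(0)
        y = torch.from_numpy((mask > 127).astype(np.float32)).unsqueeze(0)
        return x, y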

5. Implementation Script

The following script was used to generate this processed version:

import os
import shutil
import cv2
import numpy as np
from tqdm import tqdm
import kagglehub
from huggingface_hub import HfApi, create_repo

# --- Account settings and repository links ---
my_token = "***********"
repo_name = "iraqigold/brain-stroke-ct-dataset"

# Working paths inside Colab
raw_path = "/content/brain-stroke-ct-dataset"
output_folder = "/content/brain-stroke"
zip_name = "/content/brain-stroke-dataset-v1"

# --- Clean the workspace and download the data ---
print("Cleaning folders and downloading the data from Kaggle...")
if os.path.exists(raw_path): shutil.rmtree(raw_path)
if os.path.exists(output_folder): shutil.rmtree(output_folder)

# Download the dataset
tmp_path = kagglehub.dataset_download("ozguraslank/brain-stroke-ct-dataset")
shutil.copytree(tmp_path, raw_path)

# Resolve the correct data root inside the downloaded dataset
data_root = raw_path
if "Brain_Stroke_CT_Dataset" in os.listdir(raw_path):
    data_root = os.path.join(raw_path, "Brain_Stroke_CT_Dataset")

# --- Mask extraction from the annotation colors (red and green) ---
def get_mask(path):
    # Read the image and convert it to HSV so the colors can be isolated
    image = cv2.imread(path)
    hsv_img = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    
    # HSV ranges for red and green (red wraps around the hue axis, so it needs two ranges)
    r1_low, r1_high = np.array([0, 70, 50]), np.array([10, 255, 255])
    r2_low, r2_high = np.array([170, 70, 50]), np.array([180, 255, 255])
    g_low, g_high = np.array([35, 50, 50]), np.array([85, 255, 255])
    
    # Build a mask for each color and combine them
    m1 = cv2.inRange(hsv_img, r1_low, r1_high)
    m2 = cv2.inRange(hsv_img, r2_low, r2_high)
    m3 = cv2.inRange(hsv_img, g_low, g_high)
    
    final_mask = cv2.bitwise_or(cv2.bitwise_or(m1, m2), m3)
    # Convert to pure black and white (binary)
    _, thresh = cv2.threshold(final_mask, 1, 255, cv2.THRESH_BINARY)
    return thresh

# --- Create the output folders and process the images ---
print("Creating the folder structure and extracting the masks...")
folders = ["Bleeding", "Ischemia", "Normal"]

for f in folders:
    os.makedirs(os.path.join(output_folder, f, "images"), exist_ok=True)
    os.makedirs(os.path.join(output_folder, f, "masks"), exist_ok=True)

# Iterate over each category and process its images
for folder in folders:
    print(f"جاري معالجة نوع: {folder}")
    path_now = os.path.join(data_root, folder)
    
    png_folder = os.path.join(path_now, "PNG")
    overlay_folder = os.path.join(path_now, "OVERLAY")
    
    if not os.path.exists(png_folder): continue

    for img_name in tqdm(os.listdir(png_folder)):
        if not img_name.lower().endswith(('.png', '.jpg', '.jpeg')): continue
        
        # Copy the original image to its new location
        old_img = os.path.join(png_folder, img_name)
        new_img = os.path.join(output_folder, folder, "images", img_name)
        shutil.copy(old_img, new_img)
        
        # Path where the new mask will be saved
        new_mask = os.path.join(output_folder, folder, "masks", img_name)
        
        if folder == "Normal":
            # Healthy case: create an empty all-black mask of the same size
            temp_img = cv2.imread(old_img)
            empty_mask = np.zeros((temp_img.shape[0], temp_img.shape[1]), dtype=np.uint8)
            cv2.imwrite(new_mask, empty_mask)
        else:
            # Diseased case: extract the mask from the colored Overlay image
            ov_path = os.path.join(overlay_folder, img_name)
            if os.path.exists(ov_path):
                binary_res = get_mask(ov_path)
                cv2.imwrite(new_mask, binary_res)

# --- Compress the data and upload it ---
print("Compressing the files...")
shutil.make_archive(zip_name, 'zip', output_folder)
size = os.path.getsize(f"{zip_name}.zip") / (1024*1024)
print(f"حجم الملف المضغوط: {size:.2f} MB")

print("بدء عملية الرفع إلى Hugging Face...")
hf_api = HfApi()
create_repo(repo_name, token=my_token, repo_type="dataset", exist_ok=True)

# Upload the unpacked folder so it can be previewed on the Hub
hf_api.upload_folder(
    folder_path=output_folder,
    repo_id=repo_name,
    token=my_token,
    repo_type="dataset"
)

# Upload the zip file for quick download
hf_api.upload_file(
    path_or_fileobj=f"{zip_name}.zip",
    path_in_repo="brain-stroke-dataset-v1.zip",
    repo_id=repo_name,
    token=my_token,
    repo_type="dataset"
)

print(f"تم بنجاح! الرابط: https://huggingface.co/datasets/{repo_name}")
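
To reuse the processed data, either the unpacked folders or the zip archive can be pulled back from the Hub. A minimal sketch using huggingface_hub (the filename matches the one uploaded by the script above):

from huggingface_hub import hf_hub_download

# Fetch the packaged zip from the dataset repository
zip_path = hf_hub_download(
    repo_id="iraqigold/brain-stroke-ct-dataset",
    filename="brain-stroke-dataset-v1.zip",
    repo_type="dataset",
)
print(f"Downloaded to: {zip_path}")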