---
license: cc-by-nc-nd-4.0
task_categories:
- visual-question-answering
- object-detection
- image-to-text
language:
- en
tags:
- medical
- surgery
- pituitary
- spatial-reasoning
- instrument-detection
- surgical-workflow
- vision-language
- qwen2-vl
- lora
- coordinates
- prototype
size_categories:
- 1K<n<10K
pretty_name: PitVQA Spatial VLM Dataset (Early Version)
---
# PitVQA Spatial VLM Dataset (Early Version)

Early prototype spatial localization dataset for pituitary surgery. **Note:** For production use, please use mmrech/pitvqa-comprehensive-spatial, which has 10,139 validated samples.

- GitHub: https://github.com/matheus-rech/pit_project
- Updated version: mmrech/pitvqa-comprehensive-spatial (recommended)
- Original dataset: UCL Research Data Repository
## ⚠️ Important Notice

This is an early prototype version of the spatial localization dataset. For current research and production use, we recommend using mmrech/pitvqa-comprehensive-spatial instead.
### Why Use the Comprehensive Version?

| Feature | This Dataset (Early) | Comprehensive (Current) |
|---|---|---|
| Samples | ~3,000-5,000 | 10,139 |
| Validation | Partial | 100% verified |
| Coverage | Limited | Complete workflow |
| Documentation | Basic | Comprehensive |
| Model Performance | Baseline | State-of-the-art |
| Recommended | ❌ No | ✅ Yes |
## Dataset Description

This early-stage dataset contains spatial annotations for surgical instrument localization in pituitary surgery. It served as a proof of concept for the spatial localization task.

### Key Features

- **Spatial Coordinates**: Normalized (x, y) coordinates on a 0-100 scale
- **Surgical Instruments**: Basic instrument categories
- **Prototype Phase**: Early development version
- **Limited Coverage**: Subset of the complete surgical workflow
## Historical Context

This dataset was created during the initial development phase of the PitVQA spatial localization project. It helped establish:

- Feasibility of spatial localization with VLMs
- Coordinate format (normalized 0-100 scale)
- Question-answering structure for spatial queries
- Baseline performance metrics
### Evolution Path

```text
pitvqa-unified-vlm (Classification)
        ↓
pitvqa-spatial-vlm (Early Spatial)        ← You are here
        ↓
pitvqa-comprehensive-spatial (Production) ← Recommended
```
## Data Format

### Sample Structure

```python
{
    "image": PIL.Image,    # Surgical frame
    "question": str,       # Spatial query
    "answer": str,         # Format: "<point x='45.2' y='68.3'>object</point>"
    "video_id": str,       # Source video
    "frame_number": int    # Frame index
}
```

### Coordinate Format

```
<point x='45.2' y='68.3'>suction device</point>
```
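The `<point>` answers can be parsed with a short regular expression. The helpers below are a sketch, not part of the dataset's tooling: `parse_point` extracts the label and the normalized 0-100 coordinates, and `to_pixels` maps them onto a frame of a given size (the 1280x720 resolution is an assumed example, not a documented property of the dataset).

```python
import re

# Matches the dataset's answer format: <point x='45.2' y='68.3'>object</point>
POINT_RE = re.compile(r"<point x='([0-9.]+)' y='([0-9.]+)'>(.*?)</point>")

def parse_point(answer: str):
    """Extract (x, y, label) from an answer string, or None if no point is found."""
    m = POINT_RE.search(answer)
    if m is None:
        return None
    return float(m.group(1)), float(m.group(2)), m.group(3)

def to_pixels(x: float, y: float, width: int, height: int):
    """Map normalized 0-100 coordinates to pixel coordinates for a given frame size."""
    return x / 100 * width, y / 100 * height

x, y, label = parse_point("<point x='45.2' y='68.3'>suction device</point>")
px, py = to_pixels(x, y, 1280, 720)  # pixel position on an assumed 1280x720 frame
```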
## Migration Guide

### Upgrading to the Comprehensive Version

If you are currently using this dataset, migration is straightforward:

```python
from datasets import load_dataset

# Old (early version)
dataset_old = load_dataset("mmrech/pitvqa-spatial-vlm")

# New (comprehensive version) - recommended
dataset_new = load_dataset("mmrech/pitvqa-comprehensive-spatial")

# Same format, just more data and better validation
```
### Training Configuration

For LoRA training, use the same configuration as the comprehensive version:

```python
from trl import SFTTrainer
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```

However, we recommend training on the comprehensive version for better performance.
## Performance Comparison

### Early Version (This Dataset)

| Metric | Value |
|---|---|
| Quadrant Accuracy | ~35-40% |
| Coordinate MAE | ~18-20% |
| Status | Baseline |

### Comprehensive Version (Recommended)

| Metric | Value | Improvement |
|---|---|---|
| Quadrant Accuracy | 80.3% | +124% |
| Coordinate MAE | 12.1% | -40% |
| Status | State-of-the-art | ✅ |

Models trained on the comprehensive version achieve a 124% relative improvement in quadrant accuracy over this early version.
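The card does not formally define quadrant accuracy. A plausible reading, sketched below under that assumption, is that a prediction counts as correct when its point lands in the same image quadrant (split at x = 50 and y = 50 on the 0-100 scale) as the ground-truth point; how the original evaluation treats points exactly on a boundary is unknown.

```python
def quadrant(x: float, y: float) -> str:
    """Assign a normalized 0-100 point to one of four image quadrants.

    Assumption: the image is split at x = 50 and y = 50; boundary
    handling here (>= goes right/bottom) is a guess, not documented.
    """
    return ("top" if y < 50 else "bottom") + "-" + ("left" if x < 50 else "right")

def quadrant_accuracy(predictions, ground_truth) -> float:
    """Fraction of predicted points that land in the ground-truth quadrant."""
    hits = sum(quadrant(*p) == quadrant(*g)
               for p, g in zip(predictions, ground_truth))
    return hits / len(ground_truth)

# Example: the first prediction shares a quadrant with its target, the second does not.
acc = quadrant_accuracy([(45.2, 68.3), (90.0, 10.0)],
                        [(30.0, 70.0), (20.0, 80.0)])  # 0.5
```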
## Use Cases

### Appropriate Use Cases

- **Historical Research**: Understanding the evolution of spatial VLMs
- **Ablation Studies**: Comparing the effect of data quantity
- **Baseline Comparisons**: Establishing improvement metrics
- **Educational Demos**: Simple proof-of-concept examples

### Not Recommended For

- ❌ Production models (use the comprehensive version)
- ❌ MICCAI/journal publications (use the comprehensive version)
- ❌ Clinical research (use the comprehensive version)
- ❌ Benchmark evaluations (use the comprehensive version)
## Training Usage

### Recommended Approach

Don't train on this dataset. Instead:

```python
from datasets import load_dataset

# Use the comprehensive version
dataset = load_dataset("mmrech/pitvqa-comprehensive-spatial")

# Follow the training guide:
# https://github.com/matheus-rech/pit_project/blob/main/notebooks/train_spatial_qwen2vl_colab.ipynb
```

### If You Must Use This Dataset

```python
from datasets import load_dataset

# Load the early version (not recommended)
dataset = load_dataset("mmrech/pitvqa-spatial-vlm")

# Same training procedure as the comprehensive version,
# but expect lower performance (~35-40% vs 80.3% quadrant accuracy)
```
## Limitations

### Dataset Limitations

- **Limited Samples**: Smaller dataset than the comprehensive version
- **Incomplete Coverage**: Not all surgical phases are covered
- **Partial Validation**: Not fully validated for ground-truth fidelity
- **Lower Performance**: Models trained on this dataset achieve ~35-40% quadrant accuracy vs 80.3%

### Technical Limitations

- **Data Quality**: Less rigorous validation than the comprehensive version
- **Documentation**: Limited compared to the production dataset
- **Support**: Community support is focused on the comprehensive version

### Superseded Status

⚠️ This dataset has been superseded by mmrech/pitvqa-comprehensive-spatial.
## Ethical Considerations

Same ethical considerations as the comprehensive version:

- ✅ De-identified patient data
- ✅ Institutional ethics approval
- ✅ Not for clinical use

## License

CC-BY-NC-ND-4.0 (Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International)

Same license as the comprehensive version.
## Citation

If citing this early work, please also cite the comprehensive version:

```bibtex
@misc{rech2026pitvqa_spatial_early,
  author       = {Rech, Matheus},
  title        = {PitVQA Spatial VLM Dataset (Early Version)},
  year         = {2026},
  publisher    = {HuggingFace},
  note         = {Early prototype. See pitvqa-comprehensive-spatial for production use.},
  howpublished = {\url{https://huggingface.co/datasets/mmrech/pitvqa-spatial-vlm}}
}

@misc{rech2026pitvqa_spatial_dataset,
  author       = {Rech, Matheus},
  title        = {PitVQA Comprehensive Spatial Dataset},
  year         = {2026},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/mmrech/pitvqa-comprehensive-spatial}},
  note         = {Recommended version with 10,139 validated samples}
}
```
## Recommended Resources

Instead of this dataset, use:

- **Dataset**: mmrech/pitvqa-comprehensive-spatial
- **Model**: mmrech/pitvqa-qwen2vl-spatial
- **GitHub**: https://github.com/matheus-rech/pit_project
- **Training Guide**: Colab Notebook

## Dataset Card Authors

Matheus Rech

## Contact

- GitHub: https://github.com/matheus-rech/pit_project
- HuggingFace: https://huggingface.co/mmrech
- Questions: Please open an issue on GitHub
## Changelog

### Version 1.0.0 (Early 2026)

- Initial early prototype release
- Basic spatial localization annotations
- Proof of concept for the spatial VLM task

### Current Status: Superseded

- Superseded by: mmrech/pitvqa-comprehensive-spatial
- Recommendation: Use the comprehensive version for all new projects

⚠️ **Deprecation Notice**: This early version is provided for historical reference and reproducibility of early experiments. For current research, please use mmrech/pitvqa-comprehensive-spatial, which provides 10,139 validated samples and achieves 80.3% quadrant accuracy versus ~35-40% with this early version.