---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: condition
      dtype: string
    - name: medical_specialty
      dtype: string
    - name: question_type
      dtype: string
  splits:
    - name: train
      num_bytes: 239265644
      num_examples: 384095
  download_size: 95059520
  dataset_size: 239265644
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-4.0
task_categories:
  - question-answering
language:
  - pt
tags:
  - medical
pretty_name: MedPT
size_categories:
  - 100K<n<1M
---

## Dataset Summary

MedPT is the first large-scale, real-world corpus for the Brazilian Portuguese medical domain. It comprises 384,095 authentic question-answer pairs derived from patient-doctor interactions, covering more than 3,200 distinct health conditions across 104 medical specialties.

This dataset is designed to advance natural language processing in healthcare for Portuguese speakers by capturing unique clinical and cultural nuances, including endemic diseases and region-specific colloquialisms often lost in translated datasets.

## Dataset Structure

### Data Instances

A typical instance in the dataset consists of a patient's medical question, an expert's answer, and associated metadata regarding the medical condition, the answering doctor's specialty, and the semantic intent of the question.

Example:

```json
{
  "id": 0,
  "question": "Fazer aplicação de ácido hialurônico no ombro para melhorar o movimento da capsulite adesiva funciona?",
  "answer": "Pode funcionar,associado com fisioterapia.",
  "condition": "Capsulite adesiva",
  "medical_specialty": "Ortopedista - traumatologista",
  "question_type": "Tratamento"
}
```
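For downstream code, the schema above maps directly onto a typed record. A minimal sketch (the `MedPTRecord` name is illustrative, not part of the dataset):

```python
# Sketch: a TypedDict mirroring the MedPT feature schema declared
# in the metadata above. "MedPTRecord" is an illustrative name.
from typing import TypedDict


class MedPTRecord(TypedDict):
    id: int
    question: str
    answer: str
    condition: str
    medical_specialty: str
    question_type: str


# The example instance from this section, as a typed record.
example: MedPTRecord = {
    "id": 0,
    "question": (
        "Fazer aplicação de ácido hialurônico no ombro para "
        "melhorar o movimento da capsulite adesiva funciona?"
    ),
    "answer": "Pode funcionar,associado com fisioterapia.",
    "condition": "Capsulite adesiva",
    "medical_specialty": "Ortopedista - traumatologista",
    "question_type": "Tratamento",
}

# The record carries exactly the six declared fields.
assert set(example) == set(MedPTRecord.__annotations__)
```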

### Data Fields

- `id` (int64): A unique identifier for the question-answer pair.
- `question` (string): The patient's original inquiry in Brazilian Portuguese. These queries often describe symptoms, test results, or seek information about medications and treatments.
- `answer` (string): A response provided by a licensed healthcare professional.
- `condition` (string): The specific health topic the question addresses, typically a disease or condition (e.g., Diabetes, Acne, Hypertension).
- `medical_specialty` (string): The specialization of the professional who authored the answer (e.g., General Practitioner, Psychologist, Nutritionist).
- `question_type` (string): The semantic intent of the question, classified into one of seven categories: Diagnosis, Treatment, Anatomy and Physiology, Epidemiology, Healthy Lifestyle, Choosing Healthcare Professionals, or Other.
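The metadata fields make it easy to slice the corpus, e.g. by intent or by answering specialty. A small sketch over toy records (the records below are illustrative, not real dataset rows beyond the example instance's labels):

```python
# Sketch: tallying MedPT records by question_type.
# These toy records are illustrative; only the label values
# "Tratamento" / "Ortopedista - traumatologista" appear in the
# example instance above.
from collections import Counter

records = [
    {"question_type": "Tratamento",
     "medical_specialty": "Ortopedista - traumatologista"},
    {"question_type": "Diagnóstico",
     "medical_specialty": "Dermatologista"},
    {"question_type": "Tratamento",
     "medical_specialty": "Dermatologista"},
]

# Count questions per semantic intent.
by_type = Counter(r["question_type"] for r in records)
print(by_type)  # Counter({'Tratamento': 2, 'Diagnóstico': 1})

# Keep only treatment-intent questions.
treatment = [r for r in records if r["question_type"] == "Tratamento"]
```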

## Dataset Creation

### Source Data

The data was collected from Doctoralia, a widely used online platform where Brazilian patients ask health-related questions and receive responses from verified doctors. The initial collection phase gathered approximately 2 million raw samples. All questions are posted anonymously, preserving the privacy and confidentiality of the users.

### Data Cleaning and Curation

To ensure high quality, the corpus underwent a rigorous multi-stage curation protocol:

- Normalization: HTML tags, URLs, special characters, and extraneous whitespace were removed.
- Deduplication: Redundant entries associated with multiple medical specialties were consolidated based on the core question-answer text.
- Contextual Enrichment: Incomplete patient queries (e.g., "What is the treatment?") were enriched by prepending the corresponding condition to the text to make them self-contained.
- Answer Filtering: Uninformative answers (fewer than 10 tokens) and generic articles (exceeding 1,000 tokens) were removed to optimize for direct Q&A tasks.
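The curation steps can be sketched as three small functions. This is a minimal illustration under stated assumptions: the exact regexes, the whitespace-based tokenizer, and the substring heuristic for deciding when to prepend the condition are assumptions, not the paper's published protocol.

```python
# Sketch of the curation protocol described above. The regexes,
# whitespace tokenization, and enrichment heuristic are assumptions;
# the dataset's actual pipeline may differ.
import re


def normalize(text: str) -> str:
    """Normalization: strip HTML tags, URLs, and extra whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)       # HTML tags
    text = re.sub(r"https?://\S+", " ", text)  # URLs
    return re.sub(r"\s+", " ", text).strip()   # extraneous whitespace


def enrich(question: str, condition: str) -> str:
    """Contextual enrichment: prepend the condition if it is absent."""
    if condition.lower() in question.lower():
        return question
    return f"{condition}: {question}"


def keep_answer(answer: str, min_tokens: int = 10,
                max_tokens: int = 1000) -> bool:
    """Answer filtering: drop answers with <10 or >1,000 tokens."""
    n = len(answer.split())
    return min_tokens <= n <= max_tokens
```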

### Annotations

To provide a fine-grained understanding of user intent, the question_type feature was synthetically generated using the gpt-oss-120b model. The model classified each question into one of the seven predefined semantic categories based solely on the question's content.
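The classification step can be illustrated as prompt construction over the seven categories. The actual prompt used with gpt-oss-120b is not reproduced here; the wording below is an assumption.

```python
# Illustrative sketch of the question_type annotation step: a
# zero-shot classification prompt over the seven categories listed
# above. The actual prompt used with gpt-oss-120b is not published
# in this card; this wording is an assumption.
CATEGORIES = [
    "Diagnosis",
    "Treatment",
    "Anatomy and Physiology",
    "Epidemiology",
    "Healthy Lifestyle",
    "Choosing Healthcare Professionals",
    "Other",
]


def build_prompt(question: str) -> str:
    """Build a single-label classification prompt for one question."""
    options = "\n".join(f"- {c}" for c in CATEGORIES)
    return (
        "Classify the medical question below into exactly one category, "
        "based solely on the question's content.\n"
        f"Categories:\n{options}\n\n"
        f"Question: {question}\n"
        "Category:"
    )
```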

## Benchmark and Baselines

MedPT was validated on a multi-class medical specialty classification task. Fine-tuning the Qwen3 1.7B model on the dataset established strong baselines:

- Fine-tuning achieved a 0.94 F1-score on a challenging 20-class setup.
- The dataset also supports robust few-shot (in-context learning) use: a 3-shot prompt yielded a 0.73 F1-score on the same 20-class task.

## Ethical Considerations

The MedPT dataset and any models developed using it are intended to function strictly as assistive tools to support, not replace, the judgment of qualified healthcare professionals. AI systems trained on this data should not be used as a substitute for professional medical consultation, diagnosis, or treatment. The dataset relies on historically collected responses, and models may produce outputs that are plausible but clinically inaccurate.

## Citation

If you use this dataset in your research, please cite the following paper:

```bibtex
@article{farber2025medpt,
  title={MedPT: A Massive Medical Question Answering Dataset for Brazilian-Portuguese Speakers},
  author={F{\"a}rber, Fernanda Bufon and Brito, Iago Alves and Dollis, Julia Soares and Ribeiro, Pedro Schindler Freire Brasil and Sousa, Rafael Teixeira and others},
  journal={arXiv preprint arXiv:2511.11878},
  year={2025}
}
```