| --- |
| task_categories: |
| - text-generation |
| language: |
| - zh |
| - en |
| tags: |
| - chemistry |
| - medical |
| size_categories: |
| - 1M<n<10M |
| viewer: true |
| configs: |
| - config_name: default |
| data_files: |
| - split: AnesCorpus_en |
| path: |
| - AnesCorpus_en-*.parquet |
| - split: AnesCorpus_zh |
| path: |
| - AnesCorpus_zh-*.parquet |
| license: c-uda |
| --- |
| |
The AnesBench Datasets Collection comprises three distinct datasets: **AnesBench**, an anesthesiology reasoning benchmark; **AnesQA**, a supervised fine-tuning (SFT) dataset; and **AnesCorpus**, a continual pre-training corpus. This repository hosts AnesCorpus. For the other two, see [AnesBench](https://huggingface.co/datasets/MiliLab/AnesBench) and [AnesQA](https://huggingface.co/datasets/MiliLab/AnesQA).
|
|
|
|
| ### AnesCorpus |
|
|
**AnesCorpus** is a large-scale, domain-specific corpus constructed for **Continual Pre-Training (CPT)** in the field of anesthesiology. It is built from two primary sources:
|
|
| - **Domain-specific filtering** from large-scale corpora such as [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), using keyword-based heuristics. |
| - **PubMed research articles** related to anesthesiology, processed through rigorous cleaning and formatting to ensure high relevance and quality. |
|
|
| | Language | Rows | Tokens | |
| |----------|---------|----------| |
| | English | ~1.59M | ~3B | |
| | Chinese | ~593K | ~0.2B | |
|
|
This curated dataset provides a rich foundation for pre-training language models to understand anesthesiology-related concepts, terminology, and clinical context.
|
|
| ### Recommended Usage |
|
|
Both this dataset and AnesQA are compatible with a wide range of instruction-tuned language models and can be used directly with popular training frameworks.