---
task_categories:
- text-generation
language:
- zh
- en
tags:
- chemistry
- medical
size_categories:
- 1M<n<10M
viewer: true
configs:
- config_name: default
  data_files:
  - split: AnesCorpus_en
    path:
    - AnesCorpus_en-*.parquet
  - split: AnesCorpus_zh
    path:
    - AnesCorpus_zh-*.parquet
license: c-uda
---

The AnesBench Datasets Collection comprises three datasets: **AnesBench**, an anesthesiology reasoning benchmark; **AnesQA**, a supervised fine-tuning (SFT) dataset; and **AnesCorpus**, a continual pre-training dataset. This repository hosts AnesCorpus; for the other two, see [AnesBench](https://huggingface.co/datasets/MiliLab/AnesBench) and [AnesQA](https://huggingface.co/datasets/MiliLab/AnesQA).


### AnesCorpus

**AnesCorpus** is a large-scale, domain-specific corpus constructed for **Continual Pre-training (CPT)** in the field of anesthesiology. It is built from two primary sources:

- **Domain-specific filtering** from large-scale corpora such as [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), using keyword-based heuristics.
- **PubMed research articles** related to anesthesiology, processed through rigorous cleaning and formatting to ensure high relevance and quality.

| Language | Rows    | Tokens   |
|----------|---------|----------|
| English  | ~1.59M  | ~3B      |
| Chinese  | ~593K   | ~0.2B    |

This curated dataset provides a rich foundation for pretraining language models to understand anesthesiology-related concepts, terminology, and clinical context.

### Recommended Usage

This dataset is intended for continual pre-training, with AnesQA available for subsequent supervised fine-tuning. Both are compatible with a wide range of base and instruction-tuned language models and with popular training frameworks.