---
license: mit
---

**CTMatch Dataset**

This is a combined set of 2 labelled datasets of:

`topic (patient descriptions), doc (clinical trials documents - selected fields), and label ({0, 1, 2})` triples, in jsonl format.

(Somewhat of a duplication of some of the `ir_dataset` also available on HF.)

These have been processed using ctproc, and in this state can be used by various tokenizers for fine-tuning (see ctmatch for examples).
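As a concrete illustration, the jsonl triples can be parsed before handing each text pair to a tokenizer. This is a minimal sketch assuming each line is a JSON object with `topic`, `doc`, and `label` keys — the field names and the sample record are assumptions, not real rows from the files:

```python
import json

# Hypothetical record mirroring the (topic, doc, label) schema described above;
# the field names and contents are assumptions, not actual dataset rows.
sample_line = json.dumps({
    "topic": "Patient is a 45-year-old man with a history of liver disease.",
    "doc": "Inclusion Criteria: ... Exclusion Criteria: history of liver disease ...",
    "label": 0,
})

def read_triples(lines):
    """Yield (topic, doc, label) triples from jsonl lines."""
    for line in lines:
        record = json.loads(line)
        yield record["topic"], record["doc"], int(record["label"])

topic, doc, label = next(read_triples([sample_line]))
assert label in {0, 1, 2}  # relevance labels, per the description above
```

From here, `topic` and `doc` can be passed as a sentence pair to whichever tokenizer is used for fine-tuning.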

These 2 datasets contain no patient identifying information and are openly available.

Additionally, for the IR task, other feature representations of the documents have been created; each of these has exactly 374648 lines of corresponding data:

`doc_texts.csv`:
- texts extracted from processed documents using several fields, including eligibility min and max age and eligibility criteria, structured as in this example from NCT00000102:
"Inclusion Criteria: diagnosed with Congenital Adrenal Hyperplasia (CAH) normal ECG during baseline evaluation, Exclusion Criteria: history of liver disease, or elevated liver function tests history of cardiovascular disease"

`doc_categories.csv`:
- 1 x 15 vectors of somewhat arbitrarily chosen topic probabilities (softmax output) generated by the zero-shot classification model, `CTMatch.category_model(doc['condition'])`, lexically ordered as follows:
cancer,cardiac,endocrine,gastrointestinal,genetic,healthy,infection,neurological,other,pediatric,psychological,pulmonary,renal,reproductive
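A row of `doc_categories.csv` can be interpreted by zipping it against the category order above. A small sketch — the probability row below is made up for illustration and is not a real row from the file:

```python
import csv
from io import StringIO

# Category names in the lexical order listed above.
CATEGORIES = ("cancer,cardiac,endocrine,gastrointestinal,genetic,healthy,infection,"
              "neurological,other,pediatric,psychological,pulmonary,renal,reproductive").split(",")

# Hypothetical probability row (softmax output), not taken from the real file.
row = "0.02,0.01,0.70,0.02,0.01,0.05,0.02,0.03,0.05,0.02,0.02,0.02,0.02,0.01"
probs = [float(x) for x in next(csv.reader(StringIO(row)))]

assert abs(sum(probs) - 1.0) < 1e-6  # softmax probabilities sum to 1
top_category = CATEGORIES[max(range(len(probs)), key=probs.__getitem__)]
```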

`doc_embeddings.csv`:
- 1 x 384 vectors of embeddings taken from the last hidden state of `CTMatch.embedding_model.encode(doc_text)` using SentenceTransformers

`index2docid.csv`:
- a simple mapping of index to NCT IDs for filtering/reference throughout the IR program, corresponding to the order of the vector and text representations
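Since the files above align row-for-row, a minimal retrieval loop can score rows of `doc_embeddings.csv` against a query vector and map the best row back to its trial via `index2docid.csv`. This sketch uses toy 4-dimensional vectors in place of the real 384-dimensional ones, and the second NCT ID is hypothetical:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 4-dim stand-ins for the 384-dim rows of doc_embeddings.csv.
doc_vectors = [
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.1, 0.0],
]

# Hypothetical excerpt of index2docid.csv: row index -> NCT ID.
# (NCT00000102 appears above; the second ID is made up for illustration.)
index2docid = {0: "NCT00000102", 1: "NCT99999999"}

query = [1.0, 0.0, 0.0, 0.0]  # would come from the same embedding model as the docs
best_row = max(range(len(doc_vectors)), key=lambda i: cosine(query, doc_vectors[i]))
best_id = index2docid[best_row]
```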

**see repo for more information**:
https://github.com/semajyllek/ctmatch