Update README.md

README.md (changed)
```diff
@@ -124,7 +124,7 @@ dataset_info:
     num_bytes: 8957337.588045152
     num_examples: 5909
   download_size: 41857005
-  dataset_size: 89571860
+  dataset_size: 89571860
 - config_name: bp
   features:
   - name: id
```
```diff
@@ -249,7 +249,7 @@ dataset_info:
     num_bytes: 10848275.3
     num_examples: 10347
   download_size: 63383574
-  dataset_size: 108482753
+  dataset_size: 108482753
 - config_name: cc
   features:
   - name: id
```
```diff
@@ -374,7 +374,7 @@ dataset_info:
     num_bytes: 7186604.168797647
     num_examples: 9996
   download_size: 53676537
-  dataset_size: 71861728
+  dataset_size: 71861728
 - config_name: mf
   features:
   - name: id
```
```diff
@@ -499,7 +499,7 @@ dataset_info:
     num_bytes: 6088456.537219902
     num_examples: 8953
   download_size: 47994386
-  dataset_size: 60879125
+  dataset_size: 60879125
 configs:
 - config_name: all
   data_files:
```
```diff
@@ -525,4 +525,84 @@ configs:
     path: mf/train-*
   - split: test
     path: mf/test-*
```
```diff
+tags:
+- gene-ontology
 ---
```
# AmiGO Boost Dataset

AmiGO Boost is the [AmiGO](https://huggingface.co/datasets/andrewdalpino/AmiGO) dataset with the addition of phylogenetically inferred annotations to increase sample count and diversity.
## Processing Steps

- Filter for high-quality evidence codes.
- Remove duplicate GO term annotations.
- Expand annotations to include the entire GO subgraph.
- Embed the subgraphs and assign stratum IDs to the samples.
- Generate a stratified train/test split.
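The last two steps can be sketched as follows. This is a minimal, hypothetical illustration: it clusters term-subgraph embeddings with a tiny farthest-point-initialized k-means to produce stratum IDs. The actual embedding model and cluster count used for this dataset are not specified in this card.

```python
from math import dist

def assign_stratum_ids(embeddings: list[list[float]], k: int) -> list[int]:
    """Cluster embeddings with a tiny k-means and return one stratum ID per sample."""
    # Farthest-point initialization: start from the first sample, then
    # repeatedly take the point farthest from the centroids chosen so far.
    centroids = [embeddings[0]]
    while len(centroids) < k:
        farthest = max(embeddings, key=lambda e: min(dist(e, c) for c in centroids))
        centroids.append(farthest)

    for _ in range(10):
        # Assign every embedding to its nearest centroid.
        labels = [min(range(k), key=lambda j: dist(e, centroids[j])) for e in embeddings]
        # Recompute each centroid as the mean of its members (keep the old one if empty).
        for j in range(k):
            members = [e for e, label in zip(embeddings, labels) if label == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Toy embeddings: two well-separated blobs of 5 samples each.
embeddings = [[0.0] * 8 for _ in range(5)] + [[10.0] * 8 for _ in range(5)]
stratum_ids = assign_stratum_ids(embeddings, k=2)
```

On this toy input the two blobs end up in separate strata, so samples with similar annotation subgraphs share a stratum ID.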
## Subsets

The [AmiGO Boost](https://huggingface.co/datasets/andrewdalpino/AmiGO-Boost) dataset is available on HuggingFace Hub and can be loaded using the HuggingFace [Datasets](https://huggingface.co/docs/datasets) library.

The dataset is divided into four subsets according to the GO terms that the sequences are annotated with.

- `all` - At least 1 term from each aspect
- `mf` - Only molecular function terms
- `cc` - Only cellular component terms
- `bp` - Only biological process terms
To load the default AmiGO Boost dataset with all function annotations, you can use the example below.

```python
from datasets import load_dataset

dataset = load_dataset("andrewdalpino/AmiGO-Boost")
```
To load a single subset of the AmiGO Boost dataset, use the example below.

```python
from datasets import load_dataset

dataset = load_dataset("andrewdalpino/AmiGO-Boost", "mf")
```
## Splits

We provide a 90/10 `train` and `test` split for your convenience. The splits were generated using a stratified approach that assigns cluster numbers to sequences based on their term embeddings. We've included the stratum IDs so that you can generate additional custom stratified splits, as in the example below.

```python
from datasets import load_dataset

dataset = load_dataset("andrewdalpino/AmiGO-Boost", split="train")

dataset = dataset.class_encode_column("stratum_id")

dataset = dataset.train_test_split(test_size=0.2, stratify_by_column="stratum_id")
```
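For intuition, stratification simply keeps each stratum's share of samples roughly constant on both sides of the split. A minimal pure-Python sketch of that idea is below; it is illustrative only, since `train_test_split` with `stratify_by_column` handles this for you.

```python
import random
from collections import defaultdict

def stratified_split(strata: list[int], test_size: float = 0.2, seed: int = 42) -> tuple[list[int], list[int]]:
    """Split sample indices so each stratum contributes ~test_size of its members to the test side."""
    by_stratum = defaultdict(list)
    for index, stratum in enumerate(strata):
        by_stratum[stratum].append(index)

    rng = random.Random(seed)
    train, test = [], []
    for members in by_stratum.values():
        rng.shuffle(members)
        cut = int(len(members) * test_size)
        test.extend(members[:cut])
        train.extend(members[cut:])
    return train, test

# 10 samples in stratum 0 and 10 in stratum 1: each stratum contributes exactly 2 to test.
train_idx, test_idx = stratified_split([0] * 10 + [1] * 10)
```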
## Filtering

You can also filter the samples of the dataset, as in the example below.

```python
dataset = dataset.filter(lambda sample: len(sample["sequence"]) <= 2048)
```
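Filter predicates are ordinary Python functions, so multiple conditions compose in a single pass. A hedged example follows; the `go_terms` column name here is hypothetical, so check `dataset.column_names` for the actual schema before using it.

```python
def keep(sample: dict) -> bool:
    """Keep samples with a short sequence and at least one GO term annotation."""
    # NOTE: "go_terms" is a hypothetical column name used for illustration.
    return len(sample["sequence"]) <= 2048 and len(sample["go_terms"]) > 0

# Usage: dataset = dataset.filter(keep)
keep({"sequence": "MKVL", "go_terms": ["GO:0003674"]})  # True
```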
## Tokenizing

Some tasks may require you to tokenize the amino acid sequences. In this example, we loop through the samples and add a `tokens` column to store the tokenized sequences.

```python
# Assumes `tokenizer` is already defined, e.g. a pre-trained protein tokenizer
# with a `tokenize` method that maps a sequence to a list of tokens.
def tokenize(sample: dict) -> dict:
    tokens = tokenizer.tokenize(sample["sequence"])

    sample["tokens"] = tokens

    return sample

dataset = dataset.map(tokenize, remove_columns="sequence")
```
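If you don't have a tokenizer on hand, a minimal character-level tokenizer over the 20 standard amino acids can stand in for quick experiments. The vocabulary below is a hypothetical one chosen for illustration; pre-trained protein models ship their own tokenizers.

```python
# Hypothetical single-letter vocabulary over the 20 standard amino acids.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
VOCAB = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
UNK_ID = len(VOCAB)  # fallback id for non-standard residues such as X or U

def tokenize_sequence(sequence: str) -> list[int]:
    """Map each residue to an integer id, falling back to UNK_ID."""
    return [VOCAB.get(residue, UNK_ID) for residue in sequence.upper()]

tokenize_sequence("ACDX")  # [0, 1, 2, 20]
```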
## References

- The UniProt Consortium, "UniProt: the Universal Protein Knowledgebase in 2025," Nucleic Acids Research, 2025, 53, D609–D617.