dataset_info:
- config_name: all
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: length
dtype: int64
- name: terms
sequence: string
- name: terms_embedding
sequence: float64
- name: taxon_id
dtype: string
- name: stratum_id
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '10'
'3': '11'
'4': '12'
'5': '13'
'6': '14'
'7': '15'
'8': '16'
'9': '17'
'10': '18'
'11': '19'
'12': '2'
'13': '20'
'14': '21'
'15': '22'
'16': '23'
'17': '24'
'18': '25'
'19': '26'
'20': '27'
'21': '28'
'22': '29'
'23': '3'
'24': '30'
'25': '31'
'26': '32'
'27': '33'
'28': '34'
'29': '35'
'30': '36'
'31': '37'
'32': '38'
'33': '39'
'34': '4'
'35': '40'
'36': '41'
'37': '42'
'38': '43'
'39': '44'
'40': '45'
'41': '46'
'42': '47'
'43': '48'
'44': '49'
'45': '5'
'46': '50'
'47': '51'
'48': '52'
'49': '53'
'50': '54'
'51': '55'
'52': '56'
'53': '57'
'54': '58'
'55': '59'
'56': '6'
'57': '60'
'58': '61'
'59': '62'
'60': '63'
'61': '64'
'62': '65'
'63': '66'
'64': '67'
'65': '68'
'66': '69'
'67': '7'
'68': '70'
'69': '71'
'70': '72'
'71': '73'
'72': '74'
'73': '75'
'74': '76'
'75': '77'
'76': '78'
'77': '79'
'78': '8'
'79': '80'
'80': '81'
'81': '82'
'82': '83'
'83': '84'
'84': '85'
'85': '86'
'86': '87'
'87': '88'
'88': '89'
'89': '9'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
splits:
- name: train
num_bytes: 193626393.11677656
num_examples: 128021
- name: test
num_bytes: 21514715.88322343
num_examples: 14225
download_size: 154163126
dataset_size: 215141109
- config_name: bp
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: length
dtype: int64
- name: terms
sequence: string
- name: terms_embedding
sequence: float64
- name: taxon_id
dtype: string
- name: stratum_id
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '10'
'3': '11'
'4': '12'
'5': '13'
'6': '14'
'7': '15'
'8': '16'
'9': '17'
'10': '18'
'11': '19'
'12': '2'
'13': '20'
'14': '21'
'15': '22'
'16': '23'
'17': '24'
'18': '25'
'19': '26'
'20': '27'
'21': '28'
'22': '29'
'23': '3'
'24': '30'
'25': '31'
'26': '32'
'27': '33'
'28': '34'
'29': '35'
'30': '36'
'31': '37'
'32': '38'
'33': '39'
'34': '4'
'35': '40'
'36': '41'
'37': '42'
'38': '43'
'39': '44'
'40': '45'
'41': '46'
'42': '47'
'43': '48'
'44': '49'
'45': '5'
'46': '50'
'47': '51'
'48': '52'
'49': '53'
'50': '54'
'51': '55'
'52': '56'
'53': '57'
'54': '58'
'55': '59'
'56': '6'
'57': '60'
'58': '61'
'59': '62'
'60': '63'
'61': '64'
'62': '65'
'63': '66'
'64': '67'
'65': '68'
'66': '69'
'67': '7'
'68': '70'
'69': '71'
'70': '72'
'71': '73'
'72': '74'
'73': '75'
'74': '76'
'75': '77'
'76': '78'
'77': '79'
'78': '8'
'79': '80'
'80': '81'
'81': '82'
'82': '83'
'83': '84'
'84': '85'
'85': '86'
'86': '87'
'87': '88'
'88': '89'
'89': '9'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
splits:
- name: train
num_bytes: 129187926.9
num_examples: 82989
- name: test
num_bytes: 14354214.1
num_examples: 9221
download_size: 107191842
dataset_size: 143542141
- config_name: cc
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: length
dtype: int64
- name: terms
sequence: string
- name: terms_embedding
sequence: float64
- name: taxon_id
dtype: string
- name: stratum_id
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '10'
'3': '11'
'4': '12'
'5': '13'
'6': '14'
'7': '15'
'8': '16'
'9': '17'
'10': '18'
'11': '19'
'12': '2'
'13': '20'
'14': '21'
'15': '22'
'16': '23'
'17': '24'
'18': '25'
'19': '26'
'20': '27'
'21': '28'
'22': '29'
'23': '3'
'24': '30'
'25': '31'
'26': '32'
'27': '33'
'28': '34'
'29': '35'
'30': '36'
'31': '37'
'32': '38'
'33': '39'
'34': '4'
'35': '40'
'36': '41'
'37': '42'
'38': '43'
'39': '44'
'40': '45'
'41': '46'
'42': '47'
'43': '48'
'44': '49'
'45': '5'
'46': '50'
'47': '51'
'48': '52'
'49': '53'
'50': '54'
'51': '55'
'52': '56'
'53': '57'
'54': '58'
'55': '59'
'56': '6'
'57': '60'
'58': '61'
'59': '62'
'60': '63'
'61': '64'
'62': '65'
'63': '66'
'64': '67'
'65': '68'
'66': '69'
'67': '7'
'68': '70'
'69': '71'
'70': '72'
'71': '73'
'72': '74'
'73': '75'
'74': '76'
'75': '77'
'76': '78'
'77': '79'
'78': '8'
'79': '80'
'80': '81'
'81': '82'
'82': '83'
'83': '84'
'84': '85'
'85': '86'
'86': '87'
'87': '88'
'88': '89'
'89': '9'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
splits:
- name: train
num_bytes: 76889746.48893577
num_examples: 83620
- name: test
num_bytes: 8544122.511064233
num_examples: 9292
download_size: 64110332
dataset_size: 85433869
- config_name: mf
features:
- name: id
dtype: string
- name: sequence
dtype: string
- name: length
dtype: int64
- name: terms
sequence: string
- name: terms_embedding
sequence: float64
- name: taxon_id
dtype: string
- name: stratum_id
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '10'
'3': '11'
'4': '12'
'5': '13'
'6': '14'
'7': '15'
'8': '16'
'9': '17'
'10': '18'
'11': '19'
'12': '2'
'13': '20'
'14': '21'
'15': '22'
'16': '23'
'17': '24'
'18': '25'
'19': '26'
'20': '27'
'21': '28'
'22': '29'
'23': '3'
'24': '30'
'25': '31'
'26': '32'
'27': '33'
'28': '34'
'29': '35'
'30': '36'
'31': '37'
'32': '38'
'33': '39'
'34': '4'
'35': '40'
'36': '41'
'37': '42'
'38': '43'
'39': '44'
'40': '45'
'41': '46'
'42': '47'
'43': '48'
'44': '49'
'45': '5'
'46': '50'
'47': '51'
'48': '52'
'49': '53'
'50': '54'
'51': '55'
'52': '56'
'53': '57'
'54': '58'
'55': '59'
'56': '6'
'57': '60'
'58': '61'
'59': '62'
'60': '63'
'61': '64'
'62': '65'
'63': '66'
'64': '67'
'65': '68'
'66': '69'
'67': '7'
'68': '70'
'69': '71'
'70': '72'
'71': '73'
'72': '74'
'73': '75'
'74': '76'
'75': '77'
'76': '78'
'77': '79'
'78': '8'
'79': '80'
'80': '81'
'81': '82'
'82': '83'
'83': '84'
'84': '85'
'85': '86'
'86': '87'
'87': '88'
'88': '89'
'89': '9'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
splits:
- name: train
num_bytes: 69470217.72236988
num_examples: 70773
- name: test
num_bytes: 7719240.277630123
num_examples: 7864
download_size: 63311313
dataset_size: 77189458
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
- split: test
path: all/test-*
- config_name: bp
data_files:
- split: train
path: bp/train-*
- split: test
path: bp/test-*
- config_name: cc
data_files:
- split: train
path: cc/train-*
- split: test
path: cc/test-*
- config_name: mf
data_files:
- split: train
path: mf/train-*
- split: test
path: mf/test-*
license: apache-2.0
task_categories:
- text-classification
tags:
- proteomics
- protein
- gene-ontology
pretty_name: CAFA 5
size_categories:
- 100K<n<1M
# CAFA 5
CAFA 5 is a dataset of 142k protein sequences annotated with their Gene Ontology (GO) terms. In addition to the full set of annotations, the samples are divided into three subsets, each containing the GO terms associated with one of the three subgraphs of the Gene Ontology: Molecular Function, Biological Process, and Cellular Component. We also provide a stratified train/test split that uses term embeddings to distribute term labels evenly. The term embeddings are included in the dataset and can be used to stratify custom splits or to search for sequences with similar GO annotations.
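One way to search for sequences with similar annotations is a simple nearest-neighbor lookup over the term embeddings. The sketch below is an illustration, not part of the dataset's tooling; it assumes you have already collected the `terms_embedding` column into a NumPy matrix with one row per sequence.

```python
import numpy as np


def most_similar(query: np.ndarray, embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the indices of the k rows of `embeddings` most similar to
    `query` by cosine similarity, best match first."""

    # Normalize so that the dot product equals the cosine similarity.
    query = query / np.linalg.norm(query)
    embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

    scores = embeddings @ query  # Cosine similarity of each row to the query.

    return np.argsort(scores)[::-1][:k]
```

For example, `most_similar(embeddings[0], embeddings, k=10)` would return the indices of the ten sequences whose term embeddings are closest to those of the first sample.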
The code to export this dataset can be found here.
## Subsets

The CAFA 5 dataset is available on the HuggingFace Hub and can be loaded using the HuggingFace Datasets library.

The dataset is divided into subsets according to the GO terms that the sequences are annotated with.

- `all` - All annotations
- `mf` - Only molecular function terms
- `cc` - Only cellular component terms
- `bp` - Only biological process terms

To load the default CAFA 5 dataset with all function annotations, use the example below.

```python
from datasets import load_dataset

dataset = load_dataset("andrewdalpino/CAFA5")
```

To load a particular subset of the CAFA 5 dataset, pass its name as the second argument.

```python
dataset = load_dataset("andrewdalpino/CAFA5", "mf")
```
## Splits

We provide a 90/10 train/test split for your convenience. The splits were determined using a stratified approach that assigns a cluster number to each sequence based on its terms embedding. We've included these stratum IDs so that you can generate additional custom stratified splits, as shown in the example below.

```python
from datasets import load_dataset

dataset = load_dataset("andrewdalpino/CAFA5", split="train")

dataset = dataset.class_encode_column("stratum_id")

dataset = dataset.train_test_split(test_size=0.2, stratify_by_column="stratum_id")
```
## Filtering

You can also filter the samples of the dataset, as in the example below, which keeps only sequences of 2048 amino acids or fewer.

```python
dataset = dataset.filter(lambda sample: sample["length"] <= 2048)
```
## Tokenizing

Some tasks may require you to tokenize the amino acid sequences. In this example, we map over the samples and add a `tokens` column to store the tokenized sequences. Note that `tokenizer` is a placeholder for whatever tokenizer your model requires.

```python
def tokenize(sample: dict) -> dict:
    tokens = tokenizer.tokenize(sample["sequence"])

    sample["tokens"] = tokens

    return sample

dataset = dataset.map(tokenize, remove_columns="sequence")
```
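If you don't already have a tokenizer, a minimal character-level amino-acid tokenizer such as the hypothetical one sketched below can stand in for the `tokenizer` object used above. It is an illustration only, not an API provided by this dataset.

```python
# The 20 standard amino acids, one token each; anything else maps to an
# unknown-token ID of 0. This is a hypothetical example tokenizer.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"


class CharTokenizer:
    def __init__(self, alphabet: str = AMINO_ACIDS, unk_id: int = 0):
        # Reserve ID 0 for unknown characters; residues start at 1.
        self.vocab = {aa: i + 1 for i, aa in enumerate(alphabet)}
        self.unk_id = unk_id

    def tokenize(self, sequence: str) -> list[int]:
        return [self.vocab.get(aa, self.unk_id) for aa in sequence]


tokenizer = CharTokenizer()
```

A character-level vocabulary like this is often sufficient for protein models, since the sequence alphabet is small and fixed.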
## Original Dataset
Iddo Friedberg, Predrag Radivojac, Clara De Paolis, Damiano Piovesan, Parnal Joshi, Walter Reade, and Addison Howard. CAFA 5 Protein Function Prediction. https://kaggle.com/competitions/cafa-5-protein-function-prediction, 2023. Kaggle.