
Dataset for protein-protein interaction prediction across bacteria (DNA)

A dataset of 261 bacterial genomes across 215 genera with protein-protein interaction (PPI) scores for each genome.

The genomes' PPI scores have been extracted from STRING DB and their associated DNA from GenBank (https://www.ncbi.nlm.nih.gov/genbank/). Each row contains the set of DNA sequences from a genome and the associated PPI scores, taken from the STRING DB combined score.

The interaction between two proteins is represented by a triple: [prot1_index, prot2_index, score]. To convert the score to a probability, divide it by 1000 (e.g. a score of 721 corresponds to 721/1000 = 0.721). A protein's index refers to its position in the protein_sequences column of the row. See the example below in Usage.
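As a minimal illustration of the triple format, decoding one triple looks like this (the values below are made up for the example):

```python
# Decode a single PPI triple: [prot1_index, prot2_index, score].
# The triple values here are invented for illustration.
triple = [12, 45, 721]
prot1_index, prot2_index, score = triple
probability = score / 1000  # 721 -> 0.721
```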

Usage

We recommend loading the dataset in streaming mode to prevent memory errors.

from datasets import load_dataset
ds = load_dataset("macwiatrak/bacbench-ppi-stringdb-dna-small", split="validation", streaming=True)
item = next(iter(ds))

# select a contig_idx and gene_idx
contig_idx = 0
gene_idx = 0

# fetch the DNA sequence (string) of the contig at contig_idx
dna_seq = item["dna_sequence"][contig_idx]

# get the gene sequence; start/end indices are 1-based and inclusive, so we adjust
start_idx = item['start'][contig_idx][gene_idx] - 1
end_idx = item['end'][contig_idx][gene_idx]
strand_idx = item['strand'][contig_idx][gene_idx]  # we can also get strand which can be 1 (positive) or -1 (negative)
gene_seq = dna_seq[start_idx:end_idx]

# fetch PPI triples labels (i.e. [prot1_index, prot2_index, score])
ppi_triples = item["labels"][contig_idx]
# get protein seqs and label for one pair of proteins;
# the indices in each triple refer to this list of protein sequences
prot_seqs = item["protein_sequences"][contig_idx]
prot1 = prot_seqs[ppi_triples[0][0]]
prot2 = prot_seqs[ppi_triples[0][1]]
score = ppi_triples[0][2] / 1000
# we recommend binarizing the labels with a threshold of 0.6
binary_ppi_triples = [
  (prot1_index, prot2_index, int((score / 1000) >= 0.6)) for prot1_index, prot2_index, score in ppi_triples
]
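Note that the snippet above extracts genes in forward-strand orientation only. For genes on the negative strand (strand == -1), you typically also want the reverse complement. A minimal sketch of this (the helper name and the uppercase A/C/G/T assumption are ours, not part of the dataset):

```python
# Reverse-complement helper for negative-strand genes (sketch; assumes
# uppercase A/C/G/T sequences as found in GenBank records).
_COMPLEMENT = str.maketrans("ACGT", "TGCA")

def oriented_gene_seq(dna_seq: str, start: int, end: int, strand: int) -> str:
    """Extract a gene from 1-based inclusive coordinates,
    reverse-complementing it when it lies on the negative strand."""
    seq = dna_seq[start - 1:end]
    return seq if strand == 1 else seq.translate(_COMPLEMENT)[::-1]
```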

Split

We provide a phylogeny-aware train, validation and test split by genus with proportions of 60 / 10 / 20 (%) respectively as part of the dataset. This means that the genera in train, validation and test do not overlap.

See the GitHub repository for details on how to embed the dataset with DNA and protein language models, as well as code to predict protein-protein interactions.
