# SKEMPI v2
SKEMPI v2 protein-protein binding affinity mutation dataset, normalized to newline-delimited JSON with row-level provenance.
Processed and uploaded by the MegaData post-download pipeline (internal repo). Original source: https://life.bsc.es/pid/skempi2.
## Statistics

| Metric | Value |
|---|---|
| Table files | 1 |
| Total rows | 7,085 |
| Total bytes | 5.86 MiB (6,148,969 bytes) |
## Tables

| Table | Rows | Bytes |
|---|---|---|
| labeled_skempi2_skempi_v2.csv.jsonl | 7,085 | 5.86 MiB |
## Layout

```
.
├── _MANIFEST.json               # aggregate manifest (per-table counts)
└── tables/<source_slug>.jsonl   # normalized rows (one JSON object per line)
```
Each line in a `tables/*.jsonl` file is a JSON object with at least the `dataset_id`, `row` (the raw upstream row), `row_index`, and `source_file` fields, so every row carries its upstream provenance, as illustrated below.
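A hypothetical example of one parsed line (field values are illustrative only; only the field names listed above are guaranteed):

```python
# Hypothetical shape of a single JSONL line after json.loads().
# Values are illustrative, not taken from the actual data.
example_row = {
    "dataset_id": "skempi2",          # identifier of the source dataset
    "row": {"...": "..."},            # raw upstream SKEMPI v2 record (column -> value)
    "row_index": 0,                   # position of the row within the source file
    "source_file": "skempi_v2.csv",   # upstream file the row was extracted from
}
```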
## Loading

Download the full dataset with the Hugging Face CLI:

```bash
hf download LiteFold/SKEMPI2 --repo-type dataset --local-dir ./skempi2
```
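Once downloaded, the aggregate manifest can be inspected directly. A minimal sketch, assuming only that `_MANIFEST.json` is valid JSON (its exact keys are not documented here):

```python
import json
from pathlib import Path

# Load and preview the aggregate manifest produced by the pipeline.
manifest = json.loads(Path("./skempi2/_MANIFEST.json").read_text())
print(json.dumps(manifest, indent=2)[:500])  # preview the first 500 characters
```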
Programmatic streaming:

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download

local = snapshot_download(repo_id="LiteFold/SKEMPI2", repo_type="dataset")
for jsonl in sorted(Path(local, "tables").glob("*.jsonl")):
    with jsonl.open() as f:
        for line in f:
            row = json.loads(line)
            ...  # row["row"] is the upstream record
```
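If a tabular view is more convenient, the upstream records can be collected into a pandas DataFrame. This is a sketch rather than part of the pipeline; it assumes pandas is installed and that each `row["row"]` is a flat mapping of column names to values:

```python
import json
from pathlib import Path

import pandas as pd
from huggingface_hub import snapshot_download

local = snapshot_download(repo_id="LiteFold/SKEMPI2", repo_type="dataset")
records = []
for jsonl in sorted(Path(local, "tables").glob("*.jsonl")):
    with jsonl.open() as f:
        for line in f:
            parsed = json.loads(line)
            # Keep the upstream record together with its provenance fields.
            records.append({
                **parsed["row"],
                "_source_file": parsed["source_file"],
                "_row_index": parsed["row_index"],
            })

df = pd.DataFrame.from_records(records)
print(df.shape)  # about 7,085 rows: one column per upstream field plus provenance
```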
## License

See the upstream SKEMPI v2 license.

## Citation

Jankauskaite J, et al. SKEMPI 2.0: an updated benchmark of changes in protein-protein binding energy, kinetics and thermodynamics upon mutation. Bioinformatics, 35(3):462-469, 2019.

## Provenance

Built from the local manifest entry `skempi2` of `manifests/atlas_download_plan.json`.

Pipeline source: `megadata-post normalize --dataset skempi2 --tables-only`.