# 🧬 Kluyveromyces marxianus - BioBERT-Optimized Chunks

PhD Research Dataset: Functional Genomics of Robust Linear Yeasts

## Overview
Semantically optimized 512-token chunks for BioBERT fine-tuning, processed with BioMistral-7B.
## Dataset Info

- **Processing Date:** 2025-11-01
- **Target Model:** BioBERT-Large v1.1
- **Chunk Size:** 512 tokens (optimal)
- **Source:** Milad96/Kluyveromyces-marxianus
## Features

- Deep semantic coherence analysis
- Automatic genomic entity extraction
- Quality scoring and filtering (see the sketch after this list)
- BioBERT-ready format
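
As a quick illustration of the quality filtering, here is a minimal sketch that keeps only high-scoring chunks. It assumes the dataset loads from the Hub with a `train` split and that `quality_score` is a float in [0, 1]; the 0.8 cutoff is an arbitrary illustrative threshold, not a value prescribed by this dataset.

```python
from datasets import load_dataset

# Load the chunks from the Hub (the "train" split name is an assumption)
ds = load_dataset("Milad96/Kluyveromyces-marxianus-chunks", split="train")

# Keep only chunks above an illustrative quality threshold
high_quality = ds.filter(lambda chunk: chunk["quality_score"] >= 0.8)

print(f"Kept {len(high_quality)} of {len(ds)} chunks")
```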
## Usage

```python
from datasets import load_dataset
import json

# Load the full dataset from the Hub
dataset = load_dataset("Milad96/Kluyveromyces-marxianus-chunks")

# Or load the high-quality chunks from a local JSON export
with open("chunks_high_quality.json", "r") as f:
    chunks = json.load(f)

# Each chunk has:
# - text: optimized content
# - token_count: ~512
# - semantic_coherence: 0-1
# - quality_score: 0-1
# - entities: {genes, proteins, pathways, organisms, conditions}
```
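
Because the chunks target BioBERT-Large v1.1, a quick sanity check that each chunk fits the model's 512-token window can look like the sketch below. The checkpoint name `dmis-lab/biobert-large-cased-v1.1` is an assumption about which BioBERT-Large release is meant, and `chunks` is the list loaded from the JSON export above.

```python
from transformers import AutoTokenizer

# Checkpoint name is an assumption; substitute your BioBERT-Large v1.1 weights
tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-large-cased-v1.1")

# Report the tokenized length of the first few chunks
for chunk in chunks[:5]:
    ids = tokenizer(chunk["text"])["input_ids"]
    print(chunk["chunk_id"], len(ids), chunk["quality_score"])
```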
## Citation

```bibtex
@dataset{kmx_chunks_2024,
  author    = {Milad96},
  title     = {Kluyveromyces marxianus BioBERT Chunks},
  year      = {2024},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/Milad96/Kluyveromyces-marxianus-chunks}
}
```
## License

Apache 2.0