Welcome! This is the data repository for scBaseTraj. The dataset contains more than 48 million trajectories spanning 71 tissues, with each trajectory covering an average of 9.3 consecutive cell states. Approximately three-quarters of the trajectories are confined to a single CytoTRACE2 stage, corresponding to relatively stable cell states; the remainder span multiple CytoTRACE2 stages and capture dynamic state transitions.
We used this dataset to train CellTempo, a temporal generative model that forecasts future cellular dynamics. CellTempo represents cells as learned semantic codes and trains an autoregressive decoder to predict ordered code sequences, allowing it to forecast long-range cell-state transition trajectories and landscapes from snapshot data.
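To illustrate the idea of forecasting ordered code sequences, here is a minimal, hypothetical sketch of an autoregressive rollout over discrete cell-state codes. This is not CellTempo's actual implementation: the transition table stands in for the learned decoder's next-code distribution, and all names and values are illustrative.

```python
import numpy as np

def forecast_codes(prefix, transition, horizon):
    """Greedy autoregressive rollout: extend a sequence of discrete
    cell-state codes by repeatedly picking the most likely next code.
    `transition[c]` is the next-code probability vector after code c
    (a toy stand-in for a learned decoder's predicted distribution)."""
    seq = list(prefix)
    for _ in range(horizon):
        probs = transition[seq[-1]]
        seq.append(int(np.argmax(probs)))
    return seq

# Toy 3-code model: 0 -> 1 -> 2 -> 2, with code 2 as an absorbing
# "terminal" state (e.g. a fully differentiated cell state).
T = np.array([
    [0.1, 0.8, 0.1],
    [0.0, 0.2, 0.8],
    [0.0, 0.0, 1.0],
])
print(forecast_codes([0], T, horizon=4))  # [0, 1, 2, 2, 2]
```

In the real model the next-code distribution would come from a trained decoder conditioned on the whole prefix, and sampling (rather than greedy argmax) could be used to explore alternative trajectories.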