Dataset Viewer
The dataset viewer is not available for this split.
Cannot extract the features (columns) for the split 'train' of the config 'default' of the dataset.
Error code:   FeaturesError
Exception:    ArrowInvalid
Message:      Schema at index 1 was different: 
timestamp_utc: string
repo_id: string
repo_revision: null
model_id: string
load_in_4bit: bool
traits: list<item: string>
epochs: int64
limit_train_examples_per_epoch: int64
limit_val_examples: int64
lm_batch_size: int64
gradient_accumulation_steps: int64
learning_rate: double
warmup_ratio: double
seed: int64
final_eval: struct<phase: string, runtime_s: double, global_step: int64, eval_loss: double, eval_perplexity: double, eval_examples: int64, eval_batches: int64, mlp_global_accuracy: double, mlp_global_f1: double, mlp_global_precision: double, mlp_global_recall: double, mlp_per_trait: struct<backgroundnoise: struct<accuracy: double, precision: double, recall: double, f1: double, tp: double, fp: double, tn: double, fn: double>, badmic: struct<accuracy: double, precision: double, recall: double, f1: double, tp: double, fp: double, tn: double, fn: double>, cutoff: struct<accuracy: double, precision: double, recall: double, f1: double, tp: double, fp: double, tn: double, fn: double>, fadeout: struct<accuracy: double, precision: double, recall: double, f1: double, tp: double, fp: double, tn: double, fn: double>, glitch: struct<accuracy: double, precision: double, recall: double, f1: double, tp: double, fp: double, tn: double, fn: double>, robot: struct<accuracy: double, precision: double, recall: double, f1: double, tp: double, fp: double, tn: double, fn: double>, silence: struct<accuracy: double, precision: double, recall: double, f1: double, tp: double, fp: double, tn: double, fn: double>, unclear: struct<accuracy: double, precision: double, recall: double, f1: double, tp: double, fp: double, tn: double, fn: double>, unnatural: struct<accuracy: double, precision: double, recall: double, f1: double, tp: double, fp: double, tn: double, fn: double>, verygood: struct<accuracy: double, precision: double, recall: double, f1: double, tp: double, fp: double, tn: double, fn: double>>>
docker_git_commit: string
max_audio_seconds: double
mlp: struct<emb_dim: int64, num_traits: int64, hidden_dim: int64, dropout: double, lr: double, batch_size: int64, epochs: int64, threshold: double, dtype: string>
vs
alora_invocation_tokens: null
alpha_pattern: struct<>
arrow_config: null
auto_mapping: null
base_model_name_or_path: string
bias: string
corda_config: null
ensure_weight_tying: bool
eva_config: null
exclude_modules: null
fan_in_fan_out: bool
inference_mode: bool
init_lora_weights: bool
layer_replication: null
layers_pattern: null
layers_to_transform: null
loftq_config: struct<>
lora_alpha: int64
lora_bias: bool
lora_dropout: double
megatron_config: null
megatron_core: string
modules_to_save: null
peft_type: string
peft_version: string
qalora_group_size: int64
r: int64
rank_pattern: struct<>
revision: null
target_modules: list<item: string>
target_parameters: null
task_type: string
trainable_token_indices: null
use_dora: bool
use_qalora: bool
use_rslora: bool
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 243, in compute_first_rows_from_streaming_response
                  iterable_dataset = iterable_dataset._resolve_features()
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 3608, in _resolve_features
                  features = _infer_features_from_batch(self.with_format(None)._head())
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 2368, in _head
                  return next(iter(self.iter(batch_size=n)))
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 2573, in iter
                  for key, example in iterator:
                                      ^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 2060, in __iter__
                  for key, pa_table in self._iter_arrow():
                                       ^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 2082, in _iter_arrow
                  yield from self.ex_iterable._iter_arrow()
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 604, in _iter_arrow
                  yield new_key, pa.Table.from_batches(chunks_buffer)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/table.pxi", line 5039, in pyarrow.lib.Table.from_batches
                File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Schema at index 1 was different: 
              (schema diff identical to the one listed above)

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
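One way to configure the viewer is a `configs` block in the dataset card's YAML front matter, so that each split only reads files sharing one schema. A sketch, assuming the run logs and the adapter config can be separated by glob (the paths below are hypothetical, since the repo's file layout isn't shown here):

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    # hypothetical glob: match only the training-run logs,
    # leaving adapter_config.json out of the split
    path: "*.run.json"
```

Alternatively, moving the adapter config out of the data directory (or renaming it so no default glob picks it up) avoids the mixed-schema split entirely.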
