# Chimere Expert Predictor Models
MLP-based expert prediction models trained to prefetch MoE experts before they are needed. This is an informative negative result: despite reaching 86.65% hit@8 prediction accuracy, expert prefetch yields zero speedup on consumer hardware, because a CPU GEMV over a small expert (~430 KB) takes under 50 μs, less than 45% of the compute budget.
## Models

| Variant | Size | Description |
|---|---|---|
| `expert_predictor/` | 35 MB | Base: same-layer prediction |
| `expert_predictor_ncmoe8/` | 37 MB | With ncmoe=8 offloading |
| `expert_predictor_crosslayer_ncmoe8/` | 19 MB | Cross-layer (token N-1 → N) |
| `expert_predictor_crosslayer_ncmoe8_focal/` | 32 MB | Cross-layer + focal loss |
| `expert_placement.json` | 180 KB | Expert assignment config |
## Key finding
Expert prefetch is counter-productive on current consumer hardware (RTX 5060 Ti, PCIe 4.0 x8) where:
- MoE experts are small (~430 KB each for IQ3_S)
- CPU GEMV takes ~50μs per expert
- The ggml callback blocks thread 0, adding overhead
- Cross-token prediction fails because MoE expert selection is unstable between consecutive tokens
This matters for anyone implementing MoE inference optimization: don't prefetch unless your experts are >5 MB and your PCIe bandwidth is the bottleneck.
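The cost argument above can be sanity-checked with a back-of-envelope sketch. The expert size (~430 KB) and CPU GEMV time (~50 μs) come from the findings above; the ~16 GB/s figure is the theoretical peak of a PCIe 4.0 x8 link (an assumption — effective bandwidth is typically lower, which only strengthens the conclusion):

```python
# Back-of-envelope: does prefetching an expert over PCIe beat just
# running it on the CPU? Numbers from the card; PCIe peak is assumed.
EXPERT_BYTES = 430 * 1024   # ~430 KB per expert at IQ3_S
PCIE_BYTES_PER_S = 16e9     # PCIe 4.0 x8 theoretical peak (assumption)
CPU_GEMV_US = 50            # ~50 us to compute the expert on the CPU

transfer_us = EXPERT_BYTES / PCIE_BYTES_PER_S * 1e6
print(f"PCIe transfer per expert: {transfer_us:.1f} us")
print(f"CPU GEMV per expert:      {CPU_GEMV_US} us")

# Even at peak bandwidth the copy alone costs a comparable number of
# microseconds to simply computing on the CPU, before adding the
# prediction pass, kernel launches, and the callback that blocks thread 0.
```

With larger experts (multi-MB) the transfer term dominates and the trade flips, which is where the ">5 MB" rule of thumb above comes from.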
## Format

NumPy (`.npy`) and binary (`.bin`) weight files per layer, with `summary.json` metadata.
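A minimal sketch of consuming the placement config. The key names (`model`, `num_layers`, per-layer `placement` / `gpu_resident` / `cpu_fallback`, and a `summary` block) are assumptions about the shape of `expert_placement.json`; verify them against your copy of the file. The inline JSON is a toy stand-in, not real data:

```python
import json

# Toy config in the assumed shape of expert_placement.json;
# the real file covers all 40 layers of the model.
cfg = json.loads("""
{
  "model": "example-moe",
  "num_layers": 2,
  "layers": {
    "0": {"placement": "gpu", "gpu_resident": [1, 5, 9], "cpu_fallback": []},
    "1": {"placement": "cpu", "gpu_resident": [], "cpu_fallback": [0, 2]}
  },
  "summary": {"estimated_vram_overhead_mb": 12.5}
}
""")

print(cfg["model"], "layers:", cfg["num_layers"])
print("VRAM overhead (MB):", cfg["summary"]["estimated_vram_overhead_mb"])

# Per-layer placement: which experts are pinned GPU-resident.
for lid in sorted(cfg["layers"], key=int):
    layer = cfg["layers"][lid]
    print(lid, layer["placement"], "gpu-pinned:", len(layer["gpu_resident"]))
```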
## Related

- Paper: "Expert Prefetch on Consumer MoE: When Prediction Accuracy Doesn't Matter" (in preparation)
- Code: `chimere-dflash/chimere/expert_prefetch.py`
## Author

Kevin Remondiere, independent ML researcher