EcoCompute: LLM Energy Efficiency Measurements
93+ empirical measurements of energy consumption during LLM inference, collected at 10 Hz via NVML across 3 NVIDIA GPU architectures.
Key Findings
- INT8 Energy Paradox: Default bitsandbytes INT8 (`load_in_8bit=True`) increases energy by 17–147% vs FP16
- NF4 Small-Model Penalty: 4-bit quantization wastes 11–29% more energy on models ≤3B parameters
- Batch Size Impact: BS=1 → BS=64 reduces energy per request by 95.7%
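The batching result above is a matter of amortization: energy per request is board power times wall-clock time, divided by the number of requests served in the batch. A minimal sketch of that arithmetic, using illustrative power and latency numbers that are assumptions, not values taken from this dataset:

```python
def energy_per_request_j(mean_power_w: float, latency_s: float, batch_size: int) -> float:
    """Energy per request (J) = board power (W) * wall-clock time (s) / requests served."""
    return mean_power_w * latency_s / batch_size

# A batch of 64 runs longer and draws somewhat more power than a batch of 1,
# but the energy cost is shared across 64 requests. (Numbers are illustrative.)
single = energy_per_request_j(mean_power_w=250.0, latency_s=8.0, batch_size=1)    # 2000.0 J
batched = energy_per_request_j(mean_power_w=300.0, latency_s=30.0, batch_size=64)  # ~140.6 J
reduction = 1.0 - batched / single  # ~93% less energy per request
```

With these made-up numbers the reduction is about 93%; the dataset's measured A800 sweep reports 95.7%.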
Data Coverage
| GPU | Architecture | Measurements | Config Types |
|---|---|---|---|
| RTX 5090 | Blackwell | 8 | FP16, NF4 |
| RTX 4090D | Ada Lovelace | 15 | FP16, NF4, INT8, Pure INT8 |
| A800 | Ampere | 70 | Pure INT8 × 7 batch sizes |
| Total | 3 architectures | 93+ | 5 config types |
Models Tested
- Qwen/Qwen2-1.5B (1.5B params)
- microsoft/Phi-3-mini-4k-instruct (3.8B params)
- 01-ai/Yi-1.5-6B (6B params)
- mistralai/Mistral-7B-Instruct-v0.2 (7B params)
- Qwen/Qwen2.5-7B-Instruct (7B params)
Dataset Structure
metadata/
├── README.md — Dataset documentation
├── COMPLETE_DATASET_MEMO.md — Full 93+ measurement documentation
├── rtx5090_metadata.json — RTX 5090 Blackwell results
├── rtx4090d_metadata.json — RTX 4090D Ada Lovelace results
├── pure_int8_metadata.json — Pure INT8 ablation study
├── a800_metadata.json — A800 Ampere results
└── batch_size_experiment/ — A800 batch size sweep (BS 1–64)
├── *_raw_*.csv — Raw per-run measurements (70 rows)
├── *_summary_*.csv — Aggregated statistics
└── *_results_*.png — Visualization
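A minimal sketch of walking this layout to collect the raw per-run CSVs from the batch-size sweep. The `metadata` root path mirrors the tree above; CSV column names are not documented here, so none are assumed:

```python
from pathlib import Path

def find_raw_csvs(root: str = "metadata") -> list:
    """Collect the raw per-run measurement CSVs from the A800 batch-size sweep."""
    sweep_dir = Path(root, "batch_size_experiment")
    # Matches the *_raw_*.csv naming convention shown in the tree above.
    return sorted(sweep_dir.glob("*_raw_*.csv"))

# Without a local copy of the dataset this simply returns an empty list.
raw_files = find_raw_csvs()
```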
Measurement Methodology
- Tool: NVIDIA Management Library (NVML) via pynvml
- Frequency: 10 Hz (100ms polling)
- Metric: GPU board power (watts)
- Protocol: 3 warmup + 10 measured runs + 30s thermal stabilization
- Quality: CV < 2% (throughput), CV < 5% (power)
- Generation: Greedy decoding, max_new_tokens=256, fixed prompt
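The sampling protocol above can be sketched as follows. The pynvml calls (`nvmlInit`, `nvmlDeviceGetHandleByIndex`, `nvmlDeviceGetPowerUsage`) require an NVIDIA GPU and driver, so the polling loop is kept separate from the energy-integration helper, which runs anywhere. This is a sketch under those assumptions, not the dataset's actual collection script:

```python
import time

SAMPLE_PERIOD_S = 0.1  # 10 Hz polling, as in the methodology above

def integrate_energy_j(samples_w, period_s=SAMPLE_PERIOD_S):
    """Approximate energy (J) as the sum of power samples (W) times the sample period (s)."""
    return sum(samples_w) * period_s

def sample_gpu_power(duration_s, gpu_index=0):
    """Poll GPU board power at 10 Hz via NVML; returns a list of readings in watts."""
    import pynvml  # requires an NVIDIA GPU and driver
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
    samples = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        # nvmlDeviceGetPowerUsage reports milliwatts; convert to watts.
        samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
        time.sleep(SAMPLE_PERIOD_S)
    pynvml.nvmlShutdown()
    return samples

# A 2-second trace at a steady 250 W (20 samples at 10 Hz) integrates to 500 J:
trace = [250.0] * 20
energy = integrate_energy_j(trace)  # 500.0
```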
Software Environment
| GPU | PyTorch | CUDA | Driver | transformers | bitsandbytes |
|---|---|---|---|---|---|
| RTX 5090 | 2.6.0 | 12.6 | 570.86.15 | 4.48.0 | 0.45.3 |
| RTX 4090D | 2.4.1 | 12.1 | 560.35.03 | 4.47.0 | 0.45.0 |
| A800 | 2.4.1 | 12.1 | 535.183.01 | 4.47.0 | 0.45.0 |
Related Resources
- Interactive Dashboard: hongping-zh.github.io/ecocompute-dynamic-eval
- GitHub Repository: github.com/hongping-zh/ecocompute-dynamic-eval
- Zenodo Archive: zenodo.org/records/18900289
- HuggingFace Optimum PR: huggingface/optimum#2410
Citation
@dataset{zhang2026ecocompute,
title={EcoCompute: LLM Energy Efficiency Measurements},
author={Zhang, Hongping},
year={2026},
publisher={Hugging Face},
url={https://huggingface.co/datasets/hongpingzhang/ecocompute-energy-efficiency}
}
License
MIT