---
license: apache-2.0
pretty_name: Open Telco Leaderboard Scores
language:
- en
task_categories:
- text-classification
- question-answering
tags:
- telecommunications
- 5g
- llm-evaluation
- benchmark
- leaderboard
configs:
- config_name: default
data_files:
- split: train
path: leaderboard_scores.csv
---
# Open Telco Leaderboard Scores
Benchmark scores for **84 models** across 7 telecom-domain benchmarks, sourced from the MWC leaderboard.
This dataset publishes **scores only** (no energy metrics).
## Files
- `leaderboard_scores.csv`: Flat table for the dataset viewer.
- `leaderboard_scores.json`: Structured JSON with per-model benchmark scores and standard errors.
## Schema (`leaderboard_scores.csv`)
Core columns:
- `model` — Model name
- `provider` — Model provider (e.g. OpenAI, Google, Meta)
- `rank` — Rank by average score (1 = highest average)
- `average` — Mean of available benchmark scores
- `benchmarks_completed` — Number of benchmarks with scores
One column per benchmark; each cell contains `[score, stderr]` as a JSON-encoded pair, or is empty if the model was not evaluated on that benchmark:
- `teleqna` — Telecom Q&A (multiple choice)
- `teletables` — Table understanding
- `oranbench` — O-RAN knowledge
- `srsranbench` — srsRAN knowledge
- `telemath` — Telecom math problems
- `telelogs` — Telecom log analysis
- `three_gpp` — 3GPP specification knowledge
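Since each benchmark cell is a JSON-encoded pair, it can be decoded with the standard `json` module. A minimal sketch (the cell values below are illustrative, not real leaderboard scores):

```python
import json

def parse_cell(cell):
    """Parse a benchmark cell like '[0.82, 0.01]' into (score, stderr).

    Returns None for empty cells, i.e. benchmarks the model was not
    evaluated on.
    """
    if cell is None or cell == "":
        return None
    score, stderr = json.loads(cell)
    return score, stderr

# Illustrative values:
print(parse_cell("[0.82, 0.01]"))  # (0.82, 0.01)
print(parse_cell(""))              # None
```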
## Usage
```python
from datasets import load_dataset
ds = load_dataset("GSMA/leaderboard", split="train")
print(ds.column_names)
print(ds[0])
```
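The flat CSV can also be read directly with pandas. A minimal sketch with made-up rows that mirror the schema above (the model names, providers, and scores are illustrative, not real leaderboard values):

```python
import io
import pandas as pd

# Illustrative rows matching the leaderboard_scores.csv schema.
csv = io.StringIO(
    "model,provider,rank,average,benchmarks_completed,teleqna\n"
    'model-a,ProviderX,1,0.90,7,"[0.92, 0.01]"\n'
    'model-b,ProviderY,2,0.85,7,"[0.88, 0.02]"\n'
)
df = pd.read_csv(csv)

# Rank 1 corresponds to the highest average score.
best = df.sort_values("average", ascending=False).iloc[0]
print(best["model"])  # model-a
```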