---
license: apache-2.0
pretty_name: Open Telco Leaderboard Scores
language:
- en
task_categories:
- text-classification
- question-answering
tags:
- telecommunications
- 5g
- llm-evaluation
- benchmark
- leaderboard
configs:
- config_name: default
  data_files:
  - split: train
    path: leaderboard_scores.csv
---
| |
# Open Telco Leaderboard Scores
|
|
Benchmark scores for **83 models** across 7 telecom-domain benchmarks, sourced from the MWC leaderboard.
|
|
This dataset publishes **scores only** (no energy metrics).
|
|
## Files

- `leaderboard_scores.csv`: Flat table for the dataset viewer.
- `leaderboard_scores.json`: Structured JSON with per-model benchmark scores and standard errors.
|
|
## Schema (`leaderboard_scores.csv`)

Core columns:

- `model` — Model name
- `provider` — Model provider (e.g. OpenAI, Google, Meta)
- `rank` — Rank by average score (descending)
- `average` — Mean of available benchmark scores
- `benchmarks_completed` — Number of benchmarks with scores
|
|
There is also one column per benchmark. Each cell contains a two-element JSON array `[score, stderr]`, or is empty if the model was not evaluated on that benchmark:
|
|
- `teleqna` — Telecom Q&A (multiple choice)
- `teletables` — Table understanding
- `oranbench` — O-RAN knowledge
- `srsranbench` — srsRAN knowledge
- `telemath` — Telecom math problems
- `telelogs` — Telecom log analysis
- `three_gpp` — 3GPP specification knowledge
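Because the CSV stores each benchmark cell as a JSON-encoded string, cells need decoding before use. A minimal sketch of that decoding, using a hypothetical sample row (the real rows follow the layout described above):

```python
import json

# Hypothetical row: cells hold a JSON array [score, stderr],
# or an empty string if the model was not evaluated.
row = {"model": "example-model", "teleqna": "[0.82, 0.01]", "telelogs": ""}

def parse_cell(cell):
    """Decode a benchmark cell into (score, stderr), or None if empty."""
    if not cell:
        return None
    score, stderr = json.loads(cell)
    return score, stderr

print(parse_cell(row["teleqna"]))   # (0.82, 0.01)
print(parse_cell(row["telelogs"]))  # None
```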
|
|
## Usage
|
|
```python
from datasets import load_dataset

ds = load_dataset("GSMA/leaderboard", split="train")
print(ds.column_names)
print(ds[0])
```
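The `average` column is the mean over benchmarks that have scores. Recomputing it from the per-benchmark cells might look like this; the benchmark column names come from the schema above, while the sample row and its values are hypothetical:

```python
import json

BENCHMARKS = ["teleqna", "teletables", "oranbench", "srsranbench",
              "telemath", "telelogs", "three_gpp"]

# Hypothetical row: two benchmarks evaluated, the rest empty/missing.
row = {"teleqna": "[0.75, 0.01]", "telemath": "[0.25, 0.02]"}

# Take the score (first element) from every non-empty cell.
scores = [json.loads(row[b])[0] for b in BENCHMARKS if row.get(b)]
average = sum(scores) / len(scores) if scores else None
print(len(scores), average)  # 2 0.5
```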
|
|