Update README.md with score-only leaderboard data

README.md CHANGED
@@ -1,107 +1,66 @@
Before:

---
license: apache-2.0
task_categories:
- text-classification
- question-answering
language:
- en
tags:
- telecommunications
- benchmark
- leaderboard
pretty_name: Open Telco Leaderboard
size_categories:
- n<1K
dataset_info:
  features:
  - name: model
    dtype: large_string
  - name: teleqna
    list: float64
  - name: telelogs
    list: float64
  - name: telemath
    list: float64
  - name: 3gpp_tsg
    list: float64
  - name: date
    dtype: large_string
  - name: tci_legacy
    list: float64
  - name: teletables
    list: float64
  - name: avg_score
    list: float64
  splits:
  - name: train
    num_bytes: 2328
    num_examples: 11
  download_size: 6772
  dataset_size: 2328
configs:
- config_name: default
  data_files:
  - split: train
    path:
---
# Open Telco Leaderboard

| Field | Type | Description |
|--------|------|-------------|
| `model` | string | Model name with provider |
| `teleqna` | list | [score, stderr, n_samples] |
| `telelogs` | list | [score, stderr, n_samples] |
| `telemath` | list | [score, stderr, n_samples] |
| `3gpp_tsg` | list | [score, stderr, n_samples] |
| `date` | string | Evaluation date |

## Benchmarks

| Benchmark | Description |
|-----------|-------------|
| **TeleQnA** | Q&A pairs testing telecom knowledge |
| **TeleMath** | Mathematical reasoning in telecommunications |
| **TeleLogs** | Root cause analysis for 5G network issues |
| **3GPP-TSG** | Classification of 3GPP technical documents |

## Usage

```python
from datasets import load_dataset

# Load the leaderboard and flatten each [score, stderr, n_samples] list
ds = load_dataset("GSMA/leaderboard", split="train")
df = ds.to_pandas()

benchmarks = ['teleqna', 'telelogs', 'telemath', '3gpp_tsg']
for bench in benchmarks:
    df[f'{bench}_score'] = df[bench].apply(lambda x: x[0])
    df[f'{bench}_stderr'] = df[bench].apply(lambda x: x[1])
    df[f'{bench}_n'] = df[bench].apply(lambda x: x[2])

# Rank models by their mean score across benchmarks
df['mean'] = df[[f'{b}_score' for b in benchmarks]].mean(axis=1)
df['rank'] = df['mean'].rank(ascending=False).astype(int)
```

- [TeleMath](https://huggingface.co/datasets/netop/TeleMath)
- [TeleLogs](https://huggingface.co/datasets/netop/TeleLogs)
After:

---
license: apache-2.0
pretty_name: Open Telco Leaderboard Scores
language:
- en
task_categories:
- text-classification
- question-answering
tags:
- telecommunications
- 5g
- llm-evaluation
- benchmark
- leaderboard
configs:
- config_name: default
  data_files:
  - split: train
    path: leaderboard_scores.csv
---
# Open Telco Leaderboard Scores

Leaderboard scores extracted from Inspect evaluation logs for 30 models.

This dataset currently publishes **scores only** (no energy metrics).

## Files

- `leaderboard_scores.csv`: Flat table for the dataset viewer.
- `leaderboard_scores.json`: Structured JSON with per-model benchmark scores, stderr, sample counts, and source eval file paths.
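The exact layout of the JSON file is not spelled out here; as a rough sketch, assuming one nested record per model with the fields the description mentions (the names `benchmarks`, `score`, `stderr`, `n`, and `eval_file` are illustrative assumptions, not the published schema), a score could be read like this:

```python
import json

# Hypothetical record shaped like the description above; field names are
# assumptions for illustration, not the actual published schema.
record_json = """
{
  "model": "example/model-7b",
  "benchmarks": {
    "teleqna": {"score": 0.71, "stderr": 0.02, "n": 500, "eval_file": "logs/teleqna.eval"}
  }
}
"""

record = json.loads(record_json)
teleqna = record["benchmarks"]["teleqna"]
print(teleqna["score"], teleqna["n"])  # -> 0.71 500
```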
## Schema (`leaderboard_scores.csv`)

Core columns:

- `model`
- `rank`
- `average`
- `benchmarks_completed`

Per-benchmark columns (for each benchmark):

- `<benchmark>_score`
- `<benchmark>_stderr`
- `<benchmark>_n`

Benchmarks:

- `teleqna`
- `teletables`
- `oranbench`
- `srsranbench`
- `telemath`
- `telelogs`
- `three_gpp`
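Given this column layout, the `average` column can be cross-checked by averaging the per-benchmark `*_score` columns. A minimal pandas sketch over two invented rows (values and model names are made up for demonstration; the real file has one row per evaluated model and all seven benchmarks):

```python
import pandas as pd

# Two illustrative rows following the schema above (values are invented)
df = pd.DataFrame({
    "model": ["model-a", "model-b"],
    "teleqna_score": [0.71, 0.64],
    "teleqna_stderr": [0.02, 0.03],
    "teleqna_n": [500, 500],
    "telemath_score": [0.55, 0.61],
})

# Recompute the average over whatever *_score columns are present
score_cols = [c for c in df.columns if c.endswith("_score")]
df["average"] = df[score_cols].mean(axis=1)
print(df[["model", "average"]])
```

The suffix convention makes the wide table easy to slice programmatically without hard-coding the benchmark list.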
## Usage

```python
from datasets import load_dataset

ds = load_dataset("GSMA/leaderboard", split="train")
print(ds.column_names)
print(ds[0])
```