Enrique Molero committed on
Commit fa2174b · verified · 1 Parent(s): 4002e6c

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +42 -46
  2. evals.parquet +2 -2
README.md CHANGED
@@ -18,72 +18,68 @@ size_categories:
 
  # Open Telco Leaderboard Dataset
 
- This dataset contains evaluation results from the **Open Telco** benchmark suite, which measures LLM performance on telecommunications-specific tasks.
 
- ## Dataset Description
-
- The leaderboard tracks model performance across four specialized telecom benchmarks:
-
- | Benchmark | Description | Samples |
- |-----------|-------------|---------|
- | **TeleQnA** | 10,000 Q&A pairs testing telecom knowledge across lexicon, research, and standards | 10,000 |
- | **TeleMath** | Mathematical reasoning in telecommunications (signal processing, network optimization) | 500 |
- | **TeleLogs** | Root cause analysis for 5G network throughput degradation across 8 failure modes | 1,000+ |
- | **3GPP-TSG** | Classification of technical documents by 3GPP working group (RAN, SA, CT) | 5,000+ |
-
- ## Data Schema
-
- The parquet file contains the following columns:
 
  | Column | Type | Description |
  |--------|------|-------------|
- | `rank` | int | Model's overall ranking position |
  | `model` | string | Model name (e.g., "gpt-5.2", "claude-opus-4.5") |
- | `provider` | string | Provider name (e.g., "OpenAI", "Anthropic") |
- | `repo` | string | Full model path |
- | `mean` | float | Mean score across all benchmarks |
- | `teleqna` | float | TeleQnA benchmark score |
- | `teleqna_stderr` | float | TeleQnA standard error |
- | `telelogs` | float | TeleLogs benchmark score |
- | `telelogs_stderr` | float | TeleLogs standard error |
- | `telemath` | float | TeleMath benchmark score |
- | `telemath_stderr` | float | TeleMath standard error |
- | `tsg` | float | 3GPP-TSG benchmark score |
- | `tsg_stderr` | float | 3GPP-TSG standard error |
 
  ## Usage
 
  ```python
  from datasets import load_dataset
 
- dataset = load_dataset("GSMA/leaderboard")
- df = dataset["train"].to_pandas()
 
- # View top models
- print(df.sort_values("mean", ascending=False).head(10))
  ```
 
- ## How Results Are Generated
 
- Evaluations are run using the [Inspect AI](https://inspect.ai-safety-institute.org.uk/) framework. The workflow:
 
- 1. Models are evaluated against each benchmark
- 2. Results are logged as `.eval` files
- 3. Scores are aggregated into this parquet file
- 4. The leaderboard website displays the rankings
 
  ## Related Datasets
 
- - [TeleQnA](https://huggingface.co/datasets/netop/TeleQnA) - Telecom Q&A benchmark
- - [TeleMath](https://huggingface.co/datasets/netop/TeleMath) - Telecom math reasoning
- - [TeleLogs](https://huggingface.co/datasets/netop/TeleLogs) - Network log analysis
- - [3GPP-TSG](https://huggingface.co/datasets/eaguaida/gsma_sample) - 3GPP document classification
 
  ## Links
 
  - [Open Telco Website](https://gsma-research.github.io/open_telco/)
- - [GitHub Repository](https://github.com/gsma-research/open_telco)
-
- ## License
-
- Apache 2.0
 
 
  # Open Telco Leaderboard Dataset
 
+ Raw benchmark scores for LLMs evaluated on telecommunications-specific tasks. This minimal dataset contains only the essential fields; all derived metrics (rank, mean, TCI) should be calculated from these scores.
 
+ ## Schema
 
  | Column | Type | Description |
  |--------|------|-------------|
  | `model` | string | Model name (e.g., "gpt-5.2", "claude-opus-4.5") |
+ | `provider` | string | Provider (e.g., "OpenAI", "Anthropic") |
+ | `teleqna` | float | TeleQnA benchmark score (0-100) |
+ | `teleqna_stderr` | float | TeleQnA standard error |
+ | `telelogs` | float | TeleLogs benchmark score (0-100) |
+ | `telelogs_stderr` | float | TeleLogs standard error |
+ | `telemath` | float | TeleMath benchmark score (0-100) |
+ | `telemath_stderr` | float | TeleMath standard error |
+ | `tsg` | float | 3GPP-TSG benchmark score (0-100) |
+ | `tsg_stderr` | float | 3GPP-TSG standard error |
+
+ ## Benchmarks
+
+ | Benchmark | Description | Samples |
+ |-----------|-------------|---------|
+ | **TeleQnA** | Q&A pairs testing telecom knowledge | 10,000 |
+ | **TeleMath** | Mathematical reasoning in telecommunications | 500 |
+ | **TeleLogs** | Root cause analysis for 5G network issues | 1,000+ |
+ | **3GPP-TSG** | Classification of 3GPP technical documents | 5,000+ |
 
  ## Usage
 
  ```python
  from datasets import load_dataset
+ import pandas as pd
+
+ # Load dataset
+ ds = load_dataset("GSMA/leaderboard")
+ df = ds["train"].to_pandas()
 
+ # Calculate mean score
+ benchmarks = ['teleqna', 'telelogs', 'telemath', 'tsg']
+ df['mean'] = df[benchmarks].mean(axis=1)
 
+ # Rank by mean score
+ df['rank'] = df['mean'].rank(ascending=False).astype(int)
+
+ print(df.sort_values('rank'))
  ```
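The mean computed in the usage snippet ignores the per-benchmark standard errors the dataset ships alongside each score. If the benchmark errors are treated as independent, a standard error for that mean can be propagated as well. A minimal sketch, using made-up illustrative values rather than real rows from the dataset:

```python
import math

# Hypothetical per-benchmark scores and standard errors for one model,
# shaped like the dataset's columns (illustrative values only).
row = {
    "teleqna": 72.4, "teleqna_stderr": 0.45,
    "telelogs": 55.1, "telelogs_stderr": 1.57,
    "telemath": 38.9, "telemath_stderr": 2.18,
    "tsg": 81.0, "tsg_stderr": 0.55,
}

benchmarks = ["teleqna", "telelogs", "telemath", "tsg"]

# Mean of the four benchmark scores.
mean = sum(row[b] for b in benchmarks) / len(benchmarks)

# Assuming independent errors, the standard error of the mean is
# sqrt(sum of squared per-benchmark stderrs) / N.
mean_stderr = math.sqrt(
    sum(row[f"{b}_stderr"] ** 2 for b in benchmarks)
) / len(benchmarks)

print(f"mean = {mean:.2f} +/- {mean_stderr:.2f}")
```

The independence assumption is a simplification; correlated errors across benchmarks would call for a full covariance treatment.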
 
+ ## Derived Metrics
 
+ These fields are NOT stored but can be calculated:
 
+ - **rank**: Sort by mean or TCI score
+ - **mean**: Average of the 4 benchmark scores
+ - **TCI (Telco Capability Index)**: IRT-inspired score using benchmark difficulties
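The README does not spell out the TCI formula, only that it is IRT-inspired and uses benchmark difficulties. As a rough sketch of what a difficulty-weighted aggregate could look like (the `difficulty` weights and scores below are invented for illustration, not the project's actual parameters):

```python
# Illustrative only: the real TCI is defined by the Open Telco project.
# This sketch assumes a simple difficulty-weighted average, where a
# harder benchmark contributes more weight to the index.
difficulty = {"teleqna": 0.8, "telelogs": 1.4, "telemath": 1.6, "tsg": 1.0}
scores = {"teleqna": 72.4, "telelogs": 55.1, "telemath": 38.9, "tsg": 81.0}

def tci(scores, difficulty):
    """Difficulty-weighted aggregate of benchmark scores (0-100 scale)."""
    total_weight = sum(difficulty.values())
    return sum(difficulty[b] * scores[b] for b in scores) / total_weight

print(f"TCI = {tci(scores, difficulty):.2f}")
```

A full IRT treatment would instead fit per-benchmark difficulty and discrimination parameters from the score matrix; consult the Open Telco repository for the authoritative definition.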
 
  ## Related Datasets
 
+ - [TeleQnA](https://huggingface.co/datasets/netop/TeleQnA)
+ - [TeleMath](https://huggingface.co/datasets/netop/TeleMath)
+ - [TeleLogs](https://huggingface.co/datasets/netop/TeleLogs)
+ - [3GPP-TSG](https://huggingface.co/datasets/eaguaida/gsma_sample)
 
  ## Links
 
  - [Open Telco Website](https://gsma-research.github.io/open_telco/)
+ - [GitHub](https://github.com/gsma-research/open_telco)
evals.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:56db654b1ad079f16a8c59a44806d918005cec7c8738a939f34b250df4a63283
- size 47503
+ oid sha256:e1d174838cb72b1bb7ac747a9df6e5eff23ea8ad3c9c2c93947731a9230278b9
+ size 6608