Enrique Molero committed · Commit 4002e6c · verified · 1 Parent(s): d9dcea1

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +89 -3
  2. evals.parquet +3 -0
README.md CHANGED
@@ -1,3 +1,89 @@
- ---
- license: mit
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - text-classification
+ - question-answering
+ language:
+ - en
+ tags:
+ - telecommunications
+ - 5G
+ - LLM-evaluation
+ - benchmark
+ - leaderboard
+ pretty_name: Open Telco Leaderboard
+ size_categories:
+ - n<1K
+ ---
+
+ # Open Telco Leaderboard Dataset
+
+ This dataset contains evaluation results from the **Open Telco** benchmark suite, which measures LLM performance on telecommunications-specific tasks.
+
+ ## Dataset Description
+
+ The leaderboard tracks model performance across four specialized telecom benchmarks:
+
+ | Benchmark | Description | Samples |
+ |-----------|-------------|---------|
+ | **TeleQnA** | Q&A pairs testing telecom knowledge across lexicon, research, and standards | 10,000 |
+ | **TeleMath** | Mathematical reasoning in telecommunications (signal processing, network optimization) | 500 |
+ | **TeleLogs** | Root cause analysis for 5G network throughput degradation across 8 failure modes | 1,000+ |
+ | **3GPP-TSG** | Classification of technical documents by 3GPP working group (RAN, SA, CT) | 5,000+ |
+
+ ## Data Schema
+
+ The Parquet file contains the following columns:
+
+ | Column | Type | Description |
+ |--------|------|-------------|
+ | `rank` | int | Model's overall ranking position |
+ | `model` | string | Model name (e.g., "gpt-5.2", "claude-opus-4.5") |
+ | `provider` | string | Provider name (e.g., "OpenAI", "Anthropic") |
+ | `repo` | string | Full model path |
+ | `mean` | float | Mean score across all benchmarks |
+ | `teleqna` | float | TeleQnA benchmark score |
+ | `teleqna_stderr` | float | TeleQnA standard error |
+ | `telelogs` | float | TeleLogs benchmark score |
+ | `telelogs_stderr` | float | TeleLogs standard error |
+ | `telemath` | float | TeleMath benchmark score |
+ | `telemath_stderr` | float | TeleMath standard error |
+ | `tsg` | float | 3GPP-TSG benchmark score |
+ | `tsg_stderr` | float | 3GPP-TSG standard error |
+
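Since each benchmark score is paired with a standard-error column, an approximate 95% confidence interval can be derived per benchmark as score ± 1.96 × stderr. A minimal sketch, using made-up rows that mirror the schema above (not real leaderboard results):

```python
import pandas as pd

# Illustrative rows mirroring the schema above (not real leaderboard results)
df = pd.DataFrame({
    "model": ["model-a", "model-b"],
    "teleqna": [0.72, 0.65],
    "teleqna_stderr": [0.01, 0.02],
})

# Approximate 95% confidence interval: score +/- 1.96 * stderr
df["teleqna_ci_low"] = df["teleqna"] - 1.96 * df["teleqna_stderr"]
df["teleqna_ci_high"] = df["teleqna"] + 1.96 * df["teleqna_stderr"]
print(df)
```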
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("GSMA/leaderboard")
+ df = dataset["train"].to_pandas()
+
+ # View top models
+ print(df.sort_values("mean", ascending=False).head(10))
+ ```
+
+ ## How Results Are Generated
+
+ Evaluations are run using the [Inspect AI](https://inspect.ai-safety-institute.org.uk/) framework. The workflow:
+
+ 1. Models are evaluated against each benchmark
+ 2. Results are logged as `.eval` files
+ 3. Scores are aggregated into this parquet file
+ 4. The leaderboard website displays the rankings
+
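The aggregation in step 3 is not spelled out here; assuming the `mean` column is an unweighted mean of the four benchmark scores, it could be reproduced roughly like this (all values are made up for illustration):

```python
import pandas as pd

# Made-up per-benchmark scores, for illustration only
scores = pd.DataFrame({
    "model": ["model-a", "model-b", "model-c"],
    "teleqna": [0.70, 0.80, 0.60],
    "telelogs": [0.50, 0.40, 0.55],
    "telemath": [0.30, 0.45, 0.35],
    "tsg": [0.90, 0.85, 0.70],
})

benchmarks = ["teleqna", "telelogs", "telemath", "tsg"]
scores["mean"] = scores[benchmarks].mean(axis=1)
# rank 1 = highest mean, matching the `rank` column in the schema
scores["rank"] = scores["mean"].rank(ascending=False, method="min").astype(int)
print(scores.sort_values("rank"))
```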
+ ## Related Datasets
+
+ - [TeleQnA](https://huggingface.co/datasets/netop/TeleQnA) - Telecom Q&A benchmark
+ - [TeleMath](https://huggingface.co/datasets/netop/TeleMath) - Telecom math reasoning
+ - [TeleLogs](https://huggingface.co/datasets/netop/TeleLogs) - Network log analysis
+ - [3GPP-TSG](https://huggingface.co/datasets/eaguaida/gsma_sample) - 3GPP document classification
+
+ ## Links
+
+ - [Open Telco Website](https://gsma-research.github.io/open_telco/)
+ - [GitHub Repository](https://github.com/gsma-research/open_telco)
+
+ ## License
+
+ Apache 2.0
evals.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56db654b1ad079f16a8c59a44806d918005cec7c8738a939f34b250df4a63283
+ size 47503