aarabil committed on
Commit 70bcc8b · verified · 1 parent: a5f96b2

Initial BTZSC Phase 1 results dataset

Files changed (2)
  1. README.md +29 -6
  2. SUBMISSION.md +31 -6
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 pretty_name: BTZSC Leaderboard Results
-license: apache-2.0
+license: mit
 tags:
 - leaderboard
 - text-classification
@@ -12,14 +12,21 @@ language:
 
 # BTZSC Results
 
-This repository stores model submissions for the **BTZSC** leaderboard:
+This repository stores model submissions for the **BTZSC** leaderboard.
 
 **BTZSC: A Benchmark for Zero-Shot Text Classification across Cross-Encoders, Embedding Models, Rerankers and LLMs**.
 
-- Paper: https://openreview.net/pdf?id=IxMryAz2p3
-- Eval harness: https://github.com/btzsc/btzsc
+- Paper: https://openreview.net/forum?id=IxMryAz2p3
+- Eval harness: https://github.com/IliasAarab/btzsc
 - Leaderboard Space: https://huggingface.co/spaces/btzsc/btzsc-leaderboard
 
+Benchmark summary:
+
+- 22 English single-label datasets
+- 4 task families: sentiment, topic, intent, emotion
+- Strict zero-shot protocol (no BTZSC-label training/tuning)
+- Primary metric: macro-F1
+
 ## What this repo contains
 
 - One JSON file per model evaluation run in `results/<model_type>/<model-name>.json`
@@ -34,10 +41,26 @@ Each submission follows schema version `1.0` with:
 - `evaluation`: harness versioning and runtime metadata
 - `results.overall`: averaged macro F1 / accuracy / macro precision / macro recall
 - `results.by_task`: sentiment/topic/intent/emotion aggregates
-- `results.by_dataset`: per-dataset metric blocks (ground truth)
+- `results.by_dataset`: per-dataset metric blocks
 
 ## Contributing results
 
-See [SUBMISSION.md](SUBMISSION.md) for exact instructions.
+Destination path format:
+
+- `results/<model_type>/<model-name>.json`
+
+Recommended flow:
+
+1. Export with the official harness (`btzsc evaluate ... --output-json ...`).
+2. Validate locally (`python validate.py results/<model_type>/<model-name>.json`).
+3. Add your file at the required path.
+4. Submit by one of these methods:
+   - Web UI upload on Hugging Face (no clone required)
+   - Git workflow (direct push if you have write access, otherwise fork + PR)
+   - API workflow via `huggingface_hub` with `create_pr=True` (PR-based)
+
+In short: **add** means placing the JSON at the correct path; **submit** means publishing that change to this remote repo.
+
+See [SUBMISSION.md](SUBMISSION.md) for full requirements and review checks.
 
 PRs adding result files are validated in CI with `validate.py`.
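The schema fields named in the updated README can be sketched as a minimal, hypothetical submission file. This is only a shape illustration under the field names listed above (`model`, `evaluation`, `results.overall`, `results.by_task`, `results.by_dataset`); the exact nesting, keys, and values are placeholders, not the authoritative schema from `validate.py`:

```python
import json

# Hypothetical submission following schema version "1.0".
# Section names mirror the README; all values are illustrative only.
submission = {
    "schema_version": "1.0",
    "model": {
        "name": "e5-base-v2",       # placeholder model name
        "type": "embedding",        # matches the results/<model_type>/ folder
    },
    "evaluation": {
        "harness_version": "0.1.0", # harness versioning / runtime metadata
    },
    "results": {
        "overall": {                # averaged macro F1 / accuracy / precision / recall
            "macro_f1": 0.61,
            "accuracy": 0.64,
            "macro_precision": 0.62,
            "macro_recall": 0.60,
        },
        "by_task": {                # sentiment/topic/intent/emotion aggregates
            "sentiment": {"macro_f1": 0.70},
        },
        "by_dataset": {             # per-dataset metric blocks
            "sst2": {"macro_f1": 0.72},
        },
    },
}

# Derive the required destination path from the model metadata.
path = f"results/{submission['model']['type']}/{submission['model']['name']}.json"
print(path)  # results/embedding/e5-base-v2.json
print(json.dumps(submission, indent=2).splitlines()[1])
```

Always run the file through `validate.py` rather than trusting this sketch, since the real schema may require additional fields.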
SUBMISSION.md CHANGED
@@ -40,12 +40,37 @@ python validate.py results/<model_type>/<model-name>.json
 
 ### 4) Open a Pull Request
 
-1. Fork this repository.
-2. Place your JSON file in `results/<model_type>/<model-name>.json`.
-3. Open a PR with:
-   - The result JSON file.
-   - A short model description (architecture/training notes).
-   - Confirmation that model weights are public on Hugging Face Hub.
+All submission actions in this step target the results dataset repo `btzsc/btzsc-results`: https://huggingface.co/datasets/btzsc/btzsc-results.
+
+Required destination path:
+
+- `results/<model_type>/<model-name>.json`
+
+Example:
+
+- `results/embedding/e5-base-v2.json`
+
+Choose one submission workflow:
+
+1. Web UI (no clone required)
+   - Open https://huggingface.co/datasets/btzsc/btzsc-results and upload the JSON in **Files and versions** at the required path.
+   - If you do not have write access, fork `btzsc/btzsc-results` and open a PR to `btzsc/btzsc-results`.
+
+2. Git workflow (clone/fork + push)
+   - Clone or fork `https://huggingface.co/datasets/btzsc/btzsc-results`.
+   - Add your JSON at the required path.
+   - Push directly (if you have write access) or push to your fork and open a PR to `btzsc/btzsc-results`.
+
+3. API workflow (`huggingface_hub`, PR-based)
+   - Authenticate first (`huggingface-cli login` or `HF_TOKEN`).
+   - Use `create_pr=True` against repo_id `btzsc/btzsc-results` to open a PR branch programmatically.
+   - If PR creation is restricted for your account, upload to your fork and open a PR to `btzsc/btzsc-results` manually.
+
+For every PR, include:
+
+- The result JSON file.
+- A short model description (architecture/training notes).
+- Confirmation that model weights are public on Hugging Face Hub.
 
 ## Merge checks
 
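The API workflow (option 3 in the new SUBMISSION.md text) can be sketched with `huggingface_hub`'s `HfApi.upload_file`, which supports `create_pr=True`. The helper names below (`result_path`, `submit_result`) are ours for illustration, not part of any official BTZSC tooling; authentication via `huggingface-cli login` or `HF_TOKEN` is assumed:

```python
def result_path(model_type: str, model_name: str) -> str:
    """Build the required destination path inside the dataset repo."""
    return f"results/{model_type}/{model_name}.json"


def submit_result(local_json: str, model_type: str, model_name: str):
    """Upload a validated result JSON as a PR against btzsc/btzsc-results."""
    # Imported lazily so the path helper works without huggingface_hub installed.
    from huggingface_hub import HfApi

    api = HfApi()
    return api.upload_file(
        path_or_fileobj=local_json,
        path_in_repo=result_path(model_type, model_name),
        repo_id="btzsc/btzsc-results",
        repo_type="dataset",   # a dataset repo, not a model repo
        create_pr=True,        # open a PR branch instead of pushing to main
        commit_message=f"Add {model_name} results",
    )


# Usage (performs a network call, so not run here):
# submit_result("e5-base-v2.json", "embedding", "e5-base-v2")
print(result_path("embedding", "e5-base-v2"))  # results/embedding/e5-base-v2.json
```

Validate the JSON with `validate.py` before uploading; CI will reject files that fail the schema check regardless of how they were submitted.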