Zhejian committed
Commit 6c8a35e · 1 Parent(s): 539f362

Files changed (3):

1. README.md +1 -44
2. app.py +1 -0
3. requirements.txt +2 -1
README.md CHANGED

````diff
@@ -1,46 +1,3 @@
----
-title: CAIA Benchmark Leaderboard
-emoji: 🥇
-colorFrom: green
-colorTo: indigo
-sdk: gradio
-app_file: app.py
-pinned: true
-license: apache-2.0
-short_description: Duplicate this leaderboard to initialize your own!
-sdk_version: 5.19.0
----
 
-# Start the configuration
 
-Most of the variables to change for a default leaderboard are in `src/env.py` (replace the path for your leaderboard) and `src/about.py` (for tasks).
-
-Results files should have the following format and be stored as json files:
-```json
-{
-    "config": {
-        "model_dtype": "torch.float16", # or torch.bfloat16 or 8bit or 4bit
-        "model_name": "path of the model on the hub: org/model",
-        "model_sha": "revision on the hub",
-    },
-    "results": {
-        "task_name": {
-            "metric_name": score,
-        },
-        "task_name2": {
-            "metric_name": score,
-        }
-    }
-}
-```
-
-Request files are created automatically by this tool.
-
-If you encounter problem on the space, don't hesitate to restart it to remove the create eval-queue, eval-queue-bk, eval-results and eval-results-bk created folder.
-
-# Code logic for more complex edits
-
-You'll find
-- the main table' columns names and properties in `src/display/utils.py`
-- the logic to read all results and request files, then convert them in dataframe lines, in `src/leaderboard/read_evals.py`, and `src/populate.py`
-- the logic to allow or filter submissions in `src/submission/submit.py` and `src/submission/check_validity.py`
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
````
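The removed README documented the schema for leaderboard results files. As a quick illustration of that schema, a results file could be produced like this (the model name, revision, and scores are made up for the example):

```python
import json

# Hypothetical entry following the schema from the removed README:
# a "config" block identifying the model, and a "results" block
# mapping task names to metric scores.
result = {
    "config": {
        "model_dtype": "torch.float16",
        "model_name": "my-org/my-model",  # path of the model on the hub
        "model_sha": "abc1234",           # revision on the hub
    },
    "results": {
        "task_name": {"metric_name": 0.75},
        "task_name2": {"metric_name": 0.62},
    },
}

# Serialize to the JSON stored alongside other results files.
payload = json.dumps(result, indent=2)
print(payload)
```

This is a sketch only; the actual file naming and storage location were handled by the leaderboard template code, not shown here.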
app.py CHANGED

```diff
@@ -21,6 +21,7 @@ from content import (
 )
 from evaluator import Evaluator
 from score import init_evaluators, score_item
+from loguru import logger
 
 
 # from src.envs import API, EVAL_REQUESTS_PATH, EVAL_RESULTS_PATH, QUEUE_REPO, REPO_ID, RESULTS_REPO, TOKEN
```
requirements.txt CHANGED

```diff
@@ -17,4 +17,5 @@ sentencepiece
 pydantic==2.10.1
 openai==1.78.1
 tiktoken==0.9.0
-tenacity===9.1.2
+tenacity===9.1.2
+loguru
```