Fhrozen committed on
Commit 73f1431 · 1 Parent(s): ebca337

updates on main files
Files changed (5):
  1. .gitignore +1 -0
  2. .python-version +1 -0
  3. README.md +0 -34
  4. pyproject.toml +45 -1
  5. requirements.txt +1 -1
.gitignore CHANGED
@@ -1,5 +1,6 @@
 auto_evals/
 venv/
+.venv/
 __pycache__/
 .env
 .ipynb_checkpoints
.python-version ADDED
@@ -0,0 +1 @@
+3.12
README.md CHANGED
@@ -12,37 +12,3 @@ sdk_version: 5.43.1
 tags:
 - leaderboard
 ---
-
-# Start the configuration
-
-Most of the variables to change for a default leaderboard are in `src/env.py` (replace the path for your leaderboard) and `src/about.py` (for tasks).
-
-Results files should have the following format and be stored as json files:
-```json
-{
-    "config": {
-        "model_dtype": "torch.float16", # or torch.bfloat16 or 8bit or 4bit
-        "model_name": "path of the model on the hub: org/model",
-        "model_sha": "revision on the hub",
-    },
-    "results": {
-        "task_name": {
-            "metric_name": score,
-        },
-        "task_name2": {
-            "metric_name": score,
-        }
-    }
-}
-```
-
-Request files are created automatically by this tool.
-
-If you encounter problem on the space, don't hesitate to restart it to remove the create eval-queue, eval-queue-bk, eval-results and eval-results-bk created folder.
-
-# Code logic for more complex edits
-
-You'll find
-- the main table' columns names and properties in `src/display/utils.py`
-- the logic to read all results and request files, then convert them in dataframe lines, in `src/leaderboard/read_evals.py`, and `src/populate.py`
-- the logic to allow or filter submissions in `src/submission/submit.py` and `src/submission/check_validity.py`
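The README section removed above documented the expected results-file format. As a minimal sketch of what a structural check against that format could look like (the `sample` values and the `is_valid_result` helper are hypothetical, not part of this repository):

```python
import json

# Hypothetical sample following the results-file format from the
# removed README section: a "config" block with model metadata and a
# "results" block mapping task names to metric-name/score pairs.
sample = {
    "config": {
        "model_dtype": "torch.float16",
        "model_name": "org/model",
        "model_sha": "main",
    },
    "results": {
        "task_name": {"metric_name": 0.5},
        "task_name2": {"metric_name": 0.7},
    },
}

def is_valid_result(data: dict) -> bool:
    """Check the two required top-level sections and their shapes."""
    config = data.get("config", {})
    required = {"model_dtype", "model_name", "model_sha"}
    if not required.issubset(config):
        return False
    results = data.get("results", {})
    return all(
        isinstance(metrics, dict)
        and all(isinstance(v, (int, float)) for v in metrics.values())
        for metrics in results.values()
    )

# Round-trip through JSON, since results are stored as json files.
print(is_valid_result(json.loads(json.dumps(sample))))  # True
```

This only verifies the shape described in the README, not the semantics (e.g. whether `model_sha` is a real revision on the Hub).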
pyproject.toml CHANGED
@@ -3,7 +3,51 @@
 select = ["E", "F"]
 ignore = ["E501"] # line too long (black is taking care of this)
 line-length = 119
-fixable = ["A", "B", "C", "D", "E", "F", "G", "I", "N", "Q", "S", "T", "W", "ANN", "ARG", "BLE", "COM", "DJ", "DTZ", "EM", "ERA", "EXE", "FBT", "ICN", "INP", "ISC", "NPY", "PD", "PGH", "PIE", "PL", "PT", "PTH", "PYI", "RET", "RSE", "RUF", "SIM", "SLF", "TCH", "TID", "TRY", "UP", "YTT"]
+fixable = [
+    "A",
+    "B",
+    "C",
+    "D",
+    "E",
+    "F",
+    "G",
+    "I",
+    "N",
+    "Q",
+    "S",
+    "T",
+    "W",
+    "ANN",
+    "ARG",
+    "BLE",
+    "COM",
+    "DJ",
+    "DTZ",
+    "EM",
+    "ERA",
+    "EXE",
+    "FBT",
+    "ICN",
+    "INP",
+    "ISC",
+    "NPY",
+    "PD",
+    "PGH",
+    "PIE",
+    "PL",
+    "PT",
+    "PTH",
+    "PYI",
+    "RET",
+    "RSE",
+    "RUF",
+    "SIM",
+    "SLF",
+    "TID",
+    "TRY",
+    "UP",
+    "YTT",
+]

 [tool.isort]
 profile = "black"
requirements.txt CHANGED
@@ -3,8 +3,8 @@ black
 datasets
 gradio
 gradio[oauth]
-gradio_leaderboard==0.0.13
 gradio_client
+plotly
 huggingface-hub>=0.18.0
 matplotlib
 numpy