---
colorTo: purple
sdk: static
pinned: false
---

# What is this?

This repository is a demo leaderboard template.
You can copy the leaderboard space and the two datasets (results and requests) to your org to get started with your own leaderboard!

The space does three things:
- stores user submissions and sends them to the `requests` dataset
- reads the submissions depending on their status/date of creation
- reads the results (results of running evaluations should be sent to `results`) and displays them in a leaderboard

You should use this leaderboard if you have your own backend and plan to run it elsewhere.
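For example, a backend could serialize each evaluation result in the leaderboard's expected shape and push it to the `results` dataset. A minimal sketch, assuming illustrative names throughout (`my-org/my-model`, `results_my-model.json`, `my-org/results`) and placeholder keys taken from the fake-results format below:

```python
import json

# Hypothetical result payload; "task_name1" and "metric_name" are the
# placeholder keys from this README, and "config" / "model_name" are
# assumed key names, not a confirmed schema.
result = {
    "config": {"model_name": "my-org/my-model"},
    "results": {"task_name1": {"metric_name": 0.5}},
}

# Write the payload to a local JSON file.
with open("results_my-model.json", "w") as f:
    json.dump(result, f, indent=2)

# The backend could then push the file to the results dataset, e.g.:
# from huggingface_hub import HfApi
# HfApi().upload_file(
#     path_or_fileobj="results_my-model.json",
#     path_in_repo="results_my-model.json",
#     repo_id="my-org/results",
#     repo_type="dataset",
# )
```

The upload call is shown commented out since it needs a valid token and an existing dataset repo.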
# Getting started

## Defining environment variables

To get started on your own leaderboard, you will need to edit two files:
- `src/envs.py` to define your own environment variables (like the name of the org this space was copied into)
- `src/display/about.py` with the tasks and the number of `few_shots` you want for your tasks
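As a rough sketch, `src/envs.py` might end up looking something like this; the variable names and repo layout here are illustrative assumptions, not the template's exact contents:

```python
import os

# Illustrative values: the org you copied the space and datasets into,
# and a write token stored as a Space secret.
OWNER = "my-org"
TOKEN = os.environ.get("HF_TOKEN")

# Assumed dataset naming, mirroring the "requests"/"results" split
# described in this README.
QUEUE_REPO = f"{OWNER}/requests"
RESULTS_REPO = f"{OWNER}/results"
```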
## Setting up fake results to initialize the leaderboard

Once this is done, you need to edit the "fake results" file to fit the format of your tasks: in the sub-dictionary `results`, replace `task_name1` and `metric_name` with the correct values you defined in Tasks above.
```
"results": {
    "task_name1": {
        "metric_name": 0
    }
}
```
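The swap above can also be done programmatically. A small sketch, using made-up task and metric names (`arc` / `acc_norm`) purely as examples:

```python
import json

# The placeholder structure from the fake results file.
fake = {"results": {"task_name1": {"metric_name": 0}}}

# Map each of your tasks to its metric; these values are illustrative.
my_tasks = {"arc": "acc_norm"}

# Rebuild the "results" sub-dictionary with your own task/metric names.
fake["results"] = {task: {metric: 0} for task, metric in my_tasks.items()}

print(json.dumps(fake, indent=2))
```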