nielsr (HF Staff) committed · verified
Commit c428aa6 · Parent(s): b7b05b9

Improve dataset card: Add task categories, update language, add sample usage and acknowledgements


This PR improves the dataset card for VitaBench by:

* **Updating metadata**:
  * Adding `task_categories: ['text-generation']` for better discoverability of the benchmark's domain.
  * Updating the `language` tag from `zh` to `['zh', 'en']`, as the benchmark and its documentation support both Chinese and English tasks, as indicated by the `vita run` command.
  * Adding `benchmark` and `llm-agents` to the `tags` for more precise categorization.
* **Adding comprehensive usage instructions**:
  * A new "Sample Usage" section has been added, incorporating the "Quick Start" guide from the GitHub repository. This includes detailed steps for installation, LLM configuration, and running/re-evaluating/viewing evaluations, complete with code snippets.
  * The content from the redundant `## 🛠️ Environment` section has been merged into the new "Sample Usage" section.

Files changed (1): README.md (+111 −3)
README.md CHANGED
@@ -1,10 +1,16 @@
  ---
- license: mit
  language:
  - zh
+ - en
+ license: mit
  tags:
  - agent
+ - benchmark
+ - llm-agents
+ task_categories:
+ - text-generation
  ---
+
  <div align=center><h1>
  🌱VitaBench: Benchmarking LLM Agents<br>
  with Versatile Interactive Tasks
@@ -47,10 +53,104 @@ Statistics of databases and environments:
  | &nbsp;&nbsp; General | 6 | 6 | 5 | 5 |
  | **Tasks** | 100 | 100 | 100 | 100 |

- ## 🛠️ Environment
+ ## Sample Usage

  VitaBench provides an evaluation framework that supports model evaluations on both single-domain and cross-domain tasks through flexible configuration. For cross-domain evaluation, simply connect multiple domain names with commas—this will automatically merge the environments of the specified domains into a unified environment. Please visit our GitHub repository [vitabench](https://github.com/meituan/vitabench) for more detailed instructions.
 
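+ Once installed and configured (see the steps below), a cross-domain evaluation is requested simply by comma-joining domain names in `--domain`. A minimal sketch (`my-model` is a hypothetical entry in your `models.yaml`):
+
+ ```bash
+ # Hypothetical example: merge the delivery, instore, and ota environments
+ # into one unified environment and evaluate a single task in it.
+ vita run \
+   --domain delivery,instore,ota \
+   --user-llm my-model \
+   --agent-llm my-model \
+   --evaluator-llm my-model \
+   --num-tasks 1
+ ```
+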
+ ### Installation
+
+ 1. Clone the repository:
+ ```bash
+ git clone https://github.com/meituan/vitabench.git
+ cd vitabench
+ ```
+
+ 2. Install Vita-Bench:
+
+ ```bash
+ pip install -e .
+ ```
+
+ This will enable you to run the `vita` command.
+
+ ### Setup LLM Configurations
+
+ If you want to customize the location of the `models.yaml` file, you can specify the environment variable `VITA_MODEL_CONFIG_PATH` (the default path from the repository root is `src/vita/models.yaml`). For example:
+
+ ```bash
+ export VITA_MODEL_CONFIG_PATH=/path/to/your/model/configuration
+ ```
+
+ Example `models.yaml` file:
+
+ ```yaml
+ default:
+   base_url: <base url>
+   temperature: <temperature>
+   max_input_tokens: <max input tokens>
+   headers:
+     Accept: "*/*"
+     Accept-Encoding: "gzip, deflate, br"
+     Content-Type: "application/json"
+     Authorization: "Bearer <api key>"
+     Connection: "keep-alive"
+     Cookie: <cookie>
+     User-Agent: <user agent>
+
+ models:
+   - name: <model name>
+     max_tokens: <max completion tokens (for some models, use max_completion_tokens)>
+     max_input_tokens: <max input tokens>
+     reasoning_effort: "high"
+     thinking:
+       type: "enabled"
+       budget_tokens: <budget tokens>
+     cost_1m_token_dollar:
+       prompt_price: <dollars per 1 million tokens>
+       completion_price: <dollars per 1 million tokens>
+ ```
+
+ The default configuration applies to all models; a custom model configuration can override the default values.
+
+ ### Run Evaluations
+
+ To run a test evaluation:
+
+ ```bash
+ vita run \
+   --domain <domain> \ # supports a single domain (delivery/instore/ota) or cross-domain ([delivery,instore,ota])
+   --user-llm <model name> \ # model name in models.yaml
+   --agent-llm <model name> \ # model name in models.yaml
+   --enable-think \ # enable think mode for the agent; default is False
+   --evaluator-llm <model name> \ # the LLM to use for evaluation
+   --num-trials 1 \ # (optional) the number of times each task is run; default is 1
+   --num-tasks 1 \ # (optional) the number of tasks to run; default is all tasks
+   --task-ids 1 \ # (optional) run only the tasks with the given IDs; default is to run all tasks
+   --max-steps 300 \ # (optional) the maximum number of steps to run the simulation; default is 300
+   --max-concurrency 1 \ # (optional) the maximum number of concurrent simulations to run; default is 1
+   --csv-output <csv path> \ # (optional) path to a CSV file to append results to
+   --language <chinese/english> # (optional) the language for prompts and tasks; choices: chinese, english; default is chinese
+ ```
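+
+ As a concrete illustration, a minimal single-domain invocation might look like this (a sketch only; `my-model` is a hypothetical entry in your `models.yaml`):
+
+ ```bash
+ # Hypothetical example: run one delivery-domain task once, using the same
+ # model as user simulator, agent, and evaluator.
+ vita run \
+   --domain delivery \
+   --user-llm my-model \
+   --agent-llm my-model \
+   --evaluator-llm my-model \
+   --num-tasks 1 \
+   --language english
+ ```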
+
+ Results will be saved in `data/simulations/`.
+
+ ### Re-evaluating Simulations
+
+ Re-evaluate existing simulations instead of running new ones:
+ ```bash
+ vita run \
+   --re-evaluate-file <simulation file path> \
+   --evaluation-type <evaluation type> \
+   --evaluator-llm <evaluation model> \
+   --save-to <new simulation file path>
+ ```
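+
+ For example, re-scoring a saved simulation with a different evaluator might look like this (a sketch; the file paths and model name are hypothetical):
+
+ ```bash
+ # Hypothetical example: re-evaluate an existing simulation file and save
+ # the re-scored result to a new file.
+ vita run \
+   --re-evaluate-file data/simulations/run_001.json \
+   --evaluator-llm my-evaluator \
+   --save-to data/simulations/run_001_rescored.json
+ ```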
+
+ ### Viewing Results
+ ```bash
+ vita view \
+   --file <simulation file path> # If provided, only view the given simulation
+ ```
+
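+ For instance (the path below is hypothetical; omitting `--file` presumably lists all saved simulations):
+
+ ```bash
+ # View a single saved simulation from the output directory.
+ vita view --file data/simulations/run_001.json
+ ```
+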
  ## 🔎 Citation

  If you find our work helpful or relevant to your research, please kindly cite our paper:

@@ -64,6 +164,14 @@ If you find our work helpful or relevant to your research, please kindly cite ou
  }
  ```

+ ## 🤗 Acknowledgement
+
+ We adapted part of the [tau2-bench](https://github.com/sierra-research/tau2-bench) codebase in building our evaluation framework, and we greatly appreciate their contributions to the agent community.
+
  ## 📜 License

- This project is licensed under the MIT License - see the [LICENSE](./LICENSE) file for details.
+ This project is licensed under the MIT License - see the [LICENSE](./LICENSE) file for details.
+
+ ## 📪 Support
+
+ For questions and support, please open an issue on GitHub or contact the maintainers.