<meta charset="utf-8" /><meta name="hf:doc:metadata" content="{&quot;title&quot;:&quot;Quicktour&quot;,&quot;local&quot;:&quot;quicktour&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Basic usage&quot;,&quot;local&quot;:&quot;basic-usage&quot;,&quot;sections&quot;:[],&quot;depth&quot;:2},{&quot;title&quot;:&quot;Evaluate a model on one or more GPUs&quot;,&quot;local&quot;:&quot;evaluate-a-model-on-one-or-more-gpus&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Data parallelism&quot;,&quot;local&quot;:&quot;data-parallelism&quot;,&quot;sections&quot;:[],&quot;depth&quot;:4},{&quot;title&quot;:&quot;Pipeline parallelism&quot;,&quot;local&quot;:&quot;pipeline-parallelism&quot;,&quot;sections&quot;:[],&quot;depth&quot;:4}],&quot;depth&quot;:2},{&quot;title&quot;:&quot;Backend configuration&quot;,&quot;local&quot;:&quot;backend-configuration&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Accelerate&quot;,&quot;local&quot;:&quot;accelerate&quot;,&quot;sections&quot;:[],&quot;depth&quot;:3},{&quot;title&quot;:&quot;VLLM&quot;,&quot;local&quot;:&quot;vllm&quot;,&quot;sections&quot;:[],&quot;depth&quot;:3}],&quot;depth&quot;:2},{&quot;title&quot;:&quot;Nanotron&quot;,&quot;local&quot;:&quot;nanotron&quot;,&quot;sections&quot;:[],&quot;depth&quot;:2}],&quot;depth&quot;:1}">
<h1 class="relative group"><a id="quicktour" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#quicktour"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Quicktour</span></h1> <div class="course-tip bg-gradient-to-br
dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1gm4bcl">We recommend using the <code>--help</code> flag to get more information about the
available options for each command.
<code>lighteval --help</code></p></div> <p data-svelte-h="svelte-170zkli">Lighteval can be used with a few different commands.</p> <ul data-svelte-h="svelte-15k7gob"><li><code>lighteval accelerate</code>: evaluate models on CPU or one or more GPUs using <a href="https://github.com/huggingface/accelerate" rel="nofollow">🤗
Accelerate</a></li> <li><code>lighteval nanotron</code>: evaluate models in distributed settings using <a href="https://github.com/huggingface/nanotron" rel="nofollow">⚡️
Nanotron</a></li> <li><code>lighteval vllm</code>: evaluate models on one or more GPUs using <a href="https://github.com/vllm-project/vllm" rel="nofollow">🚀
VLLM</a></li> <li><code>lighteval endpoint</code><ul><li><code>inference-endpoint</code>: evaluate models on one or more GPUs using <a href="https://huggingface.co/inference-endpoints/dedicated" rel="nofollow">🔗
Inference Endpoint</a></li> <li><code>tgi</code>: evaluate models on one or more GPUs using <a href="https://huggingface.co/docs/text-generation-inference/en/index" rel="nofollow">🔗 Text Generation Inference</a></li> <li><code>openai</code>: evaluate models through the <a href="https://platform.openai.com/" rel="nofollow">🔗 OpenAI API</a></li></ul></li></ul> <h2 class="relative group"><a id="basic-usage" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#basic-usage"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Basic usage</span></h2> <p data-svelte-h="svelte-1hliq6s">To evaluate <code>GPT-2</code> on the Truthful QA benchmark with <a href="https://github.com/huggingface/accelerate" rel="nofollow">🤗
Accelerate</a>, run:</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->lighteval accelerate \
<span class="hljs-string">"model_name=openai-community/gpt2"</span> \
<span class="hljs-string">"leaderboard|truthfulqa:mc|0|0"</span><!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-sen3yg">Here, we first choose a backend (either <code>accelerate</code>, <code>nanotron</code>, or <code>vllm</code>), and then specify the model and task(s) to run.</p> <p data-svelte-h="svelte-1nogtsd">The syntax for the model arguments is <code>key1=value1,key2=value2,etc</code>.
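As an illustration, a hypothetical helper (not part of lighteval's API) showing how such a comma-separated key=value string breaks down into individual arguments:

```python
# Hypothetical helper, for illustration only -- lighteval has its own parser.
# Splits a "key1=value1,key2=value2" model-args string into a dict.
def parse_model_args(arg_string: str) -> dict:
    pairs = {}
    for item in arg_string.split(","):
        key, _, value = item.partition("=")
        pairs[key.strip()] = value.strip()
    return pairs

print(parse_model_args("model_name=openai-community/gpt2,model_parallel=True"))
# {'model_name': 'openai-community/gpt2', 'model_parallel': 'True'}
```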
Valid key-value pairs correspond with the backend configuration, and are detailed <a href="#backend-configuration">below</a>.</p> <p data-svelte-h="svelte-1u8n90i">The syntax for the task specification might be a bit hard to grasp at first. The format is as follows:</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->{suite}|{task}|{num_few_shot}|{0 for strict `num_few_shots`, or 1 to allow a truncation if context size is too small}<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-17wosto">If the fourth value is set to 1, lighteval will check if the prompt (including the few-shot examples) is too long for the context size of the task or the model.
If so, the number of few-shot examples is automatically reduced.</p> <p data-svelte-h="svelte-1rp0bxx">All officially supported tasks can be found in the <a href="available-tasks">tasks list</a> and in the
<a href="https://github.com/huggingface/lighteval/tree/main/src/lighteval/tasks/extended" rel="nofollow">extended folder</a>.
Moreover, community-provided tasks can be found in the
<a href="https://github.com/huggingface/lighteval/tree/main/community_tasks" rel="nofollow">community</a> folder.
For more details on the implementation of the tasks, such as how prompts are constructed or which metrics are used, have a look at the
<a href="https://github.com/huggingface/lighteval/blob/main/src/lighteval/tasks/default_tasks.py" rel="nofollow">file</a>
implementing them.</p> <p data-svelte-h="svelte-uvvbnv">Running multiple tasks is supported, either with a comma-separated list or by specifying a file path.
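To make the format concrete, here is a small illustrative sketch (not lighteval's actual parser) that splits a comma-separated task list into its {suite}|{task}|{num_few_shot}|{truncation} fields:

```python
# Illustrative sketch only -- lighteval parses task specs internally.
def parse_task_spec(spec: str) -> dict:
    suite, task, num_few_shot, truncate = spec.split("|")
    return {
        "suite": suite,
        "task": task,
        "num_few_shot": int(num_few_shot),
        # "1" allows truncating few-shot examples when the prompt
        # exceeds the context size
        "allow_truncation": truncate == "1",
    }

specs = "leaderboard|truthfulqa:mc|0|0,leaderboard|gsm8k|3|1"
for spec in specs.split(","):
    print(parse_task_spec(spec))
```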
The file should be structured like <a href="https://github.com/huggingface/lighteval/blob/main/examples/tasks/recommended_set.txt" rel="nofollow">examples/tasks/recommended_set.txt</a>.
When specifying a path to a file, it should start with <code>./</code>.</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->lighteval accelerate \
<span class="hljs-string">"model_name=openai-community/gpt2"</span> \
./path/to/lighteval/examples/tasks/recommended_set.txt
<span class="hljs-comment"># or, e.g., "leaderboard|truthfulqa:mc|0|0,leaderboard|gsm8k|3|1"</span><!-- HTML_TAG_END --></pre></div> <h2 class="relative group"><a id="evaluate-a-model-on-one-or-more-gpus" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#evaluate-a-model-on-one-or-more-gpus"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Evaluate a model on one or more GPUs</span></h2> <h4 class="relative group"><a id="data-parallelism" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#data-parallelism"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1
0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Data parallelism</span></h4> <p data-svelte-h="svelte-y0ncq1">To evaluate a model on one or more GPUs, first create a multi-GPU config by running:</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->accelerate config<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-3b2ins">You can then evaluate a model using data parallelism on 8 GPUs as follows:</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition
duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->accelerate launch --multi_gpu --num_processes=8 -m \
lighteval accelerate \
<span class="hljs-string">"model_name=openai-community/gpt2"</span> \
<span class="hljs-string">"leaderboard|truthfulqa:mc|0|0"</span><!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1ibtfit">If set, the <code>--override_batch_size</code> argument defines the batch size per device, so the effective
batch size will be <code>override_batch_size * num_gpus</code>.</p> <h4 class="relative group"><a id="pipeline-parallelism" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#pipeline-parallelism"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Pipeline parallelism</span></h4> <p data-svelte-h="svelte-21byj1">To evaluate a model using pipeline parallelism on 2 or more GPUs, run:</p> <div class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div
class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->lighteval accelerate \
<span class="hljs-string">"model_name=openai-community/gpt2,model_parallel=True"</span> \
<span class="hljs-string">"leaderboard|truthfulqa:mc|0|0"</span><!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1px9mui">This will automatically use accelerate to distribute the model across the GPUs.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-2p0bbo">Both data and pipeline parallelism can be combined by setting
<code>model_parallel=True</code> and using accelerate to distribute the data across the
GPUs.</p></div> <h2 class="relative group"><a id="backend-configuration" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#backend-configuration"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Backend configuration</span></h2> <p data-svelte-h="svelte-1qd0o29">The <code>model-args</code> argument takes a string representing a comma-separated list of model
arguments. The arguments allowed vary depending on the backend you use (vllm or
accelerate).</p> <h3 class="relative group"><a id="accelerate" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#accelerate"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Accelerate</span></h3> <ul data-svelte-h="svelte-1khr4gh"><li><strong>pretrained</strong> (str):
HuggingFace Hub model ID name or the path to a pre-trained
model to load. This is effectively the <code>pretrained_model_name_or_path</code>
argument of <code>from_pretrained</code> in the HuggingFace <code>transformers</code> API.</li> <li><strong>tokenizer</strong> (Optional[str]): HuggingFace Hub tokenizer ID that will be
used for tokenization.</li> <li><strong>multichoice_continuations_start_space</strong> (Optional[bool]): Whether to add a
space at the start of each continuation in multichoice generation.
For example, with the context “What is the capital of France?” and the choices “Paris” and “London”,
the inputs will be tokenized as “What is the capital of France? Paris” and “What is the capital of France? London”.
True adds a space, False strips a space, and None does nothing.</li> <li><strong>subfolder</strong> (Optional[str]): The subfolder within the model repository.</li> <li><strong>revision</strong> (str): The revision of the model.</li> <li><strong>max_gen_toks</strong> (Optional[int]): The maximum number of tokens to generate.</li> <li><strong>max_length</strong> (Optional[int]): The maximum length of the generated output.</li> <li><strong>add_special_tokens</strong> (bool, optional, defaults to True): Whether to add special tokens to the input sequences.
If <code>None</code>, the default value will be set to <code>True</code> for seq2seq models (e.g. T5) and
<code>False</code> for causal models.</li> <li><strong>model_parallel</strong> (bool, optional, defaults to None):
whether to force use of the <code>accelerate</code> library to load a large
model across multiple devices.
The default, None, compares the number of processes with the number of GPUs:
if the number of processes is smaller, model parallelism is used; otherwise it is not.</li> <li><strong>dtype</strong> (Union[str, torch.dtype], optional, defaults to None):
Converts the model weights to <code>dtype</code>, if specified. Strings get
converted to <code>torch.dtype</code> objects (e.g. <code>float16</code> -> <code>torch.float16</code>).
Use <code>dtype="auto"</code> to derive the type from the model’s weights.</li> <li><strong>device</strong> (Union[int, str]): device on which to run the model.</li> <li><strong>quantization_config</strong> (Optional[BitsAndBytesConfig]): quantization
configuration for the model, manually provided to load a normally floating point
model at a quantized precision. Needed for 4-bit and 8-bit precision.</li> <li><strong>trust_remote_code</strong> (bool): Whether to trust remote code during model
loading.</li></ul> <h3 class="relative group"><a id="vllm" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#vllm"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>VLLM</span></h3> <ul data-svelte-h="svelte-r0e92d"><li><strong>pretrained</strong> (str): HuggingFace Hub model ID name or the path to a pre-trained model to load.</li> <li><strong>gpu_memory_utilization</strong> (float): The fraction of GPU memory to use.</li> <li><strong>batch_size</strong> (int): The batch size for model evaluation.</li> <li><strong>revision</strong> (str): The revision of the model.</li> <li><strong>dtype</strong> (str, None): The data type to use for the model.</li> <li><strong>tensor_parallel_size</strong> (int): The number of tensor parallel units to use.</li> <li><strong>data_parallel_size</strong> (int): The number of data parallel units to use.</li> <li><strong>max_model_length</strong> (int): The maximum sequence length of the model.</li> <li><strong>swap_space</strong> (int): The CPU swap space size (GiB) per GPU.</li> <li><strong>seed</strong> (int): The seed to use for the model.</li> <li><strong>trust_remote_code</strong> (bool): Whether to trust remote code during model loading.</li> <li><strong>use_chat_template</strong> (bool): Whether to use the chat template or not.</li> <li><strong>add_special_tokens</strong> (bool): Whether to add special tokens to the input sequences.</li> <li><strong>multichoice_continuations_start_space</strong> (bool): Whether to add a space at the start of each continuation in multichoice generation.</li> <li><strong>subfolder</strong> (Optional[str]): The subfolder within the model repository.</li></ul> <h2 class="relative group"><a id="nanotron" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#nanotron"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Nanotron</span></h2> <p data-svelte-h="svelte-1j3m91m">To evaluate a model trained with Nanotron on a single GPU, run:</p> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p data-svelte-h="svelte-xg3kaz">Nanotron models cannot be evaluated without torchrun.</p></div> <div
class="code-block relative "><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->torchrun --standalone --nnodes=1 --nproc-per-node=1 \
src/lighteval/__main__.py nanotron \
--checkpoint-config-path ../nanotron/checkpoints/10/config.yaml \
--lighteval-config-path examples/nanotron/lighteval_config_override_template.yaml<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-106lbf0">The <code>nproc-per-node</code> argument should match the data, tensor and pipeline
parallelism configured in the <code>lighteval_config_override_template.yaml</code> file.
That is: <code>nproc-per-node = data_parallelism * tensor_parallelism * pipeline_parallelism</code>.</p>
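The relationship can be checked with a line of arithmetic. The degrees below are example values; read the real ones from your lighteval config file:

```python
# Example parallelism degrees (read the real values from your lighteval config).
data_parallelism = 1
tensor_parallelism = 1
pipeline_parallelism = 1

# torchrun must launch exactly this many processes per node.
nproc_per_node = data_parallelism * tensor_parallelism * pipeline_parallelism
print(nproc_per_node)  # 1, matching --nproc-per-node=1 in the command above
```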