Evaluation Code: [SalesforceAIResearch/SFR-RAG](https://github.com/SalesforceAIResearch/SFR-RAG)

ContextualBench is a powerful evaluation framework designed to assess the performance of Large Language Models (LLMs) on contextual datasets. It provides a flexible pipeline for evaluating various LLM families across different tasks, with a focus on handling large context inputs.
> Users need to make their own assessment regarding any obligations or responsibilities under the corresponding licenses or terms and conditions pertaining to the original datasets and data.
## Features
The dataset can be loaded with the following command:

```python
from datasets import load_dataset

task = "hotpotqa"  # it can be any other supported task
dataset = load_dataset("Salesforce/ContextualBench", task, split="validation")
```