id | lastModified | tags | author | description | citation | likes | downloads | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|
pharaouk/biology_dataset_standardized_cluster_20 | 2023-10-13T02:16:53.000Z | [
"region:us"
] | pharaouk | null | null | 0 | 0 | 2023-10-13T02:16:51 | ---
dataset_info:
features: []
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "biology_dataset_standardized_cluster_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 418 | [
[
-0.033447265625,
-0.017303466796875,
0.0279541015625,
0.0267486572265625,
-0.02117919921875,
0.00914764404296875,
0.013702392578125,
-0.0178680419921875,
0.063720703125,
0.0211944580078125,
-0.04364013671875,
-0.06939697265625,
-0.045135498046875,
0.00469970... |
pharaouk/biology_dataset_standardized_cluster_21 | 2023-10-13T02:17:02.000Z | [
"region:us"
] | pharaouk | null | null | 0 | 0 | 2023-10-13T02:17:00 | ---
dataset_info:
features: []
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "biology_dataset_standardized_cluster_21"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 418 | [
[
-0.03216552734375,
-0.0230712890625,
0.025238037109375,
0.025054931640625,
-0.0225677490234375,
0.0109710693359375,
0.016082763671875,
-0.0191497802734375,
0.06378173828125,
0.0258941650390625,
-0.045654296875,
-0.0672607421875,
-0.043304443359375,
0.0046844... |
KevinGeng/Arthur_test | 2023-10-13T02:19:44.000Z | [
"region:us"
] | KevinGeng | null | null | 0 | 0 | 2023-10-13T02:19:44 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
AlignmentLab-AI/llava_v1_5_mix625k-fixed | 2023-10-13T06:02:59.000Z | [
"region:us"
] | AlignmentLab-AI | null | null | 1 | 0 | 2023-10-13T02:22:43 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01496124267578125,
-0.06036376953125,
0.0379... |
LevionB123/Hex_Snake_Data | 2023-10-13T03:28:28.000Z | [
"region:us"
] | LevionB123 | null | null | 0 | 0 | 2023-10-13T02:23:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_TFLai__bloom-560m-4bit-alpaca | 2023-10-13T02:31:53.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-13T02:31:44 | ---
pretty_name: Evaluation run of TFLai/bloom-560m-4bit-alpaca
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TFLai/bloom-560m-4bit-alpaca](https://huggingface.co/TFLai/bloom-560m-4bit-alpaca)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TFLai__bloom-560m-4bit-alpaca\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-13T02:31:40.775341](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__bloom-560m-4bit-alpaca/blob/main/results_2023-10-13T02-31-40.775341.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each one in the results and the \"latest\"\
\ split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001153523489932886,\n\
\ \"em_stderr\": 0.00034761798968570957,\n \"f1\": 0.028393456375839028,\n\
\ \"f1_stderr\": 0.0009648156202587861,\n \"acc\": 0.25213936558333583,\n\
\ \"acc_stderr\": 0.007562025280082852\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.001153523489932886,\n \"em_stderr\": 0.00034761798968570957,\n\
\ \"f1\": 0.028393456375839028,\n \"f1_stderr\": 0.0009648156202587861\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.001516300227445034,\n \
\ \"acc_stderr\": 0.001071779348549266\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5027624309392266,\n \"acc_stderr\": 0.014052271211616438\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TFLai/bloom-560m-4bit-alpaca
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_13T02_31_40.775341
path:
- '**/details_harness|drop|3_2023-10-13T02-31-40.775341.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-13T02-31-40.775341.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_13T02_31_40.775341
path:
- '**/details_harness|gsm8k|5_2023-10-13T02-31-40.775341.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-13T02-31-40.775341.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_13T02_31_40.775341
path:
- '**/details_harness|winogrande|5_2023-10-13T02-31-40.775341.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-13T02-31-40.775341.parquet'
- config_name: results
data_files:
- split: 2023_10_13T02_31_40.775341
path:
- results_2023-10-13T02-31-40.775341.parquet
- split: latest
path:
- results_2023-10-13T02-31-40.775341.parquet
---
# Dataset Card for Evaluation run of TFLai/bloom-560m-4bit-alpaca
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TFLai/bloom-560m-4bit-alpaca
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TFLai/bloom-560m-4bit-alpaca](https://huggingface.co/TFLai/bloom-560m-4bit-alpaca) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TFLai__bloom-560m-4bit-alpaca",
"harness_winogrande_5",
split="train")
```
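The per-timestamp split names listed in the configs above are the run timestamps with `-` and `:` replaced by `_`. A minimal sketch of that mapping (`run_timestamp_to_split_name` is a hypothetical helper inferred from the config listing, not part of the leaderboard tooling):

```python
def run_timestamp_to_split_name(ts: str) -> str:
    # Inferred convention: "-" and ":" in the ISO timestamp become "_";
    # the fractional seconds are kept unchanged.
    return ts.replace("-", "_").replace(":", "_")

# The run above, 2023-10-13T02:31:40.775341, maps to the split name
# 2023_10_13T02_31_40.775341 listed under each config.
print(run_timestamp_to_split_name("2023-10-13T02:31:40.775341"))
```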
## Latest results
These are the [latest results from run 2023-10-13T02:31:40.775341](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__bloom-560m-4bit-alpaca/blob/main/results_2023-10-13T02-31-40.775341.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001153523489932886,
"em_stderr": 0.00034761798968570957,
"f1": 0.028393456375839028,
"f1_stderr": 0.0009648156202587861,
"acc": 0.25213936558333583,
"acc_stderr": 0.007562025280082852
},
"harness|drop|3": {
"em": 0.001153523489932886,
"em_stderr": 0.00034761798968570957,
"f1": 0.028393456375839028,
"f1_stderr": 0.0009648156202587861
},
"harness|gsm8k|5": {
"acc": 0.001516300227445034,
"acc_stderr": 0.001071779348549266
},
"harness|winogrande|5": {
"acc": 0.5027624309392266,
"acc_stderr": 0.014052271211616438
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,263 | [
[
-0.028167724609375,
-0.0498046875,
0.016021728515625,
0.0208740234375,
-0.01241302490234375,
0.00865936279296875,
-0.022430419921875,
-0.0152435302734375,
0.034454345703125,
0.03216552734375,
-0.04998779296875,
-0.06744384765625,
-0.050750732421875,
0.007633... |
open-llm-leaderboard/details_bigscience__bloomz-560m | 2023-10-13T02:59:49.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-13T02:59:41 | ---
pretty_name: Evaluation run of bigscience/bloomz-560m
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bigscience__bloomz-560m\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-13T02:59:38.387630](https://huggingface.co/datasets/open-llm-leaderboard/details_bigscience__bloomz-560m/blob/main/results_2023-10-13T02-59-38.387630.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each one in the results and the \"latest\"\
\ split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.14523909395973153,\n\
\ \"em_stderr\": 0.003608309171282643,\n \"f1\": 0.17240142617449677,\n\
\ \"f1_stderr\": 0.0036932344433969273,\n \"acc\": 0.26558800315706393,\n\
\ \"acc_stderr\": 0.007012571320319757\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.14523909395973153,\n \"em_stderr\": 0.003608309171282643,\n\
\ \"f1\": 0.17240142617449677,\n \"f1_stderr\": 0.0036932344433969273\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5311760063141279,\n\
\ \"acc_stderr\": 0.014025142640639515\n }\n}\n```"
repo_url: https://huggingface.co/bigscience/bloomz-560m
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_13T02_59_38.387630
path:
- '**/details_harness|drop|3_2023-10-13T02-59-38.387630.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-13T02-59-38.387630.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_13T02_59_38.387630
path:
- '**/details_harness|gsm8k|5_2023-10-13T02-59-38.387630.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-13T02-59-38.387630.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_13T02_59_38.387630
path:
- '**/details_harness|winogrande|5_2023-10-13T02-59-38.387630.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-13T02-59-38.387630.parquet'
- config_name: results
data_files:
- split: 2023_10_13T02_59_38.387630
path:
- results_2023-10-13T02-59-38.387630.parquet
- split: latest
path:
- results_2023-10-13T02-59-38.387630.parquet
---
# Dataset Card for Evaluation run of bigscience/bloomz-560m
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bigscience/bloomz-560m
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bigscience__bloomz-560m",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-13T02:59:38.387630](https://huggingface.co/datasets/open-llm-leaderboard/details_bigscience__bloomz-560m/blob/main/results_2023-10-13T02-59-38.387630.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.14523909395973153,
"em_stderr": 0.003608309171282643,
"f1": 0.17240142617449677,
"f1_stderr": 0.0036932344433969273,
"acc": 0.26558800315706393,
"acc_stderr": 0.007012571320319757
},
"harness|drop|3": {
"em": 0.14523909395973153,
"em_stderr": 0.003608309171282643,
"f1": 0.17240142617449677,
"f1_stderr": 0.0036932344433969273
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5311760063141279,
"acc_stderr": 0.014025142640639515
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,102 | [
[
-0.0285491943359375,
-0.04510498046875,
0.02410888671875,
0.020843505859375,
-0.003787994384765625,
0.00662994384765625,
-0.03582763671875,
-0.01318359375,
0.029083251953125,
0.034210205078125,
-0.054412841796875,
-0.0731201171875,
-0.04534912109375,
0.01295... |
open-llm-leaderboard/details_MayaPH__opt-flan-iml-6.7b | 2023-10-13T03:06:44.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-13T03:06:36 | ---
pretty_name: Evaluation run of MayaPH/opt-flan-iml-6.7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [MayaPH/opt-flan-iml-6.7b](https://huggingface.co/MayaPH/opt-flan-iml-6.7b) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MayaPH__opt-flan-iml-6.7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-13T03:06:32.697788](https://huggingface.co/datasets/open-llm-leaderboard/details_MayaPH__opt-flan-iml-6.7b/blob/main/results_2023-10-13T03-06-32.697788.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each one in the results and the \"latest\"\
\ split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.07518875838926174,\n\
\ \"em_stderr\": 0.002700490526265294,\n \"f1\": 0.10838401845637569,\n\
\ \"f1_stderr\": 0.0028760995167941457,\n \"acc\": 0.3212312549329124,\n\
\ \"acc_stderr\": 0.006735003721960345\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.07518875838926174,\n \"em_stderr\": 0.002700490526265294,\n\
\ \"f1\": 0.10838401845637569,\n \"f1_stderr\": 0.0028760995167941457\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6424625098658248,\n\
\ \"acc_stderr\": 0.01347000744392069\n }\n}\n```"
repo_url: https://huggingface.co/MayaPH/opt-flan-iml-6.7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_13T03_06_32.697788
path:
- '**/details_harness|drop|3_2023-10-13T03-06-32.697788.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-13T03-06-32.697788.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_13T03_06_32.697788
path:
- '**/details_harness|gsm8k|5_2023-10-13T03-06-32.697788.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-13T03-06-32.697788.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_13T03_06_32.697788
path:
- '**/details_harness|winogrande|5_2023-10-13T03-06-32.697788.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-13T03-06-32.697788.parquet'
- config_name: results
data_files:
- split: 2023_10_13T03_06_32.697788
path:
- results_2023-10-13T03-06-32.697788.parquet
- split: latest
path:
- results_2023-10-13T03-06-32.697788.parquet
---
# Dataset Card for Evaluation run of MayaPH/opt-flan-iml-6.7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/MayaPH/opt-flan-iml-6.7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [MayaPH/opt-flan-iml-6.7b](https://huggingface.co/MayaPH/opt-flan-iml-6.7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MayaPH__opt-flan-iml-6.7b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-13T03:06:32.697788](https://huggingface.co/datasets/open-llm-leaderboard/details_MayaPH__opt-flan-iml-6.7b/blob/main/results_2023-10-13T03-06-32.697788.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.07518875838926174,
"em_stderr": 0.002700490526265294,
"f1": 0.10838401845637569,
"f1_stderr": 0.0028760995167941457,
"acc": 0.3212312549329124,
"acc_stderr": 0.006735003721960345
},
"harness|drop|3": {
"em": 0.07518875838926174,
"em_stderr": 0.002700490526265294,
"f1": 0.10838401845637569,
"f1_stderr": 0.0028760995167941457
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.6424625098658248,
"acc_stderr": 0.01347000744392069
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,122 | [
[
-0.036407470703125,
-0.044830322265625,
0.0085906982421875,
0.0170745849609375,
-0.01168060302734375,
0.007053375244140625,
-0.0237274169921875,
-0.018798828125,
0.0276336669921875,
0.03826904296875,
-0.05169677734375,
-0.06719970703125,
-0.04443359375,
0.01... |
open-llm-leaderboard/details_huggingtweets__gladosystem | 2023-10-13T03:18:51.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-13T03:18:43 | ---
pretty_name: Evaluation run of huggingtweets/gladosystem
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [huggingtweets/gladosystem](https://huggingface.co/huggingtweets/gladosystem)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_huggingtweets__gladosystem\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-13T03:18:40.922910](https://huggingface.co/datasets/open-llm-leaderboard/details_huggingtweets__gladosystem/blob/main/results_2023-10-13T03-18-40.922910.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each one in the results and the \"latest\"\
\ split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.010276845637583893,\n\
\ \"em_stderr\": 0.0010328242665282317,\n \"f1\": 0.014896182885906039,\n\
\ \"f1_stderr\": 0.0011273085873104653,\n \"acc\": 0.2533543804262036,\n\
\ \"acc_stderr\": 0.0070256103461651745\n },\n \"harness|drop|3\":\
\ {\n \"em\": 0.010276845637583893,\n \"em_stderr\": 0.0010328242665282317,\n\
\ \"f1\": 0.014896182885906039,\n \"f1_stderr\": 0.0011273085873104653\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5067087608524072,\n\
\ \"acc_stderr\": 0.014051220692330349\n }\n}\n```"
repo_url: https://huggingface.co/huggingtweets/gladosystem
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_13T03_18_40.922910
path:
- '**/details_harness|drop|3_2023-10-13T03-18-40.922910.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-13T03-18-40.922910.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_13T03_18_40.922910
path:
- '**/details_harness|gsm8k|5_2023-10-13T03-18-40.922910.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-13T03-18-40.922910.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_13T03_18_40.922910
path:
- '**/details_harness|winogrande|5_2023-10-13T03-18-40.922910.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-13T03-18-40.922910.parquet'
- config_name: results
data_files:
- split: 2023_10_13T03_18_40.922910
path:
- results_2023-10-13T03-18-40.922910.parquet
- split: latest
path:
- results_2023-10-13T03-18-40.922910.parquet
---
# Dataset Card for Evaluation run of huggingtweets/gladosystem
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/huggingtweets/gladosystem
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [huggingtweets/gladosystem](https://huggingface.co/huggingtweets/gladosystem) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_huggingtweets__gladosystem",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-13T03:18:40.922910](https://huggingface.co/datasets/open-llm-leaderboard/details_huggingtweets__gladosystem/blob/main/results_2023-10-13T03-18-40.922910.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.010276845637583893,
"em_stderr": 0.0010328242665282317,
"f1": 0.014896182885906039,
"f1_stderr": 0.0011273085873104653,
"acc": 0.2533543804262036,
"acc_stderr": 0.0070256103461651745
},
"harness|drop|3": {
"em": 0.010276845637583893,
"em_stderr": 0.0010328242665282317,
"f1": 0.014896182885906039,
"f1_stderr": 0.0011273085873104653
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5067087608524072,
"acc_stderr": 0.014051220692330349
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,150 | [
[
-0.03167724609375,
-0.04345703125,
0.0174102783203125,
0.0185394287109375,
-0.0176849365234375,
0.00507354736328125,
-0.027984619140625,
-0.017181396484375,
0.039031982421875,
0.034698486328125,
-0.058624267578125,
-0.06988525390625,
-0.051971435546875,
0.01... |
AlexHung29629/stack-exchange-paired-128K | 2023-10-13T05:42:06.000Z | [
"region:us"
] | AlexHung29629 | null | null | 0 | 0 | 2023-10-13T04:07:53 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 243412260
num_examples: 128000
download_size: 82603750
dataset_size: 243412260
---
# Dataset Card for "stack-exchange-paired-128K"
## Token count
llama2: 97868021
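The llama2 token count above was presumably obtained by tokenizing every `prompt`/`chosen`/`rejected` field and summing the lengths. A minimal sketch of that procedure, using a whitespace tokenizer as a stand-in for the actual Llama-2 tokenizer (which would come from `transformers.AutoTokenizer`), is:

```python
def count_tokens(records, tokenize=str.split):
    # Sum token counts over every text field of every record.
    # `tokenize` is a whitespace stand-in; for real Llama-2 counts,
    # swap in AutoTokenizer.from_pretrained(...).encode instead.
    return sum(len(tokenize(text)) for rec in records for text in rec.values())

# Toy records mirroring the prompt/chosen/rejected schema of this dataset.
sample = [{"prompt": "How do I sort a list?",
           "chosen": "Use sorted().",
           "rejected": "Loop manually."}]
print(count_tokens(sample))  # 10 with the whitespace stand-in
```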
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 477 | [
[
-0.02288818359375,
-0.00493621826171875,
0.0078887939453125,
0.045379638671875,
-0.047210693359375,
0.01947021484375,
0.0224609375,
0.0015077590942382812,
0.0667724609375,
0.0372314453125,
-0.040771484375,
-0.04754638671875,
-0.0489501953125,
-0.004924774169... |
open-llm-leaderboard/details_RoversX__llama-2-7b-hf-small-shards-Samantha-V1-SFT | 2023-10-13T04:33:40.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-13T04:33:32 | ---
pretty_name: Evaluation run of RoversX/llama-2-7b-hf-small-shards-Samantha-V1-SFT
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [RoversX/llama-2-7b-hf-small-shards-Samantha-V1-SFT](https://huggingface.co/RoversX/llama-2-7b-hf-small-shards-Samantha-V1-SFT)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_RoversX__llama-2-7b-hf-small-shards-Samantha-V1-SFT\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-13T04:33:28.538192](https://huggingface.co/datasets/open-llm-leaderboard/details_RoversX__llama-2-7b-hf-small-shards-Samantha-V1-SFT/blob/main/results_2023-10-13T04-33-28.538192.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0012583892617449664,\n\
\ \"em_stderr\": 0.00036305608931188796,\n \"f1\": 0.052196937919463185,\n\
\ \"f1_stderr\": 0.0012732861194066877,\n \"acc\": 0.4008241516587451,\n\
\ \"acc_stderr\": 0.009542578755221624\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0012583892617449664,\n \"em_stderr\": 0.00036305608931188796,\n\
\ \"f1\": 0.052196937919463185,\n \"f1_stderr\": 0.0012732861194066877\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06368460955269144,\n \
\ \"acc_stderr\": 0.006726213078805692\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7379636937647988,\n \"acc_stderr\": 0.012358944431637557\n\
\ }\n}\n```"
repo_url: https://huggingface.co/RoversX/llama-2-7b-hf-small-shards-Samantha-V1-SFT
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_13T04_33_28.538192
path:
- '**/details_harness|drop|3_2023-10-13T04-33-28.538192.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-13T04-33-28.538192.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_13T04_33_28.538192
path:
- '**/details_harness|gsm8k|5_2023-10-13T04-33-28.538192.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-13T04-33-28.538192.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_13T04_33_28.538192
path:
- '**/details_harness|winogrande|5_2023-10-13T04-33-28.538192.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-13T04-33-28.538192.parquet'
- config_name: results
data_files:
- split: 2023_10_13T04_33_28.538192
path:
- results_2023-10-13T04-33-28.538192.parquet
- split: latest
path:
- results_2023-10-13T04-33-28.538192.parquet
---
# Dataset Card for Evaluation run of RoversX/llama-2-7b-hf-small-shards-Samantha-V1-SFT
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/RoversX/llama-2-7b-hf-small-shards-Samantha-V1-SFT
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [RoversX/llama-2-7b-hf-small-shards-Samantha-V1-SFT](https://huggingface.co/RoversX/llama-2-7b-hf-small-shards-Samantha-V1-SFT) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_RoversX__llama-2-7b-hf-small-shards-Samantha-V1-SFT",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-13T04:33:28.538192](https://huggingface.co/datasets/open-llm-leaderboard/details_RoversX__llama-2-7b-hf-small-shards-Samantha-V1-SFT/blob/main/results_2023-10-13T04-33-28.538192.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0012583892617449664,
"em_stderr": 0.00036305608931188796,
"f1": 0.052196937919463185,
"f1_stderr": 0.0012732861194066877,
"acc": 0.4008241516587451,
"acc_stderr": 0.009542578755221624
},
"harness|drop|3": {
"em": 0.0012583892617449664,
"em_stderr": 0.00036305608931188796,
"f1": 0.052196937919463185,
"f1_stderr": 0.0012732861194066877
},
"harness|gsm8k|5": {
"acc": 0.06368460955269144,
"acc_stderr": 0.006726213078805692
},
"harness|winogrande|5": {
"acc": 0.7379636937647988,
"acc_stderr": 0.012358944431637557
}
}
```
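A small sanity check on the numbers above: the aggregated `acc` under `"all"` appears to be the unweighted mean of the per-task accuracies (gsm8k and winogrande here). In plain Python:

```python
# Per-task accuracies copied from the results block above.
task_acc = {
    "harness|gsm8k|5": 0.06368460955269144,
    "harness|winogrande|5": 0.7379636937647988,
}

# The "all" section reports the unweighted mean across accuracy-reporting tasks.
mean_acc = sum(task_acc.values()) / len(task_acc)
print(mean_acc)  # ≈ 0.4008241516587451, the "all" acc value above
```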
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,527 | [
[
-0.0260009765625,
-0.04840087890625,
0.0240325927734375,
0.0137481689453125,
-0.016357421875,
0.018707275390625,
-0.0169219970703125,
-0.00945281982421875,
0.036468505859375,
0.048492431640625,
-0.057281494140625,
-0.070068359375,
-0.0526123046875,
0.0160522... |
nlplabtdtu/sts15-vi | 2023-10-13T05:30:36.000Z | [
"region:us"
] | nlplabtdtu | null | null | 0 | 0 | 2023-10-13T05:30:24 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
wecover/MKQA_NQ | 2023-10-13T05:41:29.000Z | [
"region:us"
] | wecover | null | null | 0 | 0 | 2023-10-13T05:41:29 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
pytorch-survival/gene_annotations | 2023-10-13T06:02:40.000Z | [
"region:us"
] | pytorch-survival | null | null | 0 | 0 | 2023-10-13T05:59:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
hrangel/MexLot2 | 2023-10-13T06:04:10.000Z | [
"region:us"
] | hrangel | null | null | 0 | 0 | 2023-10-13T06:04:04 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4162083.0
num_examples: 33
download_size: 4161878
dataset_size: 4162083.0
---
# Dataset Card for "MexLot2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 470 | [
[
-0.037261962890625,
-0.006145477294921875,
0.0196685791015625,
0.0169830322265625,
-0.0121612548828125,
0.008544921875,
0.0364990234375,
-0.01139068603515625,
0.0577392578125,
0.037384033203125,
-0.060302734375,
-0.03564453125,
-0.0360107421875,
-0.022811889... |
autoevaluate/autoeval-eval-acronym_identification-default-3cc14e-94828146206 | 2023-10-13T06:17:48.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-13T06:17:44 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
someet/w2 | 2023-10-13T06:44:55.000Z | [
"region:us"
] | someet | null | null | 0 | 0 | 2023-10-13T06:44:55 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_Qwen__Qwen-14B | 2023-10-13T07:08:16.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-13T07:08:03 | ---
pretty_name: Evaluation run of Qwen/Qwen-14B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Qwen/Qwen-14B](https://huggingface.co/Qwen/Qwen-14B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 61 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Qwen__Qwen-14B_public\"\
,\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\
\nThese are the [latest results from run 2023-10-13T07:07:43.344774](https://huggingface.co/datasets/open-llm-leaderboard/details_Qwen__Qwen-14B_public/blob/main/results_2023-10-13T07-07-43.344774.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6741274333689082,\n\
\ \"acc_stderr\": 0.03234188422888031,\n \"acc_norm\": 0.6782046919453042,\n\
\ \"acc_norm_stderr\": 0.032320246904756274,\n \"mc1\": 0.34394124847001223,\n\
\ \"mc1_stderr\": 0.016629087514276785,\n \"mc2\": 0.49432944608876894,\n\
\ \"mc2_stderr\": 0.015023548526740723\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5366894197952219,\n \"acc_stderr\": 0.014572000527756998,\n\
\ \"acc_norm\": 0.5827645051194539,\n \"acc_norm_stderr\": 0.014409825518403079\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6453893646683927,\n\
\ \"acc_stderr\": 0.004774174590205148,\n \"acc_norm\": 0.8398725353515236,\n\
\ \"acc_norm_stderr\": 0.003659747476241057\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n\
\ \"acc_stderr\": 0.04218506215368879,\n \"acc_norm\": 0.6074074074074074,\n\
\ \"acc_norm_stderr\": 0.04218506215368879\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.7171052631578947,\n \"acc_stderr\": 0.03665349695640767,\n\
\ \"acc_norm\": 0.7171052631578947,\n \"acc_norm_stderr\": 0.03665349695640767\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n\
\ \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \
\ \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.027943219989337145,\n\
\ \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.027943219989337145\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n\
\ \"acc_stderr\": 0.03437079344106134,\n \"acc_norm\": 0.7847222222222222,\n\
\ \"acc_norm_stderr\": 0.03437079344106134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.61,\n\
\ \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7109826589595376,\n\
\ \"acc_stderr\": 0.03456425745086999,\n \"acc_norm\": 0.7109826589595376,\n\
\ \"acc_norm_stderr\": 0.03456425745086999\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.46078431372549017,\n \"acc_stderr\": 0.049598599663841815,\n\
\ \"acc_norm\": 0.46078431372549017,\n \"acc_norm_stderr\": 0.049598599663841815\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.77,\n \"acc_stderr\": 0.04229525846816507,\n \"acc_norm\": 0.77,\n\
\ \"acc_norm_stderr\": 0.04229525846816507\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.6212765957446809,\n \"acc_stderr\": 0.03170995606040655,\n\
\ \"acc_norm\": 0.6212765957446809,\n \"acc_norm_stderr\": 0.03170995606040655\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.047028804320496165,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.047028804320496165\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.6551724137931034,\n \"acc_stderr\": 0.039609335494512087,\n\
\ \"acc_norm\": 0.6551724137931034,\n \"acc_norm_stderr\": 0.039609335494512087\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.5211640211640212,\n \"acc_stderr\": 0.025728230952130726,\n \"\
acc_norm\": 0.5211640211640212,\n \"acc_norm_stderr\": 0.025728230952130726\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.49206349206349204,\n\
\ \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.49206349206349204,\n\
\ \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.832258064516129,\n\
\ \"acc_stderr\": 0.021255464065371318,\n \"acc_norm\": 0.832258064516129,\n\
\ \"acc_norm_stderr\": 0.021255464065371318\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.645320197044335,\n \"acc_stderr\": 0.0336612448905145,\n\
\ \"acc_norm\": 0.645320197044335,\n \"acc_norm_stderr\": 0.0336612448905145\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\"\
: 0.74,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.8303030303030303,\n \"acc_stderr\": 0.029311188674983116,\n\
\ \"acc_norm\": 0.8303030303030303,\n \"acc_norm_stderr\": 0.029311188674983116\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.8434343434343434,\n \"acc_stderr\": 0.025890520358141454,\n \"\
acc_norm\": 0.8434343434343434,\n \"acc_norm_stderr\": 0.025890520358141454\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9222797927461139,\n \"acc_stderr\": 0.019321805557223164,\n\
\ \"acc_norm\": 0.9222797927461139,\n \"acc_norm_stderr\": 0.019321805557223164\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n\
\ \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.37407407407407406,\n \"acc_stderr\": 0.029502861128955286,\n \
\ \"acc_norm\": 0.37407407407407406,\n \"acc_norm_stderr\": 0.029502861128955286\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7436974789915967,\n \"acc_stderr\": 0.02835962087053395,\n \
\ \"acc_norm\": 0.7436974789915967,\n \"acc_norm_stderr\": 0.02835962087053395\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.41721854304635764,\n \"acc_stderr\": 0.0402614149763461,\n \"\
acc_norm\": 0.41721854304635764,\n \"acc_norm_stderr\": 0.0402614149763461\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8458715596330275,\n \"acc_stderr\": 0.0154808268653743,\n \"acc_norm\"\
: 0.8458715596330275,\n \"acc_norm_stderr\": 0.0154808268653743\n },\n\
\ \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5601851851851852,\n\
\ \"acc_stderr\": 0.033851779760448106,\n \"acc_norm\": 0.5601851851851852,\n\
\ \"acc_norm_stderr\": 0.033851779760448106\n },\n \"harness|hendrycksTest-high_school_us_history|5\"\
: {\n \"acc\": 0.8186274509803921,\n \"acc_stderr\": 0.02704462171947407,\n\
\ \"acc_norm\": 0.8186274509803921,\n \"acc_norm_stderr\": 0.02704462171947407\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8227848101265823,\n \"acc_stderr\": 0.024856364184503217,\n \
\ \"acc_norm\": 0.8227848101265823,\n \"acc_norm_stderr\": 0.024856364184503217\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7399103139013453,\n\
\ \"acc_stderr\": 0.029442495585857473,\n \"acc_norm\": 0.7399103139013453,\n\
\ \"acc_norm_stderr\": 0.029442495585857473\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n\
\ \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8347107438016529,\n \"acc_stderr\": 0.03390780612972776,\n \"\
acc_norm\": 0.8347107438016529,\n \"acc_norm_stderr\": 0.03390780612972776\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.03957835471980981,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.03957835471980981\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7423312883435583,\n \"acc_stderr\": 0.03436150827846917,\n\
\ \"acc_norm\": 0.7423312883435583,\n \"acc_norm_stderr\": 0.03436150827846917\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6160714285714286,\n\
\ \"acc_stderr\": 0.046161430750285455,\n \"acc_norm\": 0.6160714285714286,\n\
\ \"acc_norm_stderr\": 0.046161430750285455\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n\
\ \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.020588491316092375,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.020588491316092375\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8403575989782887,\n\
\ \"acc_stderr\": 0.013097934513263005,\n \"acc_norm\": 0.8403575989782887,\n\
\ \"acc_norm_stderr\": 0.013097934513263005\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7630057803468208,\n \"acc_stderr\": 0.02289408248992599,\n\
\ \"acc_norm\": 0.7630057803468208,\n \"acc_norm_stderr\": 0.02289408248992599\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.423463687150838,\n\
\ \"acc_stderr\": 0.016525425898773507,\n \"acc_norm\": 0.423463687150838,\n\
\ \"acc_norm_stderr\": 0.016525425898773507\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.024954184324879912,\n\
\ \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.024954184324879912\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n\
\ \"acc_stderr\": 0.02549425935069491,\n \"acc_norm\": 0.7202572347266881,\n\
\ \"acc_norm_stderr\": 0.02549425935069491\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.025407197798890165,\n\
\ \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.025407197798890165\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4645390070921986,\n \"acc_stderr\": 0.02975238965742705,\n \
\ \"acc_norm\": 0.4645390070921986,\n \"acc_norm_stderr\": 0.02975238965742705\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.48826597131681876,\n\
\ \"acc_stderr\": 0.012766719019686724,\n \"acc_norm\": 0.48826597131681876,\n\
\ \"acc_norm_stderr\": 0.012766719019686724\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6213235294117647,\n \"acc_stderr\": 0.02946513363977613,\n\
\ \"acc_norm\": 0.6213235294117647,\n \"acc_norm_stderr\": 0.02946513363977613\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6977124183006536,\n \"acc_stderr\": 0.018579232711113877,\n \
\ \"acc_norm\": 0.6977124183006536,\n \"acc_norm_stderr\": 0.018579232711113877\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302505,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302505\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n\
\ \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8656716417910447,\n\
\ \"acc_stderr\": 0.024112678240900798,\n \"acc_norm\": 0.8656716417910447,\n\
\ \"acc_norm_stderr\": 0.024112678240900798\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197769,\n \
\ \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197769\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n\
\ \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n\
\ \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727682,\n\
\ \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727682\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.34394124847001223,\n\
\ \"mc1_stderr\": 0.016629087514276785,\n \"mc2\": 0.49432944608876894,\n\
\ \"mc2_stderr\": 0.015023548526740723\n }\n}\n```"
repo_url: https://huggingface.co/Qwen/Qwen-14B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|arc:challenge|25_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hellaswag|10_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-10-13T07-07-43.344774.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-10-13T07-07-43.344774.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-13T07-07-43.344774.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-10-13T07-07-43.344774.parquet'
- config_name: results
data_files:
- split: 2023_10_13T07_07_43.344774
path:
- results_2023-10-13T07-07-43.344774.parquet
- split: latest
path:
- results_2023-10-13T07-07-43.344774.parquet
---
# Dataset Card for Evaluation run of Qwen/Qwen-14B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Qwen/Qwen-14B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Qwen/Qwen-14B](https://huggingface.co/Qwen/Qwen-14B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Qwen__Qwen-14B_public",
"harness_truthfulqa_mc_0",
split="train")
```
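The config names above follow a regular pattern derived from the harness task identifier: the `|`, `:`, and `-` separators are replaced with underscores, the name is prefixed with `harness_`, and the few-shot count is appended. A minimal helper (hypothetical, for illustration only; not part of the leaderboard tooling) to build a config name from a task id:

```python
def harness_config_name(task: str, num_fewshot: int) -> str:
    """Build the dataset config name for a harness task.

    E.g. "hendrycksTest-abstract_algebra" with 5 shots becomes
    "harness_hendrycksTest_abstract_algebra_5", and "truthfulqa:mc"
    with 0 shots becomes "harness_truthfulqa_mc_0".
    """
    # Normalize the separators used in harness task ids to underscores.
    sanitized = task.replace(":", "_").replace("-", "_")
    return f"harness_{sanitized}_{num_fewshot}"
```

This can be handy for iterating over all 57 MMLU subtasks (`hendrycksTest-*`) without hard-coding each config name.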
## Latest results
These are the [latest results from run 2023-10-13T07:07:43.344774](https://huggingface.co/datasets/open-llm-leaderboard/details_Qwen__Qwen-14B_public/blob/main/results_2023-10-13T07-07-43.344774.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6741274333689082,
"acc_stderr": 0.03234188422888031,
"acc_norm": 0.6782046919453042,
"acc_norm_stderr": 0.032320246904756274,
"mc1": 0.34394124847001223,
"mc1_stderr": 0.016629087514276785,
"mc2": 0.49432944608876894,
"mc2_stderr": 0.015023548526740723
},
"harness|arc:challenge|25": {
"acc": 0.5366894197952219,
"acc_stderr": 0.014572000527756998,
"acc_norm": 0.5827645051194539,
"acc_norm_stderr": 0.014409825518403079
},
"harness|hellaswag|10": {
"acc": 0.6453893646683927,
"acc_stderr": 0.004774174590205148,
"acc_norm": 0.8398725353515236,
"acc_norm_stderr": 0.003659747476241057
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.04218506215368879,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.04218506215368879
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7171052631578947,
"acc_stderr": 0.03665349695640767,
"acc_norm": 0.7171052631578947,
"acc_norm_stderr": 0.03665349695640767
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.74,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.74,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7094339622641509,
"acc_stderr": 0.027943219989337145,
"acc_norm": 0.7094339622641509,
"acc_norm_stderr": 0.027943219989337145
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7847222222222222,
"acc_stderr": 0.03437079344106134,
"acc_norm": 0.7847222222222222,
"acc_norm_stderr": 0.03437079344106134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7109826589595376,
"acc_stderr": 0.03456425745086999,
"acc_norm": 0.7109826589595376,
"acc_norm_stderr": 0.03456425745086999
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.46078431372549017,
"acc_stderr": 0.049598599663841815,
"acc_norm": 0.46078431372549017,
"acc_norm_stderr": 0.049598599663841815
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6212765957446809,
"acc_stderr": 0.03170995606040655,
"acc_norm": 0.6212765957446809,
"acc_norm_stderr": 0.03170995606040655
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.047028804320496165,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.047028804320496165
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6551724137931034,
"acc_stderr": 0.039609335494512087,
"acc_norm": 0.6551724137931034,
"acc_norm_stderr": 0.039609335494512087
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.5211640211640212,
"acc_stderr": 0.025728230952130726,
"acc_norm": 0.5211640211640212,
"acc_norm_stderr": 0.025728230952130726
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.49206349206349204,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.49206349206349204,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.832258064516129,
"acc_stderr": 0.021255464065371318,
"acc_norm": 0.832258064516129,
"acc_norm_stderr": 0.021255464065371318
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.645320197044335,
"acc_stderr": 0.0336612448905145,
"acc_norm": 0.645320197044335,
"acc_norm_stderr": 0.0336612448905145
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.74,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.74,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8303030303030303,
"acc_stderr": 0.029311188674983116,
"acc_norm": 0.8303030303030303,
"acc_norm_stderr": 0.029311188674983116
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8434343434343434,
"acc_stderr": 0.025890520358141454,
"acc_norm": 0.8434343434343434,
"acc_norm_stderr": 0.025890520358141454
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9222797927461139,
"acc_stderr": 0.019321805557223164,
"acc_norm": 0.9222797927461139,
"acc_norm_stderr": 0.019321805557223164
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37407407407407406,
"acc_stderr": 0.029502861128955286,
"acc_norm": 0.37407407407407406,
"acc_norm_stderr": 0.029502861128955286
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7436974789915967,
"acc_stderr": 0.02835962087053395,
"acc_norm": 0.7436974789915967,
"acc_norm_stderr": 0.02835962087053395
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.41721854304635764,
"acc_stderr": 0.0402614149763461,
"acc_norm": 0.41721854304635764,
"acc_norm_stderr": 0.0402614149763461
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.0154808268653743,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.0154808268653743
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5601851851851852,
"acc_stderr": 0.033851779760448106,
"acc_norm": 0.5601851851851852,
"acc_norm_stderr": 0.033851779760448106
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8186274509803921,
"acc_stderr": 0.02704462171947407,
"acc_norm": 0.8186274509803921,
"acc_norm_stderr": 0.02704462171947407
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8227848101265823,
"acc_stderr": 0.024856364184503217,
"acc_norm": 0.8227848101265823,
"acc_norm_stderr": 0.024856364184503217
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7399103139013453,
"acc_stderr": 0.029442495585857473,
"acc_norm": 0.7399103139013453,
"acc_norm_stderr": 0.029442495585857473
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8347107438016529,
"acc_stderr": 0.03390780612972776,
"acc_norm": 0.8347107438016529,
"acc_norm_stderr": 0.03390780612972776
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.03957835471980981,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.03957835471980981
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7423312883435583,
"acc_stderr": 0.03436150827846917,
"acc_norm": 0.7423312883435583,
"acc_norm_stderr": 0.03436150827846917
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6160714285714286,
"acc_stderr": 0.046161430750285455,
"acc_norm": 0.6160714285714286,
"acc_norm_stderr": 0.046161430750285455
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092375,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092375
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8403575989782887,
"acc_stderr": 0.013097934513263005,
"acc_norm": 0.8403575989782887,
"acc_norm_stderr": 0.013097934513263005
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7630057803468208,
"acc_stderr": 0.02289408248992599,
"acc_norm": 0.7630057803468208,
"acc_norm_stderr": 0.02289408248992599
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.423463687150838,
"acc_stderr": 0.016525425898773507,
"acc_norm": 0.423463687150838,
"acc_norm_stderr": 0.016525425898773507
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.024954184324879912,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.024954184324879912
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.02549425935069491,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.02549425935069491
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.025407197798890165,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.025407197798890165
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4645390070921986,
"acc_stderr": 0.02975238965742705,
"acc_norm": 0.4645390070921986,
"acc_norm_stderr": 0.02975238965742705
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.48826597131681876,
"acc_stderr": 0.012766719019686724,
"acc_norm": 0.48826597131681876,
"acc_norm_stderr": 0.012766719019686724
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6213235294117647,
"acc_stderr": 0.02946513363977613,
"acc_norm": 0.6213235294117647,
"acc_norm_stderr": 0.02946513363977613
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6977124183006536,
"acc_stderr": 0.018579232711113877,
"acc_norm": 0.6977124183006536,
"acc_norm_stderr": 0.018579232711113877
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302505,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302505
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.02797982353874455,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.02797982353874455
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8656716417910447,
"acc_stderr": 0.024112678240900798,
"acc_norm": 0.8656716417910447,
"acc_norm_stderr": 0.024112678240900798
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197769,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197769
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.038823108508905954,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.038823108508905954
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727682,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727682
},
"harness|truthfulqa:mc|0": {
"mc1": 0.34394124847001223,
"mc1_stderr": 0.016629087514276785,
"mc2": 0.49432944608876894,
"mc2_stderr": 0.015023548526740723
}
}
```
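Each per-task entry above has the same shape (`acc`, `acc_stderr`, `acc_norm`, `acc_norm_stderr`), so aggregate metrics can be recomputed from the JSON directly. A minimal sketch, using a hypothetical two-task excerpt rather than the full results:

```python
# Hypothetical two-task excerpt of a results dict shaped like the JSON above.
results = {
    "harness|hendrycksTest-virology|5": {"acc_norm": 0.536144578313253},
    "harness|hendrycksTest-world_religions|5": {"acc_norm": 0.8245614035087719},
}

# Macro-average acc_norm over the hendrycksTest tasks present in the excerpt.
scores = [v["acc_norm"] for k, v in results.items()
          if k.startswith("harness|hendrycksTest-")]
macro_avg = sum(scores) / len(scores)
print(round(macro_avg, 4))  # 0.6804
```

The leaderboard reports the per-task numbers themselves; the averaging here only illustrates how the uniform structure can be consumed.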
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 64,822 | [
[
-0.049285888671875,
-0.056488037109375,
0.018951416015625,
0.01434326171875,
-0.01080322265625,
-0.0052032470703125,
0.00273895263671875,
-0.01546478271484375,
0.0377197265625,
-0.003322601318359375,
-0.0357666015625,
-0.0496826171875,
-0.029296875,
0.017440... |
melvindave/embedded_faqs_medicare | 2023-10-13T07:08:08.000Z | [
"region:us"
] | melvindave | null | null | 0 | 0 | 2023-10-13T07:08:08 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
seonglae/chroma_psgs_w100 | 2023-10-23T07:30:30.000Z | [
"size_categories:100K<n<1M",
"chroma",
"chromadb",
"wikipedia",
"dpr",
"region:us"
] | seonglae | null | null | 0 | 0 | 2023-10-13T07:26:15 | ---
tags:
- chroma
- chromadb
- wikipedia
- dpr
pretty_name: Chroma psgs_w100 subset NQ vectors
size_categories:
- 100K<n<1M
---
The DPR-encoded Wikipedia psgs_w100 dataset is stored in ChromaDB folder format.
Only the wiki subset is stored; the full dataset is [here]() | 268 | [
[
-0.050750732421875,
-0.01389312744140625,
0.0260162353515625,
0.00884246826171875,
-0.01837158203125,
0.00891876220703125,
-0.029266357421875,
0.003246307373046875,
0.025054931640625,
0.033050537109375,
-0.0689697265625,
-0.037811279296875,
-0.03326416015625,
... |
MThonar/mk_scorpion | 2023-10-13T08:53:33.000Z | [
"region:us"
] | MThonar | null | null | 0 | 0 | 2023-10-13T08:38:35 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
haonanqqq/AgriSFT | 2023-10-13T09:34:16.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"license:apache-2.0",
"region:us"
] | haonanqqq | null | null | 0 | 0 | 2023-10-13T09:22:21 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
- text2text-generation
- text-generation
size_categories:
- 10K<n<100K
---
## Dataset Description
This is an agricultural instruction-following dataset built upon the Agricultural-dataset. Since the Agricultural-dataset is somewhat messy and contains a significant amount of content related to India, this dataset is also not entirely clean. A clean version will be uploaded in the future.
## Construction Method
This dataset was created with gpt-3.5-turbo. | 665 | [
[
-0.01418304443359375,
-0.051177978515625,
-0.00402069091796875,
0.019683837890625,
-0.0289154052734375,
-0.015960693359375,
0.00783538818359375,
-0.0206298828125,
0.00682830810546875,
0.0234222412109375,
-0.028411865234375,
-0.05303955078125,
-0.066650390625,
... |
autoevaluate/autoeval-eval-ade_corpus_v2-Ade_corpus_v2_classification-668b00-94859146211 | 2023-10-13T09:32:24.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-13T09:32:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
jonas9983/aquarium | 2023-10-13T10:01:43.000Z | [
"region:us"
] | jonas9983 | null | null | 0 | 0 | 2023-10-13T09:41:03 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
open-llm-leaderboard/details_OpenBuddy__openbuddy-openllama-13b-v7-fp16 | 2023-10-14T17:51:36.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-13T09:46:56 | ---
pretty_name: Evaluation run of OpenBuddy/openbuddy-openllama-13b-v7-fp16
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [OpenBuddy/openbuddy-openllama-13b-v7-fp16](https://huggingface.co/OpenBuddy/openbuddy-openllama-13b-v7-fp16)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-openllama-13b-v7-fp16\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-14T17:51:28.265681](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-openllama-13b-v7-fp16/blob/main/results_2023-10-14T17-51-28.265681.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.13496224832214765,\n\
\ \"em_stderr\": 0.00349915623734624,\n \"f1\": 0.19493917785234854,\n\
\ \"f1_stderr\": 0.0036402036609824453,\n \"acc\": 0.39774068872582313,\n\
\ \"acc_stderr\": 0.010563523906790405\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.13496224832214765,\n \"em_stderr\": 0.00349915623734624,\n\
\ \"f1\": 0.19493917785234854,\n \"f1_stderr\": 0.0036402036609824453\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09855951478392722,\n \
\ \"acc_stderr\": 0.008210320350946331\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.696921862667719,\n \"acc_stderr\": 0.012916727462634477\n\
\ }\n}\n```"
repo_url: https://huggingface.co/OpenBuddy/openbuddy-openllama-13b-v7-fp16
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_13T09_46_52.076737
path:
- '**/details_harness|drop|3_2023-10-13T09-46-52.076737.parquet'
- split: 2023_10_14T17_51_28.265681
path:
- '**/details_harness|drop|3_2023-10-14T17-51-28.265681.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-14T17-51-28.265681.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_13T09_46_52.076737
path:
- '**/details_harness|gsm8k|5_2023-10-13T09-46-52.076737.parquet'
- split: 2023_10_14T17_51_28.265681
path:
- '**/details_harness|gsm8k|5_2023-10-14T17-51-28.265681.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-14T17-51-28.265681.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_13T09_46_52.076737
path:
- '**/details_harness|winogrande|5_2023-10-13T09-46-52.076737.parquet'
- split: 2023_10_14T17_51_28.265681
path:
- '**/details_harness|winogrande|5_2023-10-14T17-51-28.265681.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-14T17-51-28.265681.parquet'
- config_name: results
data_files:
- split: 2023_10_13T09_46_52.076737
path:
- results_2023-10-13T09-46-52.076737.parquet
- split: 2023_10_14T17_51_28.265681
path:
- results_2023-10-14T17-51-28.265681.parquet
- split: latest
path:
- results_2023-10-14T17-51-28.265681.parquet
---
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-openllama-13b-v7-fp16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/OpenBuddy/openbuddy-openllama-13b-v7-fp16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-openllama-13b-v7-fp16](https://huggingface.co/OpenBuddy/openbuddy-openllama-13b-v7-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-openllama-13b-v7-fp16",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-14T17:51:28.265681](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-openllama-13b-v7-fp16/blob/main/results_2023-10-14T17-51-28.265681.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.13496224832214765,
"em_stderr": 0.00349915623734624,
"f1": 0.19493917785234854,
"f1_stderr": 0.0036402036609824453,
"acc": 0.39774068872582313,
"acc_stderr": 0.010563523906790405
},
"harness|drop|3": {
"em": 0.13496224832214765,
"em_stderr": 0.00349915623734624,
"f1": 0.19493917785234854,
"f1_stderr": 0.0036402036609824453
},
"harness|gsm8k|5": {
"acc": 0.09855951478392722,
"acc_stderr": 0.008210320350946331
},
"harness|winogrande|5": {
"acc": 0.696921862667719,
"acc_stderr": 0.012916727462634477
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,850 | [
[
-0.030120849609375,
-0.055999755859375,
0.0150146484375,
0.0184173583984375,
-0.006877899169921875,
0.00424957275390625,
-0.03515625,
-0.012908935546875,
0.0271453857421875,
0.03594970703125,
-0.04229736328125,
-0.0712890625,
-0.04119873046875,
0.00411224365... |
chrishocb/sports_classification_100 | 2023-10-13T09:50:02.000Z | [
"region:us"
] | chrishocb | null | null | 0 | 0 | 2023-10-13T09:50:02 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
temasarkisov/EsportLogosV2_processed_V3 | 2023-10-13T10:13:50.000Z | [
"region:us"
] | temasarkisov | null | null | 0 | 0 | 2023-10-13T10:13:47 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4563348.0
num_examples: 73
download_size: 4560668
dataset_size: 4563348.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "EsportLogosV2_processed_V3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 489 | [
[
-0.022857666015625,
-0.01284027099609375,
0.01788330078125,
0.024932861328125,
-0.0230865478515625,
0.0010061264038085938,
0.0211334228515625,
-0.0272216796875,
0.058624267578125,
0.04876708984375,
-0.0675048828125,
-0.049896240234375,
-0.04376220703125,
-0.... |
uiuiuiui8/gura | 2023-10-13T10:34:57.000Z | [
"region:us"
] | uiuiuiui8 | null | null | 0 | 0 | 2023-10-13T10:34:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
mahdi134/4 | 2023-10-13T10:39:52.000Z | [
"region:us"
] | mahdi134 | null | null | 0 | 0 | 2023-10-13T10:39:52 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
open-llm-leaderboard/details_YeungNLP__firefly-bloom-2b6-v2 | 2023-10-13T11:51:54.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-13T11:51:46 | ---
pretty_name: Evaluation run of YeungNLP/firefly-bloom-2b6-v2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [YeungNLP/firefly-bloom-2b6-v2](https://huggingface.co/YeungNLP/firefly-bloom-2b6-v2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_YeungNLP__firefly-bloom-2b6-v2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-13T11:51:41.999066](https://huggingface.co/datasets/open-llm-leaderboard/details_YeungNLP__firefly-bloom-2b6-v2/blob/main/results_2023-10-13T11-51-41.999066.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.08630453020134228,\n\
\ \"em_stderr\": 0.002875790094905939,\n \"f1\": 0.1275723573825503,\n\
\ \"f1_stderr\": 0.00310355978869451,\n \"acc\": 0.2825940222825524,\n\
\ \"acc_stderr\": 0.008796871542302145\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.08630453020134228,\n \"em_stderr\": 0.002875790094905939,\n\
\ \"f1\": 0.1275723573825503,\n \"f1_stderr\": 0.00310355978869451\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.017437452615617893,\n \
\ \"acc_stderr\": 0.003605486867998265\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.5477505919494869,\n \"acc_stderr\": 0.013988256216606024\n\
\ }\n}\n```"
repo_url: https://huggingface.co/YeungNLP/firefly-bloom-2b6-v2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_13T11_51_41.999066
path:
- '**/details_harness|drop|3_2023-10-13T11-51-41.999066.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-13T11-51-41.999066.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_13T11_51_41.999066
path:
- '**/details_harness|gsm8k|5_2023-10-13T11-51-41.999066.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-13T11-51-41.999066.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_13T11_51_41.999066
path:
- '**/details_harness|winogrande|5_2023-10-13T11-51-41.999066.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-13T11-51-41.999066.parquet'
- config_name: results
data_files:
- split: 2023_10_13T11_51_41.999066
path:
- results_2023-10-13T11-51-41.999066.parquet
- split: latest
path:
- results_2023-10-13T11-51-41.999066.parquet
---
# Dataset Card for Evaluation run of YeungNLP/firefly-bloom-2b6-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/YeungNLP/firefly-bloom-2b6-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [YeungNLP/firefly-bloom-2b6-v2](https://huggingface.co/YeungNLP/firefly-bloom-2b6-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_YeungNLP__firefly-bloom-2b6-v2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-13T11:51:41.999066](https://huggingface.co/datasets/open-llm-leaderboard/details_YeungNLP__firefly-bloom-2b6-v2/blob/main/results_2023-10-13T11-51-41.999066.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.08630453020134228,
"em_stderr": 0.002875790094905939,
"f1": 0.1275723573825503,
"f1_stderr": 0.00310355978869451,
"acc": 0.2825940222825524,
"acc_stderr": 0.008796871542302145
},
"harness|drop|3": {
"em": 0.08630453020134228,
"em_stderr": 0.002875790094905939,
"f1": 0.1275723573825503,
"f1_stderr": 0.00310355978869451
},
"harness|gsm8k|5": {
"acc": 0.017437452615617893,
"acc_stderr": 0.003605486867998265
},
"harness|winogrande|5": {
"acc": 0.5477505919494869,
"acc_stderr": 0.013988256216606024
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,245 | [
[
-0.02008056640625,
-0.04510498046875,
0.0108489990234375,
0.0214996337890625,
-0.005336761474609375,
0.0024433135986328125,
-0.0263824462890625,
-0.01824951171875,
0.0265960693359375,
0.03558349609375,
-0.050018310546875,
-0.06494140625,
-0.042572021484375,
... |
bongo2112/mixed-CLEAN-Video-Outputs_v3 | 2023-10-13T11:57:39.000Z | [
"region:us"
] | bongo2112 | null | null | 0 | 0 | 2023-10-13T11:51:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_LLMs__Stable-Vicuna-13B | 2023-10-13T11:52:07.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-13T11:51:58 | ---
pretty_name: Evaluation run of LLMs/Stable-Vicuna-13B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [LLMs/Stable-Vicuna-13B](https://huggingface.co/LLMs/Stable-Vicuna-13B) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_LLMs__Stable-Vicuna-13B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-13T11:51:54.162285](https://huggingface.co/datasets/open-llm-leaderboard/details_LLMs__Stable-Vicuna-13B/blob/main/results_2023-10-13T11-51-54.162285.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.012688758389261746,\n\
\ \"em_stderr\": 0.0011462418380586343,\n \"f1\": 0.06941170302013412,\n\
\ \"f1_stderr\": 0.0017195070383295536,\n \"acc\": 0.2849250197316496,\n\
\ \"acc_stderr\": 0.006957342547358349\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.012688758389261746,\n \"em_stderr\": 0.0011462418380586343,\n\
\ \"f1\": 0.06941170302013412,\n \"f1_stderr\": 0.0017195070383295536\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5698500394632992,\n\
\ \"acc_stderr\": 0.013914685094716698\n }\n}\n```"
repo_url: https://huggingface.co/LLMs/Stable-Vicuna-13B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_13T11_51_54.162285
path:
- '**/details_harness|drop|3_2023-10-13T11-51-54.162285.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-13T11-51-54.162285.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_13T11_51_54.162285
path:
- '**/details_harness|gsm8k|5_2023-10-13T11-51-54.162285.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-13T11-51-54.162285.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_13T11_51_54.162285
path:
- '**/details_harness|winogrande|5_2023-10-13T11-51-54.162285.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-13T11-51-54.162285.parquet'
- config_name: results
data_files:
- split: 2023_10_13T11_51_54.162285
path:
- results_2023-10-13T11-51-54.162285.parquet
- split: latest
path:
- results_2023-10-13T11-51-54.162285.parquet
---
# Dataset Card for Evaluation run of LLMs/Stable-Vicuna-13B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/LLMs/Stable-Vicuna-13B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [LLMs/Stable-Vicuna-13B](https://huggingface.co/LLMs/Stable-Vicuna-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_LLMs__Stable-Vicuna-13B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-13T11:51:54.162285](https://huggingface.co/datasets/open-llm-leaderboard/details_LLMs__Stable-Vicuna-13B/blob/main/results_2023-10-13T11-51-54.162285.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.012688758389261746,
"em_stderr": 0.0011462418380586343,
"f1": 0.06941170302013412,
"f1_stderr": 0.0017195070383295536,
"acc": 0.2849250197316496,
"acc_stderr": 0.006957342547358349
},
"harness|drop|3": {
"em": 0.012688758389261746,
"em_stderr": 0.0011462418380586343,
"f1": 0.06941170302013412,
"f1_stderr": 0.0017195070383295536
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5698500394632992,
"acc_stderr": 0.013914685094716698
}
}
```
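As a small sanity check, the aggregated "all" accuracy above appears to be the simple mean of the per-task accuracies. A minimal sketch of that computation follows; the dict literal is copied from the results payload above, and the variable names (`per_task`, `mean_acc`) are illustrative, not part of the dataset's API:

```python
# Per-task accuracy entries copied from the "Latest results" payload above.
per_task = {
    "harness|gsm8k|5": {"acc": 0.0, "acc_stderr": 0.0},
    "harness|winogrande|5": {"acc": 0.5698500394632992,
                             "acc_stderr": 0.013914685094716698},
}

# Collect the tasks that report an "acc" metric and average them.
accs = [metrics["acc"] for metrics in per_task.values() if "acc" in metrics]
mean_acc = sum(accs) / len(accs)

print(mean_acc)  # matches the "all" acc reported above: 0.2849250197316496
```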
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,108 | [
[
-0.0294342041015625,
-0.04962158203125,
0.019317626953125,
0.0221099853515625,
-0.01739501953125,
0.005786895751953125,
-0.0261077880859375,
-0.0142059326171875,
0.032806396484375,
0.03900146484375,
-0.05523681640625,
-0.07403564453125,
-0.048065185546875,
0... |
Weni/Zeroshot-multilanguages | 2023-10-13T12:11:18.000Z | [
"region:us"
] | Weni | null | null | 0 | 0 | 2023-10-13T12:11:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
pureeasecbdscam/PureEase-CBD-Gummies | 2023-10-13T13:00:12.000Z | [
"region:us"
] | pureeasecbdscam | null | null | 0 | 0 | 2023-10-13T12:59:47 | <h2 style="text-align: center;"><span style="font-size: large;"><a style="color: #0b5394;" href="https://sale365day.com/get-purewase-cbd-gunmmies">Click Here -- Official Website -- Order Now</a></span></h2>
<h2 style="text-align: center;"><span style="color: red; font-size: large;">⚠️Beware Of Fake Websites⚠️</span></h2>
<p><strong>✔For Order Official Website - <a href="https://sale365day.com/get-purewase-cbd-gunmmies">https://sale365day.com/get-purewase-cbd-gunmmies</a><br /><br />✔Product Name - PureEase CBD Gummies<br /><br />✔Side Effect - No Side Effects<br /><br />✔Availability - <a href="https://sale365day.com/get-purewase-cbd-gunmmies">Online</a><br /><br />✔ Rating -⭐⭐⭐⭐⭐</strong></p>
<p><a href="https://sale365day.com/get-purewase-cbd-gunmmies"><span style="font-size: large;"><strong>Hurry Up - Limited Time Offer - Buy Now</strong></span></a></p>
<p><a href="https://sale365day.com/get-purewase-cbd-gunmmies"><span style="font-size: large;"><strong>Hurry Up - Limited Time Offer - Buy Now</strong></span></a></p>
<p><a href="https://sale365day.com/get-purewase-cbd-gunmmies"><span style="font-size: large;"><strong>Hurry Up - Limited Time Offer - Buy Now</strong></span></a> </p>
<p>Recently, CBD products have dominated the market, especially in the health and Wellness sector. Currently, you can come across plenty of alternatives that incorporate cannabidiol in your routine, from tinctures to topicals. Still, one product that has gained a lot of prominence these days for its practicality and potency is <a href="https://pureease-cbd-gummies-official.jimdosite.com/"><strong>PureEase CBD Gummies</strong></a>. </p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-purewase-cbd-gunmmies"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWNHSsr8AjJNgTU_WBQavokk7TNJ79H_kBxvG2GOBY3UVgzhhJmyhhNygJGjFnz9JWT5ekUzh-D7HUlVStEklkhHWZJz5Zv7OHSyVQrBQ1b3yO8eBLanTV1a_J_VsWSMhQcBkSlez8CdZO1itAueO89gZ8qHGUdLc5Z2a-k23V7TztqFgi7QD6RsT_/w640-h472/oie_bk7AtKBhsJKo.jpg" alt="" width="640" height="472" border="0" data-original-height="471" data-original-width="640" /></a></div>
<p style="text-align: justify;">Thousands of clinical studies have proven that <a href="https://yourpillsboss.blogspot.com/2023/10/pureease-cbd-gummies-reviews-benefits.html">Pure Ease CBD Gummies</a> are a successful health breakthrough in the pharmaceutical industry. The wonders and benefits of <a href="https://pureease-cbd-gummies-2023.webflow.io/">Pure Ease CBD Gummies</a> have had a positive impact on the health of users. Not only a healthful product but also an alternative medicine to ensure good health and fitness.</p>
<p>One of the best parts about using <a href="https://devfolio.co/@pureeasecbdscam">PureEase CBD Gummies</a> is that they're pretty simple. They are portable and discreet. The best part is that you don't need any extra tools or advanced planning to consume these gummies. </p>
<p style="text-align: justify;"><a href="https://sale365day.com/get-purewase-cbd-gunmmies" target="_blank" rel="nofollow noopener"><strong><span style="font-size: large;">Click Here to Buy – “OFFICIAL WEBSITE”</span></strong></a></p>
<h2 style="text-align: justify;"><strong>How do Pure Ease CBD Gummies benefit our overall health? </strong></h2>
<p style="text-align: justify;"><a href="https://groups.google.com/g/pureease-cbd-gummies-offer/c/FT-H1pKhkwg"><strong>Pure Ease CBD Gummies</strong></a> is a scientific discovery that allows the body to heal and recover naturally through its medicinal properties. These candies help maintain good health and reduce symptoms of many debilitating diseases. CBD gummies are said to strengthen the immune system and provide the body with great potential to fight many diseases. This is a safe and effective remedy that has soothing and healing properties as well as being packed with vitamins and many nutrients.</p>
<p style="text-align: justify;">CBD gummies are easy-to-swallow candies that dissolve quickly in the blood to effectively address various health risks and problems. It protects the body against cell damage and prevents the onset of health problems. Nowadays, CBD gummies are widely known for improving the body's ability to combat poor health and a sedentary lifestyle. Therefore, it helps the patient's body recover effectively in a short time. Additionally, they provide great health benefits to users and are rewarded with regular snacking.</p>
<h2 style="text-align: justify;"><strong>Pure Ease CBD Gummies Source:</strong></h2>
<p style="text-align: justify;">It can be argued that <a href="https://groups.google.com/g/pureease-cbd-gummies-offer/c/6WVuMiDfolY">Pure Ease CBD Gummies</a> facilitate perfect health without side effects. It is a plant-based medicine extracted from cannabis and hemp. Marijuana and hemp are therapeutic plants that contain cannabidiol which keeps the body healthy and eliminates many health complications. In addition to cannabidiol, CBD gummies also contain organic ingredients such as coconut oil, clove oil, hemp extract, grape seed and various fruit extracts to create a delicious taste. CBD gummies are all plant-based and contain no harmful chemicals, toxins, or dangerous preservatives. Therefore, they do not have any unpleasant effects.</p>
<p style="text-align: justify;"><a href="https://sale365day.com/get-purewase-cbd-gunmmies" target="_blank" rel="nofollow noopener"><strong><span style="font-size: large;">Visit Here Know More: Click Here To Go to Official Website Now Pure Ease CBD Gummies</span></strong></a></p>
<h2 style="text-align: justify;"><strong>How does the CBD ingredient help?</strong></h2>
<p style="text-align: justify;">All the ingredients of the <a href="https://pureease-cbd-gummies-scam.company.site/">CBD gummies</a> are lab tested and work wonders to facilitate faster recovery and recovery. Each ingredient has specific benefits and improves our overall health in its own way.</p>
<p style="text-align: justify;"><strong>Hemp Extract:</strong></p>
<p style="text-align: justify;"> Hemp is used in many medicines and has a sedative effect. It helps control sugar levels, boosts immunity, enhances digestion and contributes to overall health.</p>
<p style="text-align: justify;"><strong>Coconut oil:</strong></p>
<p style="text-align: justify;">This oil is very beneficial and is known for its antibacterial and antibacterial properties. It can miraculously improve hair and skin health as well as boost immune function.</p>
<p style="text-align: justify;"><strong>Clove essential oil:</strong></p>
<p style="text-align: justify;"> It can soothe toothache because it has an analgesic effect and contains antioxidants.</p>
<p style="text-align: justify;"><strong>Grape seeds:</strong></p>
<p style="text-align: justify;"> Grape seeds are also known to be effective in curing many different diseases. It prevents radical damage and is effective in curing heart disease and diabetes. </p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-purewase-cbd-gunmmies"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_pjORjVLP_uxDnnuoZqBYxSpHOY47BbjIKYikRYTa_7fmP00wW2FXx3reFtD9V4kbfKwFhssMQre-UdnQ970YoiItgiUjp7hUaHXqFuIdEO5-ek-iDv-h50IV7pxc-D0ba3UPB9x-qELUpwUUNsNaPsJX-xHprtrZz0ZSr3WbpvRPYvL84-UMHu46/w640-h320/0%20xIlNaR_Bp6Aw_5nd.webp" alt="" width="640" height="320" border="0" data-original-height="319" data-original-width="640" /></a> </div>
<p style="text-align: justify;"><a href="https://sketchfab.com/3d-models/pureease-cbd-gummies-exposed-2023-work-or-not-3aa31e8f9fa64cfcbe74522dcd7e3914">Pure Ease CBD Gummies</a> come in a variety of flavours, colours, shapes and sizes. They include a variety of fruit flavours including orange, mango, grape, raspberry, watermelon, apple, lemon, and more.</p>
<p style="text-align: justify;">This supplement does not contain any gases, compounds, flavours, tints or allergens. It may not contain artificial materials, fillers or synthetic preservatives. Ingredients from the farm are selected, shipped to a lab for evaluation, and used by medical professionals to create these candies. Thanks to its pure ingredients and ecological formula, this supplement is suitable for long-term use. There may be no adverse health effects, including migraines, headaches, or stomach upset.</p>
<h2 style="text-align: justify;"><strong>The science behind Pure Ease CBD Gummies:</strong></h2>
<p style="text-align: justify;">First, <a href="https://groups.google.com/g/pureease-cbd-gummies-offer">Pure Ease CBD Gummies</a> improve the immune system, build immunity and allow you to easily solve and beat many health puzzles. Second, it communicates with the endocannabinoid system “ECS,” which is a cellular system that works to improve the body's ability to recover faster. Plus, it regulates everything from appetite, sleep, relaxation to psychological functioning.</p>
<p style="text-align: justify;">ECS also ensures longevity and maintains your overall health to prevent health problems. It actively interacts with every cell in the body and causes an active anti-inflammatory response. CBD gummies have been shown to improve ECS performance as they are great at supporting a fit body without affecting your fitness. This is how <strong>Pure Ease CBD Gummies</strong> work wonderfully for users.</p>
<p style="text-align: justify;"><span style="font-size: large;"><a href="https://sale365day.com/get-purewase-cbd-gunmmies" target="_blank" rel="nofollow noopener"><strong>MUST SEE: Click Here to Order Pure Ease CBD Gummies For The Best Price Available!</strong></a></span></p>
<h2 style="text-align: justify;"><strong>What benefits can we get from using Pure Ease CBD Gummies?</strong></h2>
<p style="text-align: justify;"><strong>Treatment of sleep disorders:</strong></p>
<p style="text-align: justify;">A daily dose of CBD gummies helps treat insomnia, sleep apnea, and related problems. They have the ability to make you sleep like a baby and also bring relaxation and calmness while you sleep. Fights pain and soreness</p>
<p style="text-align: justify;">Severe pain or any type of bodily discomfort can be easily cured by consuming CBD gummies on a daily basis. These gummies immediately provide much-needed pain relief and accelerate the healing process. It can easily treat shoulder pain, neck pain, back pain, numbness, leg fatigue, etc.</p>
<p style="text-align: justify;"><strong>Stay away from stress and depression:</strong></p>
<p style="text-align: justify;"><a href="https://medium.com/@pureeasecbd_78963/pureease-cbd-gummies-review-legit-critical-report-exposed-about-directions-and-labels-799264ca4b60">Pure Ease CBD Gummies</a> help positively impact your psychological health. It helps improve mental focus, clarity and eliminate brain fog. Additionally, it also helps control stress levels and alleviate symptoms of depression, agitation, and anxiety.</p>
<p style="text-align: justify;"><strong>Improve skin:</strong></p>
<p style="text-align: justify;">CBD gummies have anti-aging properties that help reduce the effects of premature aging and improve the appearance of the skin. It also helps cure many skin disorders including dermatitis and psoriasis. </p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-purewase-cbd-gunmmies"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh38NNAx0iT-YZpWLRVEQ5MFKUrRVqFgctKFAwVxNb6tBvPQuNTQ_uXH1QeLqzuALsEWeyJOgJBTDdlV4VeOuiBb3xloVVS3tmWYqJ6ZGcLNgwlglibqISCluTDr-UmdSghz6L27zhNq06nCDN-IuwSGCocbBfBt-Y59fzo8rCTKv54eLD0uQ6MMIyq/w640-h302/0%20duj6p_3Tcsmlp88w.webp" alt="" width="640" height="302" border="0" data-original-height="302" data-original-width="640" /></a> </div>
<h2 style="text-align: justify;"><strong>Why only Pure Ease CBD Gummies?</strong></h2>
<p style="text-align: justify;">Instead of any dietary supplement, choosing <a href="https://pureeasecbdgummiesscam.contently.com/"><strong>Pure Ease CBD Gummies</strong></a> is always the ideal choice to have a toned physique. They work tirelessly to promote healthier, happier well-being.</p>
<ul style="text-align: justify;">
<li>CBD gummies are not addictive.</li>
<li>They are easily digestible and safe when consumed regularly</li>
<li>They provide guaranteed results in a short time.</li>
<li><a href="https://lookerstudio.google.com/reporting/048c8b5e-6c9c-4c80-94d3-c637a0381e6f/page/16PfD">Pure Ease CBD Gummies</a> are effective in managing excellent health and preventing diseases.</li>
</ul>
<p style="text-align: justify;">They contain only natural and herbal elements.</p>
<ul style="text-align: justify;">
<li>CBD gummies are effective in treating illnesses and improving your overall fitness.</li>
<li>They are 100% safe and legal.</li>
</ul>
<h2 style="text-align: justify;"><strong>How to snack with Pure Ease CBD Gummies?</strong></h2>
<p style="text-align: justify;"><a href="https://colab.research.google.com/drive/1NZ0PHsKFI-CzFbtkcmmzmsIH2NOhnEd2">Pure Ease CBD Gummies</a> are fine to consume and should be used in consultation with a physician. You can chew and swallow CBD gummies in small pieces daily. Read dosage instructions before applying and seek medical approval.</p>
<p style="text-align: justify;">Do not consume too much because consuming too much can cause dizziness or constipation. Consumption produces a variety of outcomes. After a few days, some people may begin to feel better, especially with increased emotions, calmness, and deeper sleep. To achieve the most effective results, the gum should be continued for more than 180 days.</p>
<p style="text-align: justify;"><a href="https://sale365day.com/get-purewase-cbd-gunmmies" target="_blank" rel="nofollow noopener"><strong><span style="font-size: large;">MUST SEE: Click Here to Order Pure Ease CBD Gummies For The Best Price Available!</span></strong></a></p>
<h2 style="text-align: justify;"><strong>Who should take precautions while on Pure Ease CBD Gummies?</strong></h2>
<ul style="text-align: justify;">
<li>Pregnant and lactating mothers</li>
<li>Children under 18</li>
<li>Drug addicts</li>
<li>Patients benefit from other recovery options.</li>
</ul>
<h2 style="text-align: justify;"><strong>Are they 100% legit or a scam? </strong></h2>
<p style="text-align: justify;">There is no doubt that <a href="https://experiment.com/projects/vjgbqlgufiqlrzzlakbk/methods">Pure Ease CBD Gummies</a> are original, legal and scientifically reviewed. You should buy CBD products from an authorized website to avoid buying fake or fraudulent products. These delicious candies made from tropical organic ingredients are adorable and beautiful. Plus, they constantly communicate with your entire body. CBD, a substance rich in cannabinoids, is one of the ingredients of <a href="https://devfolio.co/project/new/pureease-cbd-gummies-8822">Pure Ease CBD Gummies</a>. Your body changes thanks to ECS. This implies that if you become anxious, your ECS may produce endocannabinoids that help you feel calmer. Also, this is important for stress. If you are in pain, it also provides endogenous cannabinoids</p>
<h2 style="text-align: justify;"><strong>Are there any side effects with Pure Ease CBD Gummies? </strong></h2>
<p style="text-align: justify;">While <a href="https://community.weddingwire.in/forum/pureease-cbd-gummies-2023-warning-will-it-work-for-you-shocking-customer-results--t179986">Pure Ease CBD Gummies</a> are generally considered trustworthy and acceptable, they do have some possible negative consequences, just like any other product. These may include fluctuations in weight, food intake, drowsiness, and chapped mouth. Before using CBD gummies, it is essential to discuss your medical history with your doctor as some medications and CBD can cause problems.</p>
<h2 style="text-align: justify;"><strong>How are these chewy treats made?</strong></h2>
<p style="text-align: justify;">Each gummy bear is produced in hygienic atmosphere. Many best practices are used to create products. The most renowned medical institutions and experts supervise the entire production process. Gummies are manufactured according to the strictest guidelines in the industry. They are strongly recommended by leading healthcare experts around the world.</p>
<h2 style="text-align: justify;"><strong>Product reviews:</strong></h2>
<p style="text-align: justify;"><a href="https://devfolio.co/project/new/pureease-cbd-gummies-8822">Pure Ease CBD Gummies</a> have attracted great attention in the US and many countries. These methods are very popular in many countries due to their promising and long-lasting results. This product has mixed reviews because CBD works according to your body's preferences; Individual results may vary.</p>
<h2 style="text-align: justify;"><strong>Where to shop?</strong></h2>
<p style="text-align: justify;">People can get their favourite <a href="https://pureease-update.clubeo.com/page/pureease-cbd-gummies-1-usa-premium-formula-dont-buy-until-you-read-this-critical-report.html"><strong>Pure Ease CBD Gummies</strong></a> delivered to their door by shopping online from a certified CBD supplier's website. They offer A1 products with great discounts, 100% money back guarantee, and try the products for 90 days without risking a penny. They also offer a clear return policy for those who don't feel satisfied with their purchase. Before purchasing, ensure product quality and talk to a healthcare professional. </p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-purewase-cbd-gunmmies"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjC53Vrmg8fBfTAMJJpMpz3GoTRuuWvaku3SR1qBgFQmwlZ4LOJofeT2eXDMH6shTSH24qsGUyeRReyVnNkWdXjn2R_yI04mIydAeDwaYbac1u2CYjrC2lA1H4akeT8Kadj84mcPIHLDq0e3hTW1_oVvvDfrfKce-3t_s3R7WRlYy8Vwnw7D1ZCyWbw/w640-h300/0%20bacicwEr6v9elXrt.jpg" alt="" width="640" height="300" border="0" data-original-height="299" data-original-width="640" /></a> </div>
<h2 style="text-align: justify;"><strong>Last words</strong></h2>
<p style="text-align: justify;">So, without any effort or risk, men and women can improve their health and deal with certain health problems with the effective treatment called <a href="https://pureease-update.clubeo.com/page/pureease-cbd-gummies-reviews-ingredients-and-side-effects-exposed-you-must-see-this.html">Pure Ease CBD Gummies</a>. It is a scientific method that has amazing effects in building a disease-free body and makes you completely healthy.</p>
<p style="text-align: justify;"><a href="https://groups.google.com/g/pureease-cbd-gummies-offer/c/FT-H1pKhkwg"><strong>Pure Ease CBD Gummies</strong></a> is a scientific discovery that allows the body to heal and recover naturally through its medicinal properties. These candies help maintain good health and reduce symptoms of many debilitating diseases. CBD gummies are said to strengthen the immune system and provide the body with great potential to fight many diseases. This is a safe and effective remedy that has soothing and healing properties as well as being packed with vitamins and many nutrients.</p>
<p><strong>Read More:</strong></p>
<p><a href="https://yourpillsboss.blogspot.com/2023/10/pureease-cbd-gummies-reviews-benefits.html">https://yourpillsboss.blogspot.com/2023/10/pureease-cbd-gummies-reviews-benefits.html</a><br /><a href="https://pureease-cbd-gummies-official.jimdosite.com/">https://pureease-cbd-gummies-official.jimdosite.com/</a><br /><a href="https://groups.google.com/g/pureease-cbd-gummies-offer">https://groups.google.com/g/pureease-cbd-gummies-offer</a><br /><a href="https://groups.google.com/g/pureease-cbd-gummies-offer/c/6WVuMiDfolY">https://groups.google.com/g/pureease-cbd-gummies-offer/c/6WVuMiDfolY</a><br /><a href="https://groups.google.com/g/pureease-cbd-gummies-offer/c/FT-H1pKhkwg">https://groups.google.com/g/pureease-cbd-gummies-offer/c/FT-H1pKhkwg</a><br /><a href="https://devfolio.co/@pureeasecbdscam">https://devfolio.co/@pureeasecbdscam</a><br /><a href="https://pureease-update.clubeo.com/page/pureease-cbd-gummies-1-usa-premium-formula-dont-buy-until-you-read-this-critical-report.html">https://pureease-update.clubeo.com/page/pureease-cbd-gummies-1-usa-premium-formula-dont-buy-until-you-read-this-critical-report.html</a><br /><a href="https://pureease-update.clubeo.com/page/pureease-cbd-gummies-reviews-ingredients-and-side-effects-exposed-you-must-see-this.html">https://pureease-update.clubeo.com/page/pureease-cbd-gummies-reviews-ingredients-and-side-effects-exposed-you-must-see-this.html</a><br /><a href="https://devfolio.co/project/new/pureease-cbd-gummies-8822">https://devfolio.co/project/new/pureease-cbd-gummies-8822</a><br /><a href="https://experiment.com/projects/vjgbqlgufiqlrzzlakbk/methods">https://experiment.com/projects/vjgbqlgufiqlrzzlakbk/methods</a><br /><a href="https://colab.research.google.com/drive/1NZ0PHsKFI-CzFbtkcmmzmsIH2NOhnEd2">https://colab.research.google.com/drive/1NZ0PHsKFI-CzFbtkcmmzmsIH2NOhnEd2</a><br /><a 
href="https://lookerstudio.google.com/reporting/048c8b5e-6c9c-4c80-94d3-c637a0381e6f/page/16PfD">https://lookerstudio.google.com/reporting/048c8b5e-6c9c-4c80-94d3-c637a0381e6f/page/16PfD</a><br /><a href="https://medium.com/@pureeasecbd_78963/pureease-cbd-gummies-review-legit-critical-report-exposed-about-directions-and-labels-799264ca4b60">https://medium.com/@pureeasecbd_78963/pureease-cbd-gummies-review-legit-critical-report-exposed-about-directions-and-labels-799264ca4b60</a><br /><a href="https://medium.com/@pureeasecbd_78963">https://medium.com/@pureeasecbd_78963</a><br /><a href="https://pureease-cbd-gummies-2023.webflow.io/">https://pureease-cbd-gummies-2023.webflow.io/</a><br /><a href="https://pureease-cbd-gummies-scam.company.site/">https://pureease-cbd-gummies-scam.company.site/</a><br /><a href="https://sketchfab.com/3d-models/pureease-cbd-gummies-exposed-2023-work-or-not-3aa31e8f9fa64cfcbe74522dcd7e3914">https://sketchfab.com/3d-models/pureease-cbd-gummies-exposed-2023-work-or-not-3aa31e8f9fa64cfcbe74522dcd7e3914</a><br /><a href="https://pdfhost.io/v/Rz~2znUpbB_PureEase_CBD_Gummies_New_Report_2023_Do_Not_Buy_Till_You_Read_My_Real_Experience">https://pdfhost.io/v/Rz~2znUpbB_PureEase_CBD_Gummies_New_Report_2023_Do_Not_Buy_Till_You_Read_My_Real_Experience</a><br /><a href="https://pureeasecbdgummiesscam.contently.com/">https://pureeasecbdgummiesscam.contently.com/</a><br /><a href="https://community.weddingwire.in/forum/pureease-cbd-gummies-2023-warning-will-it-work-for-you-shocking-customer-results--t179986">https://community.weddingwire.in/forum/pureease-cbd-gummies-2023-warning-will-it-work-for-you-shocking-customer-results--t179986</a><br /><a href="https://pureeasecbdgummiesusa.bandcamp.com/track/pureease-cbd-gummies-fda-exposed-2023-unexpected-details-revealed">https://pureeasecbdgummiesusa.bandcamp.com/track/pureease-cbd-gummies-fda-exposed-2023-unexpected-details-revealed</a><br /><a 
href="https://soundcloud.com/pureease-cbd-gummies-official/pureease-cbd-gummies-reviews-what-other-users-say-prostadine-customer-reports">https://soundcloud.com/pureease-cbd-gummies-official/pureease-cbd-gummies-reviews-what-other-users-say-prostadine-customer-reports</a><br /><a href="https://forum.molihua.org/d/191818-pureease-cbd-gummies-2023-warning-shocking-side-effects-or-fraud-risks">https://forum.molihua.org/d/191818-pureease-cbd-gummies-2023-warning-shocking-side-effects-or-fraud-risks</a><br /><a href="https://www.protocols.io/blind/3B346AFF69BD11EE81BC0A58A9FEAC02">https://www.protocols.io/blind/3B346AFF69BD11EE81BC0A58A9FEAC02</a><br /><a href="https://gamma.app/public/PureEase-CBD-Gummies-9kbgdjgqpw7vkxi">https://gamma.app/public/PureEase-CBD-Gummies-9kbgdjgqpw7vkxi</a><br /><a href="https://www.forexagone.com/forum/experiences-trading/pureease-cbd-gummies-formulated-with-100-pure-ingredients-that-reduce-stress-pain-anxiety-85681#183083">https://www.forexagone.com/forum/experiences-trading/pureease-cbd-gummies-formulated-with-100-pure-ingredients-that-reduce-stress-pain-anxiety-85681#183083</a><br /><a href="https://sketchfab.com/3d-models/pureease-cbd-gummies-reviews-hidden-facts-2023-c36b8f2154614795922ba8ca40b05efd">https://sketchfab.com/3d-models/pureease-cbd-gummies-reviews-hidden-facts-2023-c36b8f2154614795922ba8ca40b05efd</a><br /><a href="https://devfolio.co/@pureeasecbdu">https://devfolio.co/@pureeasecbdu</a><br /><a href="https://pureeasecbdreport.bandcamp.com/track/pureease-cbd-gummies-reviews-does-it-work-urgent-customer-update-2023">https://pureeasecbdreport.bandcamp.com/track/pureease-cbd-gummies-reviews-does-it-work-urgent-customer-update-2023</a><br /><a href="https://devfolio.co/projects/pureease-cbd-gummies-reviews-exposed-must-r-816c">https://devfolio.co/projects/pureease-cbd-gummies-reviews-exposed-must-r-816c</a></p> | 25,238 | [
[
-0.0271453857421875,
-0.0859375,
0.016876220703125,
0.00717926025390625,
-0.0231781005859375,
0.012420654296875,
-0.0154571533203125,
-0.0638427734375,
0.060302734375,
0.021942138671875,
-0.0256500244140625,
-0.0648193359375,
-0.043609619140625,
-0.007785797... |
erbacher/trivia_qa-halM | 2023-10-13T13:16:19.000Z | [
"region:us"
] | erbacher | null | null | 0 | 0 | 2023-10-13T13:16:05 | ---
dataset_info:
features:
- name: target
dtype: string
- name: query
dtype: string
- name: gold_generation
sequence: string
- name: text
dtype: string
- name: results
dtype: string
- name: em
dtype: float64
- name: hal_m
dtype: string
splits:
- name: train1
num_bytes: 36799502.40639716
num_examples: 39392
- name: train2
num_bytes: 36800436.59360284
num_examples: 39393
- name: dev
num_bytes: 8307250
num_examples: 8837
- name: test
num_bytes: 10650305
num_examples: 11313
download_size: 34799920
dataset_size: 92557494.0
---
# Dataset Card for "trivia_qa-halM"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 787 | [
[
-0.04364013671875,
-0.0274505615234375,
0.0238494873046875,
0.005321502685546875,
-0.01995849609375,
0.01837158203125,
0.020263671875,
-0.004680633544921875,
0.0576171875,
0.03570556640625,
-0.045867919921875,
-0.0718994140625,
-0.0286102294921875,
-0.008468... |
piazzola/addressWithContext | 2023-10-13T18:18:55.000Z | [
"language:en",
"license:cc-by-nc-2.0",
"region:us"
] | piazzola | null | null | 0 | 0 | 2023-10-13T14:14:36 | ---
language:
- en
license: cc-by-nc-2.0
---
This dataset contains pairs of addresses and sentences, where the sentence contains the address. For instance, `"4450 WEST 32ND STREET": "Lena walked up the path to the white colonial-style house with the blue shutters and addressed the letter to Mr. and Mrs. Morrison at 4450 West 32nd Street."` I prompted the quantized version of Llama-2 to generate the sentences. | 409 | [
[
0.00885772705078125,
-0.0538330078125,
0.0491943359375,
0.01462554931640625,
-0.017913818359375,
-0.0020694732666015625,
0.0206756591796875,
-0.0201873779296875,
0.02850341796875,
0.04852294921875,
-0.0574951171875,
-0.039337158203125,
-0.0438232421875,
0.00... |
ichiro0128/seisokukatwo | 2023-10-13T14:52:40.000Z | [
"region:us"
] | ichiro0128 | null | null | 0 | 0 | 2023-10-13T14:48:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
OdiaGenAI/roleplay_odia | 2023-10-16T13:19:44.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:or",
"code",
"art",
"finance",
"architecture",
"books",
"astronomy",
"acting",
"accounting",
"region:us"
] | OdiaGenAI | null | null | 0 | 0 | 2023-10-13T14:58:35 | ---
task_categories:
- question-answering
- conversational
language:
- or
tags:
- code
- art
- finance
- architecture
- books
- astronomy
- acting
- accounting
size_categories:
- 1K<n<10K
---
The following dataset was created using camel-ai by passing various combinations of user and assistant roles. The dataset was translated to Odia using the OdiaGenAI English=>Indic translation app. | 385 | [
[
-0.019805908203125,
-0.04144287109375,
-0.019256591796875,
0.046875,
-0.03240966796875,
-0.016357421875,
0.0134735107421875,
-0.04620361328125,
0.045654296875,
0.06658935546875,
-0.045318603515625,
-0.0133514404296875,
-0.0250701904296875,
0.036712646484375,... |
c123ian/Dublin_House_Prices_2010_2022 | 2023-10-13T15:11:58.000Z | [
"region:us"
] | c123ian | null | null | 0 | 0 | 2023-10-13T15:02:16 | This dataset pulled originally from https://www.propertypriceregister.ie/ , You can visit that website and specify all or one specific county. This version I pulled goes up to March 2022.

| 310 | [
[
-0.051025390625,
-0.0230712890625,
0.038726806640625,
0.0289764404296875,
-0.0225982666015625,
-0.04840087890625,
0.0102386474609375,
-0.02557373046875,
0.0230560302734375,
0.0740966796875,
-0.02532958984375,
-0.04254150390625,
-0.01983642578125,
-0.00285148... |
Imran1/icons | 2023-10-13T15:15:16.000Z | [
"region:us"
] | Imran1 | null | null | 0 | 0 | 2023-10-13T15:15:07 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': a-minus-test-symbol
'1': ab-testing
'2': acid-test
'3': advanced-training
'4': aids-test
'5': allergy-test
'6': animal-test
'7': animal-testing
'8': animal-training
'9': baby-train
'10': blood-count-test
'11': blood-test
'12': brain-training
'13': bullet-train
'14': cargo-train
'15': chemical-test-tube
'16': children-train
'17': circus-train-car
'18': color-blindness-test
'19': computer-test
'20': covid-test
'21': crash-test
'22': crash-testing-dummy-silhouette
'23': dev
'24': diabetes-test
'25': diesel-train
'26': dna-test
'27': dog-training
'28': dog-training-whistle
'29': driving-test
'30': drug-test
'31': dumbbell-training
'32': electric-train
'33': emissions-test
'34': employment-test
'35': evaluation
'36': experiment-test-tube
'37': eye-test
'38': failure-test
'39': fast-train
'40': filled-test-tube-with-a-drop
'41': final-test
'42': flight-training
'43': freight-train
'44': front-of-train
'45': front-train-on-tracks
'46': frontal-train
'47': frontal-train-and-rails
'48': genbeta-dev
'49': gmo-test
'50': hair-test
'51': hearing-test
'52': hemoglobin-test-meter
'53': high-speed-train
'54': hospital-test-tube
'55': image-split-testing
'56': inkblot-test
'57': ishihara-test
'58': medical-test
'59': medicine-liquid-in-a-test-tube-glass
'60': mini-train
'61': monitoring-test
'62': no-animal-testing
'63': no-test
'64': not-valid
'65': nutritional-test
'66': oil-train
'67': old-train
'68': online-driving-test
'69': online-test
'70': online-training
'71': optical-test
'72': ovulation-test
'73': papanicolau-test
'74': pass-test
'75': passenger-train
'76': pcr-test
'77': penetration-testing
'78': ph-test
'79': pregnancy-test
'80': pregnant-test
'81': print-test
'82': printing-test
'83': pulmonary-function-test
'84': quality-test
'85': rapid-test
'86': rorschach-test
'87': round-test-tube
'88': running-test
'89': science-experiment-hand-drawn-test-tubes-couple
'90': science-test-tube
'91': seo-training
'92': serology-test
'93': skin-prick-test
'94': skin-test
'95': speed-test
'96': stool-test
'97': stress-test
'98': test
'99': test-card
'100': test-cases
'101': test-exam
'102': test-flight
'103': test-pen
'104': test-quiz
'105': test-result-on-paper
'106': test-results
'107': test-tube
'108': test-tube-and-a-drop
'109': test-tube-and-drop
'110': test-tube-and-flask
'111': test-tube-brush
'112': test-tube-half-full
'113': test-tube-rack
'114': test-tube-with-cap
'115': test-tube-with-drop
'116': test-tube-with-liquid
'117': test-tube-with-liquid-outline
'118': test-tubes
'119': test-tubes-hand-drawn-science-tools
'120': test-tubes-hand-drawn-tools
'121': testing
'122': testing-glasses
'123': three-test-tube
'124': three-test-tubes
'125': toy-train
'126': train
'127': train-cargo
'128': train-engine
'129': train-front
'130': train-front-and-railroad
'131': train-front-view
'132': train-hand-drawn-outline
'133': train-icon
'134': train-in-a-tunnel
'135': train-locomotive-toy
'136': train-logo
'137': train-operator
'138': train-platform
'139': train-rails
'140': train-ride
'141': train-satation-location
'142': train-sign
'143': train-station
'144': train-station-location
'145': train-station-sign
'146': train-stop
'147': train-ticket
'148': train-times
'149': train-to-the-airport
'150': train-toy
'151': train-track
'152': train-tracks
'153': train-wagon
'154': training
'155': training-bag
'156': training-bottle
'157': training-course
'158': training-gear
'159': training-gloves
'160': training-mat
'161': training-pants
'162': training-phrase
'163': training-watch
'164': training-whistle
'165': turing-test
'166': turings-test
'167': two-test-tubes
'168': unit-testing
'169': urine-test
'170': user-evaluation
'171': valid
'172': valid-document
'173': validation
'174': velocity-test
'175': window-of-test-card
'176': x-ray-test
splits:
- name: train
num_bytes: 63080287.752
num_examples: 3976
download_size: 67589265
dataset_size: 63080287.752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "icons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 6,063 | [
[
-0.047027587890625,
-0.00833892822265625,
0.01117706298828125,
0.00945281982421875,
-0.0087432861328125,
0.00818634033203125,
0.02825927734375,
-0.0246429443359375,
0.06585693359375,
0.0299224853515625,
-0.058441162109375,
-0.0513916015625,
-0.04461669921875,
... |
newsmediabias/BIAS-CONLL | 2023-10-25T20:35:32.000Z | [
"region:us"
] | newsmediabias | null | null | 1 | 0 | 2023-10-13T15:35:27 |
# Hugging Face with Bias Data in CoNLL Format
## Introduction
This README provides guidance on how to use the Hugging Face platform with bias-tagged datasets in the CoNLL format.
Such datasets are essential for studying and mitigating bias in AI models.
This dataset is curated by **Shaina Raza**.
The methods and formatting discussed here are based on the seminal work "Nbias: A natural language processing framework for BIAS identification in text" by Raza et al. (2024) (see citation below).
## Prerequisites
- Install the Hugging Face `transformers` and `datasets` libraries:
```bash
pip install transformers datasets
```
## Data Format
Bias data in CoNLL format can be structured similarly to standard CoNLL, but with labels indicating bias instead of named entities:
```
The O
book O
written B-BIAS
by I-BIAS
egoist I-BIAS
women I-BIAS
is O
good O
. O
```
Here, the `B-` prefix marks the beginning of a biased term, `I-` marks tokens inside a biased term, and `O` stands for tokens outside any biased span.
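As an illustrative sketch (not part of this dataset's official tooling), the token/tag lines above can be grouped back into labeled spans with a few lines of plain Python:

```python
def parse_conll_spans(lines):
    """Collect (phrase, label) spans from CoNLL-style 'token TAG' lines."""
    spans, current_tokens, current_label = [], [], None
    for line in lines:
        line = line.strip()
        if not line:
            continue  # blank lines separate sentences
        token, tag = line.split()
        if tag.startswith("B-"):
            if current_tokens:  # close the previous span
                spans.append((" ".join(current_tokens), current_label))
            current_tokens, current_label = [token], tag[2:]
        elif tag.startswith("I-") and current_tokens:
            current_tokens.append(token)  # continue the open span
        else:  # "O" tag (or a stray I- with no open span)
            if current_tokens:
                spans.append((" ".join(current_tokens), current_label))
            current_tokens, current_label = [], None
    if current_tokens:  # flush a span that runs to the end
        spans.append((" ".join(current_tokens), current_label))
    return spans

example = """The O
book O
written B-BIAS
by I-BIAS
egoist I-BIAS
women I-BIAS
is O
good O
. O""".splitlines()

print(parse_conll_spans(example))  # [('written by egoist women', 'BIAS')]
```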
## Steps to Use with Hugging Face
1. **Loading Bias-tagged CoNLL Data with Hugging Face**
- If your bias-tagged dataset in CoNLL format is publicly available on the Hugging Face `datasets` hub, use:
```python
from datasets import load_dataset
dataset = load_dataset("newsmediabias/BIAS-CONLL")
```
- For custom datasets, ensure they are formatted correctly and use a local path to load them.
If the dataset is gated/private, make sure you have run `huggingface-cli login`.
2. **Preprocessing the Data**
- Tokenization:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("YOUR_PREFERRED_MODEL_CHECKPOINT")
tokenized_input = tokenizer(dataset['train']['tokens'])
```
3. **Training a Model on Bias-tagged CoNLL Data**
- Depending on your task, you may fine-tune a model on the bias data using Hugging Face's `Trainer` class or native PyTorch/TensorFlow code.
4. **Evaluation**
- After training, evaluate the model's ability to recognize and possibly mitigate bias.
- This might involve measuring the model's precision, recall, and F1 score on recognizing bias in text.
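As a minimal sketch of entity-level scoring (the gold and predicted spans below are made-up toy data, not results from this dataset), precision, recall, and F1 can be computed over sets of spans:

```python
def span_prf(gold, pred):
    """Entity-level precision/recall/F1 over sets of (start, end, label) spans."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)  # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy spans as (token_start, token_end, label) -- illustrative only.
gold_spans = [(2, 6, "BIAS"), (9, 11, "BIAS")]
pred_spans = [(2, 6, "BIAS"), (12, 14, "BIAS")]

p, r, f = span_prf(gold_spans, pred_spans)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

In practice a library such as seqeval is commonly used for this kind of BIO-tag evaluation; the snippet above just shows the underlying arithmetic.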
5. **Deployment**
- Once satisfied with the model's performance, deploy it for real-world applications, always being mindful of its limitations and potential implications.
Please cite us if you use it.
**Reference to cite us**
```
@article{raza2024nbias,
title={Nbias: A natural language processing framework for BIAS identification in text},
author={Raza, Shaina and Garg, Muskan and Reji, Deepak John and Bashir, Syed Raza and Ding, Chen},
journal={Expert Systems with Applications},
volume={237},
pages={121542},
year={2024},
publisher={Elsevier}
}
``` | 2,799 | [
[
-0.054473876953125,
-0.05853271484375,
0.0034770965576171875,
0.031585693359375,
-0.01023101806640625,
-0.01543426513671875,
-0.01198577880859375,
-0.0421142578125,
0.035980224609375,
0.0305023193359375,
-0.07000732421875,
-0.034515380859375,
-0.0631103515625,
... |
JianhaoDYDY/Real-Fake | 2023-10-30T14:24:29.000Z | [
"task_categories:image-classification",
"language:en",
"license:mit",
"region:us"
] | JianhaoDYDY | null | null | 0 | 0 | 2023-10-13T15:42:28 | ---
license: mit
task_categories:
- image-classification
language:
- en
---
## Usage
1. Download from Huggingface
2. Run `combine.sh` to combine the pieces into a single dataset
The dataset is stored in the same format as ImageNet-1K. | 236 | [
[
-0.053863525390625,
-0.031890869140625,
-0.0220489501953125,
0.034912109375,
-0.04736328125,
-0.030975341796875,
0.004070281982421875,
-0.01885986328125,
0.07806396484375,
0.08233642578125,
-0.058868408203125,
-0.031890869140625,
-0.0305938720703125,
0.00278... |
gufi009/test | 2023-10-13T16:43:23.000Z | [
"region:us"
] | gufi009 | null | null | 0 | 0 | 2023-10-13T15:54:06 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01494598388671875,
0.057159423828125,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005077362060546875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.01494598388671875,
-0.06036376953125,
0.03... |
XienLynn/Transformers | 2023-10-13T16:10:39.000Z | [
"region:us"
] | XienLynn | null | null | 0 | 0 | 2023-10-13T16:10:39 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01494598388671875,
0.057159423828125,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005077362060546875,
0.051361083984375,
0.0170135498046875,
-0.05206298828125,
-0.01494598388671875,
-0.06036376953125,
0.03... |
sordonia/facts-text-davinci-003_clen128_maxD-1_maxC25 | 2023-10-13T18:09:46.000Z | [
"region:us"
] | sordonia | null | null | 0 | 0 | 2023-10-13T16:12:39 | ## model_name: text-davinci-003
## max_contexts_per_subject: 25
## max_documents_per_subject: -1
## max_context_length: 128
| 124 | [
[
-0.0345458984375,
-0.0379638671875,
0.055206298828125,
0.0357666015625,
-0.0435791015625,
-0.032928466796875,
0.0168304443359375,
0.01448822021484375,
-0.009918212890625,
0.036224365234375,
-0.058502197265625,
-0.036376953125,
-0.06781005859375,
0.0062904357... |
TrainingDataPro/computed-tomography-ct-of-the-brain | 2023-10-13T16:31:55.000Z | [
"task_categories:image-to-image",
"task_categories:image-segmentation",
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"biology",
"code",
"medical",
"region:us"
] | TrainingDataPro | null | null | 1 | 0 | 2023-10-13T16:29:23 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-to-image
- image-segmentation
- image-classification
language:
- en
tags:
- biology
- code
- medical
---
# Computed Tomography (CT) of the Brain
The dataset consists of CT brain scans with **cancer, tumor, and aneurysm**. Each scan represents a detailed image of a patient's brain taken using **CT (Computed Tomography)**. The data are presented in 2 different formats: **.jpg and .dcm**.
The dataset of CT brain scans is valuable for research in **neurology, radiology, and oncology**. It allows the development and evaluation of computer-based algorithms, machine learning models, and deep learning techniques for **automated detection, diagnosis, and classification** of these conditions.

### Types of brain diseases in the dataset:
- **cancer**
- **tumor**
- **aneurysm**
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=ct-of-the-brain) to discuss your requirements, learn about the price and buy the dataset.
# Content
### The folder "files" includes 3 folders:
- corresponding to the name of the brain disease and including CT scans of people with this disease (**cancer, tumor or aneurysm**)
- including brain scans in 2 different formats: **.jpg and .dcm**.
### File with the extension .csv includes the following information for each media file:
- **dcm**: link to access the .dcm file,
- **jpg**: link to access the .jpg file,
- **type**: name of the brain disease on the CT scan
# Medical data might be collected in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=ct-of-the-brain)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 2,259 | [
[
-0.0143280029296875,
-0.055267333984375,
0.0443115234375,
0.003173828125,
-0.035308837890625,
-0.00214385986328125,
-0.007568359375,
-0.0241241455078125,
0.030731201171875,
0.057373046875,
-0.027587890625,
-0.06744384765625,
-0.06439208984375,
-0.00758361816... |
Globaly/familias_cleaned | 2023-10-13T16:40:30.000Z | [
"region:us"
] | Globaly | null | null | 0 | 0 | 2023-10-13T16:39:51 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dupa888/dataset-slang | 2023-10-13T16:40:17.000Z | [
"region:us"
] | dupa888 | null | null | 0 | 0 | 2023-10-13T16:40:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Globaly/classes_cleaned | 2023-10-13T16:41:02.000Z | [
"region:us"
] | Globaly | null | null | 0 | 0 | 2023-10-13T16:40:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Globaly/bricks_cleaned | 2023-10-13T16:41:33.000Z | [
"region:us"
] | Globaly | null | null | 0 | 0 | 2023-10-13T16:41:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
medieval-data/mgh-critical-edition-layout | 2023-10-13T16:53:55.000Z | [
"license:cc-by-nc-4.0",
"doi:10.57967/hf/1210",
"region:us"
] | medieval-data | null | null | 0 | 0 | 2023-10-13T16:48:13 | ---
license: cc-by-nc-4.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image_id
dtype: string
- name: image
dtype: image
- name: width
dtype: int64
- name: height
dtype: int64
- name: objects
struct:
- name: bbox
sequence:
sequence: float64
- name: category
sequence: int64
- name: id
sequence: 'null'
splits:
- name: train
num_bytes: 19639133.0
num_examples: 79
- name: val
num_bytes: 4967295.0
num_examples: 21
download_size: 24112875
dataset_size: 24606428.0
---
---
license: cc-by-nc-4.0
task_categories:
- object-detection
language:
- la
tags:
- object detection
- critical edition
- yolo
size_categories:
- n<1K
---
# MGH Layout Detection Dataset
## Dataset Description
### General Description
This dataset consists of scans from the MGH critical edition of Alcuin's letters, which were first edited by Ernestus Duemmler in 1895. The digital scans were sourced from the DMGH's repository, which can be accessed [here](https://www.dmgh.de/mgh_epp_4). The scans were annotated using CVAT, marking out two classes: the title of a letter and the body of the letter.
### Why was this dataset created?
The primary motivation behind the creation of this dataset was to enhance the downstream task of OCR. OCR often returns errors due to interference from marginalia and footnotes present in the scanned pages. By having accurate annotations for the title and body of the letters, users can efficiently isolate the main content of the letters and possibly achieve better OCR results.
Future plans for this dataset include expanding the annotations to encompass footnotes and marginalia, thus further refining the demarcation between the main content and supplementary notes.
### Classes
Currently, the dataset has two annotated classes:
- Title of the letter
- Body of the letter
Planned future additions include:
- Footnotes
- Marginalia
## Sample Annotation

## Biographical Information
### About Alcuin
Alcuin of York (c. 735 – 804 AD) was an English scholar, clergyman, poet, and teacher. He was born in York and became a leading figure in the so-called "Carolingian renaissance." Alcuin made significant contributions to the educational and religious reforms initiated by Charlemagne, emphasizing the importance of classical studies.
### About Alcuin's Letters
Alcuin's letters provide a crucial insight into the Carolingian world, highlighting the intellectual and religious discourse of the time. They serve as invaluable resources for understanding the interactions between some of the important figures of Charlemagne's court, the challenges they faced, and the solutions they proposed. The letters also offer a window into Alcuin's own thoughts, his relationships with peers and, most importantly, his students, and his role as an advisor to Charlemagne.
## Dataset and Annotation Details
### Annotation Process
The scans of Alcuin's letters were annotated manually using the CVAT tool. The primary focus was to delineate the titles and bodies of the letters. This clear demarcation aids in improving the precision of OCR tools by allowing them to target specific regions in the scanned pages.
### Dataset Limitations
As the dataset currently focuses only on titles and bodies of the letters, it may not fully address the challenges posed by marginalia and footnotes in OCR tasks. However, the planned expansion to include these classes will provide a more comprehensive solution.
### Usage
Given the non-commercial restriction associated with the source scans, users of this dataset should be mindful of the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/) license under which it is distributed.
## Additional Information
For more details on the dataset and to access the digital scans, visit the DMGH repository link provided above. | 4,113 | [
[
-0.045196533203125,
-0.0263519287109375,
0.0391845703125,
-0.038848876953125,
-0.013946533203125,
-0.0006923675537109375,
0.006061553955078125,
-0.036102294921875,
0.011993408203125,
0.052337646484375,
-0.0272979736328125,
-0.060394287109375,
-0.0303497314453125... |
dataunitylab/json-schema-store | 2023-10-20T17:16:58.000Z | [
"size_categories:n<1K",
"language:en",
"json",
"region:us"
] | dataunitylab | null | null | 0 | 0 | 2023-10-13T16:51:04 | ---
language:
- en
tags:
- json
pretty_name: JSON Schema Store
size_categories:
- n<1K
---
This contains a set of schemas obtained via the [JSON Schema Store catalog](https://github.com/SchemaStore/schemastore/blob/master/src/api/json/catalog.json). | 249 | [
[
-0.0129852294921875,
-0.01763916015625,
0.01546478271484375,
0.00975799560546875,
0.0119476318359375,
0.047637939453125,
0.00337982177734375,
-0.0008044242858886719,
0.0391845703125,
0.07940673828125,
-0.065673828125,
-0.05816650390625,
0.0014095306396484375,
... |
ai-maker-space/medical_nonmedical | 2023-10-13T19:23:55.000Z | [
"region:us"
] | ai-maker-space | null | null | 0 | 0 | 2023-10-13T19:12:17 | ---
dataset_info:
features:
- name: is_medical
dtype: int64
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 25910847
num_examples: 14202
download_size: 0
dataset_size: 25910847
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "medical_nonmedical"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 526 | [
[
-0.018310546875,
-0.016876220703125,
0.0232391357421875,
0.0083770751953125,
-0.01508331298828125,
0.0005011558532714844,
0.024505615234375,
-0.018096923828125,
0.075439453125,
0.03363037109375,
-0.059539794921875,
-0.061187744140625,
-0.0570068359375,
-0.00... |
hrangel/MexLotMin | 2023-10-13T19:19:07.000Z | [
"region:us"
] | hrangel | null | null | 0 | 0 | 2023-10-13T19:19:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 415097.0
num_examples: 10
download_size: 337823
dataset_size: 415097.0
---
# Dataset Card for "MexLotMin"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 469 | [
[
-0.041046142578125,
-0.017791748046875,
0.01395416259765625,
0.01332855224609375,
-0.0210418701171875,
0.0120086669921875,
0.0200347900390625,
-0.00394439697265625,
0.07257080078125,
0.048858642578125,
-0.058441162109375,
-0.054229736328125,
-0.039520263671875,
... |
cuijian0819/jb | 2023-10-14T03:17:53.000Z | [
"region:us"
] | cuijian0819 | null | null | 0 | 0 | 2023-10-13T19:25:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
haseong8012/child-10k_sr-48k | 2023-10-13T21:29:14.000Z | [
"region:us"
] | haseong8012 | null | null | 0 | 0 | 2023-10-13T20:20:45 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
- name: audio
sequence: float32
splits:
- name: train
num_bytes: 6230798096
num_examples: 10000
download_size: 1789308102
dataset_size: 6230798096
---
# Dataset Card for "korean-child-command-voice_train-0-10000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 516 | [
[
-0.033203125,
0.0028591156005859375,
-0.00029659271240234375,
0.033843994140625,
-0.01096343994140625,
0.006053924560546875,
0.0105133056640625,
0.009796142578125,
0.04248046875,
0.037353515625,
-0.08502197265625,
-0.044769287109375,
-0.041168212890625,
-0.0... |
johananoa/dog_breed_images | 2023-10-13T20:47:00.000Z | [
"region:us"
] | johananoa | null | null | 0 | 0 | 2023-10-13T20:47:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-eval-squad-plain_text-052f8a-94995146247 | 2023-10-13T20:48:36.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-13T20:48:32 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Globaly/bricks | 2023-10-13T22:43:52.000Z | [
"region:us"
] | Globaly | null | null | 0 | 0 | 2023-10-13T21:28:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-eval-acronym_identification-default-d87697-95015146250 | 2023-10-13T23:39:17.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-13T23:35:51 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- acronym_identification
eval_info:
task: entity_extraction
model: lewtun/autotrain-acronym-identification-7324788
metrics: ['code_eval', 'lvwerra/ai4code']
dataset_name: acronym_identification
dataset_config: default
dataset_split: train
col_mapping:
tokens: tokens
tags: labels
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: lewtun/autotrain-acronym-identification-7324788
* Dataset: acronym_identification
* Config: default
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ebinum](https://huggingface.co/ebinum) for evaluating this model. | 936 | [
[
-0.029937744140625,
-0.0216827392578125,
0.00865936279296875,
0.0089569091796875,
-0.01432037353515625,
0.0054168701171875,
0.01427459716796875,
-0.039154052734375,
0.021820068359375,
0.01029205322265625,
-0.06414794921875,
-0.016326904296875,
-0.05621337890625,... |
varun4/AdventureTimeCaptions | 2023-10-14T21:14:58.000Z | [
"region:us"
] | varun4 | null | null | 0 | 0 | 2023-10-14T00:55:44 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 62319.0
num_examples: 3
download_size: 58529
dataset_size: 62319.0
---
# Dataset Card for "AdventureTimeCaptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 477 | [
[
-0.037750244140625,
-0.002002716064453125,
0.006610870361328125,
0.03118896484375,
-0.0099029541015625,
0.02880859375,
0.03192138671875,
-0.0288238525390625,
0.07806396484375,
0.03424072265625,
-0.0811767578125,
-0.046478271484375,
-0.0296783447265625,
-0.02... |
ItzYuuRz/TRS | 2023-10-14T01:16:45.000Z | [
"region:us"
] | ItzYuuRz | null | null | 0 | 0 | 2023-10-14T01:16:45 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-eval-blog_authorship_corpus-blog_authorship_corpus-6e7ba8-95011146251 | 2023-10-14T01:59:20.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-14T01:59:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-eval-blog_authorship_corpus-blog_authorship_corpus-6e7ba8-95011146252 | 2023-10-14T01:59:24.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-14T01:59:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-eval-blog_authorship_corpus-blog_authorship_corpus-6e7ba8-95011146253 | 2023-10-14T01:59:29.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-14T01:59:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ashley-ng/lovelive-train-dataset | 2023-10-14T04:48:59.000Z | [
"region:us"
] | ashley-ng | null | null | 0 | 0 | 2023-10-14T02:44:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tqhuyen/DUT_Info | 2023-10-14T03:12:57.000Z | [
"region:us"
] | tqhuyen | null | null | 0 | 0 | 2023-10-14T03:12:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Zhijiao/FDDM_Lyric | 2023-10-14T03:53:47.000Z | [
"region:us"
] | Zhijiao | null | null | 0 | 0 | 2023-10-14T03:52:15 | ---
license: apache-2.0
---
FDDM Lyric by Zhijiao for OSS LZU
| 65 | [
[
-0.0271453857421875,
-0.02020263671875,
0.0219268798828125,
0.0689697265625,
-0.061004638671875,
-0.0362548828125,
0.0002455711364746094,
-0.040191650390625,
0.0026264190673828125,
0.08221435546875,
-0.072509765625,
-0.0221405029296875,
-0.041351318359375,
0... |
tipani/Shanghai-License-Plate-Auction | 2023-10-14T04:43:48.000Z | [
"task_categories:tabular-regression",
"task_categories:time-series-forecasting",
"language:en",
"license:mit",
"License Plate",
"Auction",
"Timeline",
"region:us"
] | tipani | null | null | 0 | 0 | 2023-10-14T04:31:22 | ---
language:
- "en"
pretty_name: "Shanghai License Plate Auction 2014-2021"
tags:
- "License Plate"
- "Auction"
- "Timeline"
license: "mit"
task_categories:
- "tabular-regression"
- "time-series-forecasting"
---
# Introduction
Second-by-second price updates from the last 60 seconds of the monthly license plate auction in Shanghai from 2014 to 2020, plus a few months of 2021. The per-second data is given as a differential relative to the start price. I managed to correctly predict and win a license plate in all three years I worked on the project during 2018-2020. But it's not easy, as many other factors affect success on top of prediction accuracy.
# Read More
To learn the details about the auction process and why it is so darn hard, please read my [article series](https://www.linkedin.com/pulse/part-1-applied-ml-timeline-prediction-shanghai-license-tianyi-pan) on LinkedIn. | 907 | [
[
-0.024322509765625,
-0.01837158203125,
0.038360595703125,
0.0369873046875,
-0.02978515625,
-0.0186004638671875,
0.02496337890625,
-0.060272216796875,
0.000030517578125,
0.00814056396484375,
-0.032745361328125,
-0.00820159912109375,
-0.0116729736328125,
-0.03... |
autoevaluate/autoeval-eval-acronym_identification-default-14dffe-95035146264 | 2023-10-14T04:50:00.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-14T04:49:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
yusuf802/image-new-data | 2023-10-14T05:15:14.000Z | [
"region:us"
] | yusuf802 | null | null | 0 | 0 | 2023-10-14T05:15:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Melsc/p | 2023-10-14T07:53:59.000Z | [
"region:us"
] | Melsc | null | null | 0 | 0 | 2023-10-14T07:53:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sagorhishab/demo_data | 2023-10-14T08:06:10.000Z | [
"task_categories:text-generation",
"language:bn",
"license:mit",
"region:us"
] | sagorhishab | null | null | 0 | 0 | 2023-10-14T08:02:21 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
license: mit
task_categories:
- text-generation
language:
- bn
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,627 | [
[
-0.04034423828125,
-0.0419921875,
0.009765625,
0.0178070068359375,
-0.0300445556640625,
-0.00893402099609375,
-0.0026874542236328125,
-0.048431396484375,
0.043212890625,
0.059478759765625,
-0.05938720703125,
-0.069580078125,
-0.042205810546875,
0.00993347167... |
Greenvs/frzzz-test | 2023-10-14T08:53:58.000Z | [
"region:us"
] | Greenvs | null | null | 0 | 0 | 2023-10-14T08:50:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
EdisonBlack/wangzhe | 2023-10-14T09:21:15.000Z | [
"region:us"
] | EdisonBlack | null | null | 0 | 0 | 2023-10-14T09:20:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
meandyou200175/vitdata | 2023-10-25T03:06:08.000Z | [
"region:us"
] | meandyou200175 | null | null | 0 | 0 | 2023-10-14T09:20:19 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
manerushikesh/review | 2023-10-14T10:06:57.000Z | [
"region:us"
] | manerushikesh | null | null | 0 | 0 | 2023-10-14T10:06:57 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
manelreghima/challenge_db | 2023-10-14T12:44:48.000Z | [
"region:us"
] | manelreghima | null | null | 0 | 0 | 2023-10-14T12:17:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
JJYANG/PIRATE | 2023-10-14T12:49:05.000Z | [
"region:us"
] | JJYANG | null | null | 0 | 0 | 2023-10-14T12:49:05 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
DialogueCharacter/chinese_general_instruction_with_reward_score_judged_by_13B_baichuan2 | 2023-10-14T13:29:42.000Z | [
"region:us"
] | DialogueCharacter | null | null | 0 | 0 | 2023-10-14T13:21:11 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: reward_score
dtype: float64
splits:
- name: train
num_bytes: 1555344961
num_examples: 1122934
download_size: 944071681
dataset_size: 1555344961
---
# Dataset Card for "chinese_general_instruction_with_reward_score_judged_by_13B_baichuan2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 500 | [
[
-0.006099700927734375,
-0.0213165283203125,
0.0001342296600341797,
0.049468994140625,
-0.01537322998046875,
-0.022613525390625,
0.0018129348754882812,
0.002155303955078125,
0.034454345703125,
0.01519775390625,
-0.050262451171875,
-0.060943603515625,
-0.044433593... |
DialogueCharacter/chinese_dialogue_instruction_with_reward_score_judged_by_13B_baichuan2 | 2023-10-14T13:28:59.000Z | [
"region:us"
] | DialogueCharacter | null | null | 0 | 0 | 2023-10-14T13:28:54 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: reward_score
dtype: float64
splits:
- name: train
num_bytes: 144603592
num_examples: 110670
download_size: 83071987
dataset_size: 144603592
---
# Dataset Card for "chinese_dialogue_instruction_with_reward_score_judged_by_13B_baichuan2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 497 | [
[
-0.00994110107421875,
-0.024566650390625,
0.0023193359375,
0.05078125,
-0.0117950439453125,
-0.01434326171875,
-0.0019044876098632812,
0.005886077880859375,
0.0285186767578125,
0.0210723876953125,
-0.06121826171875,
-0.053131103515625,
-0.045196533203125,
-0... |
pythainlp/thai_usembassy | 2023-10-20T14:34:38.000Z | [
"task_categories:text-generation",
"task_categories:translation",
"language:th",
"language:en",
"license:cc0-1.0",
"region:us"
] | pythainlp | null | null | 0 | 0 | 2023-10-14T14:14:38 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: url
dtype: string
- name: th
dtype: string
- name: en
dtype: string
- name: title_en
dtype: string
- name: title_th
dtype: string
splits:
- name: train
num_bytes: 5060813
num_examples: 615
download_size: 2048306
dataset_size: 5060813
license: cc0-1.0
task_categories:
- text-generation
- translation
language:
- th
- en
---
# Dataset Card for "thai_usembassy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
This dataset collects all Thai & English news from the [U.S. Embassy Bangkok](https://th.usembassy.gov/news-events/). | 776 | [
[
-0.0219879150390625,
-0.017974853515625,
0.017974853515625,
0.019500732421875,
-0.0496826171875,
-0.0022029876708984375,
0.0027561187744140625,
-0.014862060546875,
0.08001708984375,
0.058624267578125,
-0.041015625,
-0.058929443359375,
-0.036956787109375,
-0.... |
SAint7579/WMH_dataset | 2023-10-30T20:32:42.000Z | [
"region:us"
] | SAint7579 | null | null | 0 | 0 | 2023-10-14T14:44:38 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 36164162.0
num_examples: 430
download_size: 31785512
dataset_size: 36164162.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "WMH_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 478 | [
[
-0.0469970703125,
-0.017669677734375,
0.016265869140625,
0.0024700164794921875,
-0.01727294921875,
-0.002834320068359375,
0.023651123046875,
-0.01496124267578125,
0.0662841796875,
0.0318603515625,
-0.0662841796875,
-0.055084228515625,
-0.0452880859375,
-0.02... |
Starkate/squad | 2023-10-14T14:55:26.000Z | [
"region:us"
] | Starkate | null | null | 0 | 0 | 2023-10-14T14:55:26 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.028839111328125,
-0.0350341796875,
0.04656982421875,
0.052490234375,
0.00504302978515625,
0.0513916015625,
0.016998291015625,
-0.0521240234375,
-0.0149993896484375,
-0.06036376953125,
0.03790283... |
Falah/artwork_prompts | 2023-10-14T14:56:11.000Z | [
"region:us"
] | Falah | null | null | 0 | 0 | 2023-10-14T14:56:10 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 5594305
num_examples: 10000
download_size: 639738
dataset_size: 5594305
---
# Dataset Card for "artwork_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 360 | [
[
-0.04541015625,
-0.0204620361328125,
0.024749755859375,
0.0234222412109375,
-0.019378662109375,
-0.00751495361328125,
0.0167236328125,
-0.011199951171875,
0.059661865234375,
0.037628173828125,
-0.07867431640625,
-0.053924560546875,
-0.033447265625,
-0.003366... |
aisyahhrazak/crawl-ikram | 2023-10-22T01:19:43.000Z | [
"region:us"
] | aisyahhrazak | null | null | 0 | 0 | 2023-10-14T15:08:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
autoevaluate/autoeval-eval-acronym_identification-default-5c9c36-95114146279 | 2023-10-14T15:31:37.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-14T15:31:33 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
MartinKu/test_privacy | 2023-10-14T16:37:31.000Z | [
"region:us"
] | MartinKu | null | null | 0 | 0 | 2023-10-14T15:59:28 | ---
dataset_info:
features:
- name: NAME
dtype: float64
- name: CATEGORY
dtype: float64
- name: ADDRESS
dtype: float64
- name: AGE
dtype: float64
- name: CREDIT_DEBIT_CVV
dtype: float64
- name: CREDIT_DEBIT_EXPIRY
dtype: float64
- name: CREDIT_DEBIT_NUMBER
dtype: float64
- name: DRIVER_ID
dtype: float64
- name: PHONE
dtype: float64
- name: PASSWORD
dtype: float64
- name: BANK_ACCOUNT_NUMBER
dtype: float64
- name: PASSPORT_NUMBER
dtype: float64
- name: SSN
dtype: float64
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 3175
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test_privacy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 914 | [
[
-0.04010009765625,
-0.02569580078125,
0.00928497314453125,
0.0133819580078125,
-0.0026569366455078125,
-0.002025604248046875,
0.0157012939453125,
-0.01025390625,
0.03594970703125,
0.0285491943359375,
-0.05194091796875,
-0.062042236328125,
-0.0299530029296875,
... |
Tenan/pedeefs | 2023-10-14T16:00:01.000Z | [
"region:us"
] | Tenan | null | null | 0 | 0 | 2023-10-14T16:00:01 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
1899Deposit38-ECV/Leobatista | 2023-10-14T16:05:38.000Z | [
"region:us"
] | 1899Deposit38-ECV | null | null | 0 | 0 | 2023-10-14T16:05:21 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_golaxy__gogpt-560m | 2023-10-14T16:13:40.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-14T16:13:32 | ---
pretty_name: Evaluation run of golaxy/gogpt-560m
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [golaxy/gogpt-560m](https://huggingface.co/golaxy/gogpt-560m) on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_golaxy__gogpt-560m\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-14T16:13:28.692590](https://huggingface.co/datasets/open-llm-leaderboard/details_golaxy__gogpt-560m/blob/main/results_2023-10-14T16-13-28.692590.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0382760067114094,\n\
\ \"em_stderr\": 0.001964844510611307,\n \"f1\": 0.06699035234899327,\n\
\ \"f1_stderr\": 0.0021908023180713283,\n \"acc\": 0.2537490134175217,\n\
\ \"acc_stderr\": 0.00702545276061429\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0382760067114094,\n \"em_stderr\": 0.001964844510611307,\n\
\ \"f1\": 0.06699035234899327,\n \"f1_stderr\": 0.0021908023180713283\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5074980268350434,\n\
\ \"acc_stderr\": 0.01405090552122858\n }\n}\n```"
repo_url: https://huggingface.co/golaxy/gogpt-560m
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_14T16_13_28.692590
path:
- '**/details_harness|drop|3_2023-10-14T16-13-28.692590.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-14T16-13-28.692590.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_14T16_13_28.692590
path:
- '**/details_harness|gsm8k|5_2023-10-14T16-13-28.692590.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-14T16-13-28.692590.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_14T16_13_28.692590
path:
- '**/details_harness|winogrande|5_2023-10-14T16-13-28.692590.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-14T16-13-28.692590.parquet'
- config_name: results
data_files:
- split: 2023_10_14T16_13_28.692590
path:
- results_2023-10-14T16-13-28.692590.parquet
- split: latest
path:
- results_2023-10-14T16-13-28.692590.parquet
---
# Dataset Card for Evaluation run of golaxy/gogpt-560m
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/golaxy/gogpt-560m
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [golaxy/gogpt-560m](https://huggingface.co/golaxy/gogpt-560m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_golaxy__gogpt-560m",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-14T16:13:28.692590](https://huggingface.co/datasets/open-llm-leaderboard/details_golaxy__gogpt-560m/blob/main/results_2023-10-14T16-13-28.692590.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each task in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.0382760067114094,
"em_stderr": 0.001964844510611307,
"f1": 0.06699035234899327,
"f1_stderr": 0.0021908023180713283,
"acc": 0.2537490134175217,
"acc_stderr": 0.00702545276061429
},
"harness|drop|3": {
"em": 0.0382760067114094,
"em_stderr": 0.001964844510611307,
"f1": 0.06699035234899327,
"f1_stderr": 0.0021908023180713283
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5074980268350434,
"acc_stderr": 0.01405090552122858
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,032 | [
[
-0.0201263427734375,
-0.05218505859375,
0.012847900390625,
0.0174713134765625,
-0.01398468017578125,
0.0031147003173828125,
-0.0221405029296875,
-0.0156402587890625,
0.035675048828125,
0.037506103515625,
-0.048095703125,
-0.05804443359375,
-0.04534912109375,
... |
Greenvs/frzzcp-test | 2023-10-14T16:27:13.000Z | [
"region:us"
] | Greenvs | null | null | 0 | 0 | 2023-10-14T16:19:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jonyroy/student-poratal | 2023-11-01T16:11:49.000Z | [
"region:us"
] | jonyroy | null | null | 0 | 0 | 2023-10-14T16:35:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ndesainer/Ndesainer-test | 2023-10-14T16:50:06.000Z | [
"region:us"
] | ndesainer | null | null | 0 | 0 | 2023-10-14T16:46:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
AchyuthGamer/ImMagician-FineTune-1 | 2023-10-14T16:49:41.000Z | [
"region:us"
] | AchyuthGamer | null | null | 0 | 0 | 2023-10-14T16:49:41 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sordonia/id-maxD1000 | 2023-10-14T17:00:25.000Z | [
"region:us"
] | sordonia | null | null | 0 | 0 | 2023-10-14T17:00:10 | ## max_context_length: 128
## max_documents_per_subject: 1000
| 62 | [
[
-0.037017822265625,
-0.0286407470703125,
0.056854248046875,
0.0750732421875,
-0.03857421875,
-0.04071044921875,
-0.01641845703125,
0.0075225830078125,
0.007099151611328125,
0.045562744140625,
-0.0169219970703125,
-0.04742431640625,
-0.06866455078125,
0.00650... |
xBarti/Konopsky | 2023-10-14T17:45:49.000Z | [
"region:us"
] | xBarti | null | null | 0 | 0 | 2023-10-14T17:42:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
k5shin3/sd-configs-1 | 2023-10-27T07:28:50.000Z | [
"region:us"
] | k5shin3 | null | null | 0 | 0 | 2023-10-14T18:03:44 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
garcianacho/BindingDB | 2023-10-14T19:16:05.000Z | [
"region:us"
] | garcianacho | null | null | 0 | 0 | 2023-10-14T19:11:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
KpoperBr/Jennie | 2023-10-14T19:21:26.000Z | [
"region:us"
] | KpoperBr | null | null | 0 | 0 | 2023-10-14T19:16:11 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |