id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 6.67k ⌀ | citation stringlengths 0 10.7k ⌀ | likes int64 0 3.66k | downloads int64 0 8.89M | created timestamp[us] | card stringlengths 11 977k | card_len int64 11 977k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|
open-llm-leaderboard/details_meta-llama__Llama-2-13b-chat-hf | 2023-10-14T19:39:38.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-14T19:39:30 | ---
pretty_name: Evaluation run of meta-llama/Llama-2-13b-chat-hf
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_meta-llama__Llama-2-13b-chat-hf\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-14T19:39:26.636545](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-13b-chat-hf/blob/main/results_2023-10-14T19-39-26.636545.json) (note
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.1782718120805369,\n\
\ \"em_stderr\": 0.003919630092588375,\n \"f1\": 0.2387195889261742,\n\
\ \"f1_stderr\": 0.003944947017182046,\n \"acc\": 0.448727630233375,\n\
\ \"acc_stderr\": 0.011074189612085313\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.1782718120805369,\n \"em_stderr\": 0.003919630092588375,\n\
\ \"f1\": 0.2387195889261742,\n \"f1_stderr\": 0.003944947017182046\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.15238817285822592,\n \
\ \"acc_stderr\": 0.009899572254794204\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.745067087608524,\n \"acc_stderr\": 0.012248806969376422\n\
\ }\n}\n```"
repo_url: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_14T19_39_26.636545
path:
- '**/details_harness|drop|3_2023-10-14T19-39-26.636545.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-14T19-39-26.636545.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_14T19_39_26.636545
path:
- '**/details_harness|gsm8k|5_2023-10-14T19-39-26.636545.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-14T19-39-26.636545.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_14T19_39_26.636545
path:
- '**/details_harness|winogrande|5_2023-10-14T19-39-26.636545.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-14T19-39-26.636545.parquet'
- config_name: results
data_files:
- split: 2023_10_14T19_39_26.636545
path:
- results_2023-10-14T19-39-26.636545.parquet
- split: latest
path:
- results_2023-10-14T19-39-26.636545.parquet
---
# Dataset Card for Evaluation run of meta-llama/Llama-2-13b-chat-hf
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_meta-llama__Llama-2-13b-chat-hf",
"harness_winogrande_5",
split="train")
```
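The timestamped split names (e.g. `2023_10_14T19_39_26.636545`) appear to encode the run timestamp with `-` and `:` replaced by `_`. A small sketch of parsing them back into datetimes (an inference from the config names above, not a documented API):

```python
from datetime import datetime

def split_name_to_datetime(split_name: str) -> datetime:
    """Convert a split name like '2023_10_14T19_39_26.636545' to a datetime.

    Assumes the convention seen in this repo's configs: '-' and ':' in the
    run timestamp are replaced by '_' to form the split name.
    """
    date_part, time_part = split_name.split("T")
    iso = date_part.replace("_", "-") + "T" + time_part.replace("_", ":")
    return datetime.strptime(iso, "%Y-%m-%dT%H:%M:%S.%f")

print(split_name_to_datetime("2023_10_14T19_39_26.636545"))
# 2023-10-14 19:39:26.636545
```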
## Latest results
These are the [latest results from run 2023-10-14T19:39:26.636545](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-13b-chat-hf/blob/main/results_2023-10-14T19-39-26.636545.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.1782718120805369,
"em_stderr": 0.003919630092588375,
"f1": 0.2387195889261742,
"f1_stderr": 0.003944947017182046,
"acc": 0.448727630233375,
"acc_stderr": 0.011074189612085313
},
"harness|drop|3": {
"em": 0.1782718120805369,
"em_stderr": 0.003919630092588375,
"f1": 0.2387195889261742,
"f1_stderr": 0.003944947017182046
},
"harness|gsm8k|5": {
"acc": 0.15238817285822592,
"acc_stderr": 0.009899572254794204
},
"harness|winogrande|5": {
"acc": 0.745067087608524,
"acc_stderr": 0.012248806969376422
}
}
```
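The `all` block above appears to be an unweighted mean over the tasks reporting each metric (an inference from the numbers, not documented behavior); for instance, the aggregate `acc` is the average of the GSM8K and WinoGrande accuracies:

```python
# Per-task accuracies taken from the results above.
gsm8k_acc = 0.15238817285822592
winogrande_acc = 0.745067087608524

# Unweighted mean over the two tasks that report "acc".
all_acc = (gsm8k_acc + winogrande_acc) / 2
print(all_acc)  # ~0.448727630233375, matching the "all" block above
```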
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,251 | [
[
-0.028717041015625,
-0.0521240234375,
0.016571044921875,
0.0252227783203125,
-0.0192718505859375,
0.0199432373046875,
-0.023651123046875,
-0.0193023681640625,
0.0391845703125,
0.037872314453125,
-0.05804443359375,
-0.0682373046875,
-0.05450439453125,
0.01940... |
Nadinegp/pharoh4 | 2023-10-14T20:29:40.000Z | [
"region:us"
] | Nadinegp | null | null | 0 | 0 | 2023-10-14T20:29:18 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
zhen-dong-nexusflow/toolllm_multiapi | 2023-10-14T20:54:37.000Z | [
"region:us"
] | zhen-dong-nexusflow | null | null | 0 | 0 | 2023-10-14T20:30:14 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
JWBickel/NewTestament_Pericopes | 2023-10-26T15:35:01.000Z | [
"size_categories:1K<n<10K",
"language:en",
"KJV Bible New Testament NT Pericope Chunked",
"region:us"
] | JWBickel | null | null | 0 | 0 | 2023-10-14T20:33:16 | ---
language:
- en
tags:
- KJV Bible New Testament NT Pericope Chunked
pretty_name: KJV NT by Pericope - Chunked
size_categories:
- 1K<n<10K
---
This is the KJV New Testament in JSON. It's grouped by pericope, and it's chunked to roughly 400 characters.
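A rough sketch of how such ~400-character chunking could work, greedily packing whole verses (a hypothetical illustration, not the exact script used to build this dataset):

```python
def chunk_pericope(verses, max_chars=400):
    """Greedily pack whole verses into chunks of roughly max_chars characters."""
    chunks, current = [], ""
    for verse in verses:
        # Start a new chunk when adding this verse would exceed the budget.
        if current and len(current) + 1 + len(verse) > max_chars:
            chunks.append(current)
            current = verse
        else:
            current = (current + " " + verse).strip()
    if current:
        chunks.append(current)
    return chunks

verses = ["In the beginning was the Word," * 3] * 5  # dummy verses, 90 chars each
print([len(c) for c in chunk_pericope(verses)])  # [363, 90]
```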
A similar file has the verse text all together. | 301 | [
[
-0.0222015380859375,
-0.037017822265625,
0.034027099609375,
0.038970947265625,
-0.053924560546875,
0.038330078125,
0.0145263671875,
-0.0218963623046875,
0.049835205078125,
0.0665283203125,
-0.031036376953125,
-0.030609130859375,
-0.056304931640625,
0.0438232... |
PhucDucAnh/Scannet200_subset10 | 2023-10-14T21:05:31.000Z | [
"region:us"
] | PhucDucAnh | null | null | 0 | 0 | 2023-10-14T21:05:31 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_cerebras__Cerebras-GPT-590M | 2023-10-14T22:11:19.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-14T22:11:11 | ---
pretty_name: Evaluation run of cerebras/Cerebras-GPT-590M
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [cerebras/Cerebras-GPT-590M](https://huggingface.co/cerebras/Cerebras-GPT-590M)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_cerebras__Cerebras-GPT-590M\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-14T22:11:07.408754](https://huggingface.co/datasets/open-llm-leaderboard/details_cerebras__Cerebras-GPT-590M/blob/main/results_2023-10-14T22-11-07.408754.json) (note
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001153523489932886,\n\
\ \"em_stderr\": 0.00034761798968571054,\n \"f1\": 0.039916107382550345,\n\
\ \"f1_stderr\": 0.001153929680724628,\n \"acc\": 0.24300057504519282,\n\
\ \"acc_stderr\": 0.007948184376446\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.001153523489932886,\n \"em_stderr\": 0.00034761798968571054,\n\
\ \"f1\": 0.039916107382550345,\n \"f1_stderr\": 0.001153929680724628\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.004548900682335102,\n \
\ \"acc_stderr\": 0.0018535550440036204\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.48145224940805054,\n \"acc_stderr\": 0.014042813708888378\n\
\ }\n}\n```"
repo_url: https://huggingface.co/cerebras/Cerebras-GPT-590M
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_14T22_11_07.408754
path:
- '**/details_harness|drop|3_2023-10-14T22-11-07.408754.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-14T22-11-07.408754.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_14T22_11_07.408754
path:
- '**/details_harness|gsm8k|5_2023-10-14T22-11-07.408754.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-14T22-11-07.408754.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_14T22_11_07.408754
path:
- '**/details_harness|winogrande|5_2023-10-14T22-11-07.408754.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-14T22-11-07.408754.parquet'
- config_name: results
data_files:
- split: 2023_10_14T22_11_07.408754
path:
- results_2023-10-14T22-11-07.408754.parquet
- split: latest
path:
- results_2023-10-14T22-11-07.408754.parquet
---
# Dataset Card for Evaluation run of cerebras/Cerebras-GPT-590M
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/cerebras/Cerebras-GPT-590M
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [cerebras/Cerebras-GPT-590M](https://huggingface.co/cerebras/Cerebras-GPT-590M) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_cerebras__Cerebras-GPT-590M",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-14T22:11:07.408754](https://huggingface.co/datasets/open-llm-leaderboard/details_cerebras__Cerebras-GPT-590M/blob/main/results_2023-10-14T22-11-07.408754.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001153523489932886,
"em_stderr": 0.00034761798968571054,
"f1": 0.039916107382550345,
"f1_stderr": 0.001153929680724628,
"acc": 0.24300057504519282,
"acc_stderr": 0.007948184376446
},
"harness|drop|3": {
"em": 0.001153523489932886,
"em_stderr": 0.00034761798968571054,
"f1": 0.039916107382550345,
"f1_stderr": 0.001153929680724628
},
"harness|gsm8k|5": {
"acc": 0.004548900682335102,
"acc_stderr": 0.0018535550440036204
},
"harness|winogrande|5": {
"acc": 0.48145224940805054,
"acc_stderr": 0.014042813708888378
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,233 | [
[
-0.0290374755859375,
-0.05078125,
0.0178985595703125,
0.020904541015625,
-0.00611114501953125,
0.0093536376953125,
-0.031494140625,
-0.00966644287109375,
0.0296478271484375,
0.0396728515625,
-0.0472412109375,
-0.0673828125,
-0.0535888671875,
0.00655746459960... |
Aijackpot/NewDark | 2023-10-14T23:15:47.000Z | [
"region:us"
] | Aijackpot | null | null | 0 | 0 | 2023-10-14T23:15:47 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
ContextualAI/squad_v2_neighbors | 2023-10-14T23:37:40.000Z | [
"region:us"
] | ContextualAI | null | null | 0 | 0 | 2023-10-14T23:37:35 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: neighbor
dtype: string
splits:
- name: validation
num_bytes: 15990227
num_examples: 11873
download_size: 3943454
dataset_size: 15990227
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
# Dataset Card for "squad_v2_neighbors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 719 | [
[
-0.03546142578125,
-0.01445770263671875,
0.0190277099609375,
0.0243377685546875,
-0.004161834716796875,
0.01177215576171875,
0.0325927734375,
-0.0255889892578125,
0.056304931640625,
0.0256195068359375,
-0.07708740234375,
-0.0428466796875,
-0.033905029296875,
... |
trappy/ditspsyditsduck | 2023-10-15T02:22:28.000Z | [
"region:us"
] | trappy | null | null | 0 | 0 | 2023-10-15T02:22:28 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_meta-llama__Llama-2-7b-chat-hf | 2023-10-15T02:34:27.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T02:34:19 | ---
pretty_name: Evaluation run of meta-llama/Llama-2-7b-chat-hf
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_meta-llama__Llama-2-7b-chat-hf\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T02:34:15.484281](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-7b-chat-hf/blob/main/results_2023-10-15T02-34-15.484281.json) (note
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.06763842281879194,\n\
\ \"em_stderr\": 0.0025717489509556085,\n \"f1\": 0.13085570469798627,\n\
\ \"f1_stderr\": 0.0028825856446422905,\n \"acc\": 0.39549166962367155,\n\
\ \"acc_stderr\": 0.009921949302668327\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.06763842281879194,\n \"em_stderr\": 0.0025717489509556085,\n\
\ \"f1\": 0.13085570469798627,\n \"f1_stderr\": 0.0028825856446422905\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07354056103108415,\n \
\ \"acc_stderr\": 0.0071898357543652685\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7174427782162589,\n \"acc_stderr\": 0.012654062850971384\n\
\ }\n}\n```"
repo_url: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T02_34_15.484281
path:
- '**/details_harness|drop|3_2023-10-15T02-34-15.484281.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T02-34-15.484281.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T02_34_15.484281
path:
- '**/details_harness|gsm8k|5_2023-10-15T02-34-15.484281.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T02-34-15.484281.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T02_34_15.484281
path:
- '**/details_harness|winogrande|5_2023-10-15T02-34-15.484281.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T02-34-15.484281.parquet'
- config_name: results
data_files:
- split: 2023_10_15T02_34_15.484281
path:
- results_2023-10-15T02-34-15.484281.parquet
- split: latest
path:
- results_2023-10-15T02-34-15.484281.parquet
---
# Dataset Card for Evaluation run of meta-llama/Llama-2-7b-chat-hf
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_meta-llama__Llama-2-7b-chat-hf",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T02:34:15.484281](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Llama-2-7b-chat-hf/blob/main/results_2023-10-15T02-34-15.484281.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.06763842281879194,
"em_stderr": 0.0025717489509556085,
"f1": 0.13085570469798627,
"f1_stderr": 0.0028825856446422905,
"acc": 0.39549166962367155,
"acc_stderr": 0.009921949302668327
},
"harness|drop|3": {
"em": 0.06763842281879194,
"em_stderr": 0.0025717489509556085,
"f1": 0.13085570469798627,
"f1_stderr": 0.0028825856446422905
},
"harness|gsm8k|5": {
"acc": 0.07354056103108415,
"acc_stderr": 0.0071898357543652685
},
"harness|winogrande|5": {
"acc": 0.7174427782162589,
"acc_stderr": 0.012654062850971384
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,263 | [
[
-0.0289459228515625,
-0.051300048828125,
0.0169677734375,
0.0250701904296875,
-0.021209716796875,
0.0199127197265625,
-0.0229949951171875,
-0.0193939208984375,
0.038482666015625,
0.039031982421875,
-0.055816650390625,
-0.0687255859375,
-0.054656982421875,
0.... |
erhwenkuo/squad-cmrc2018-zhtw | 2023-10-15T04:52:32.000Z | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-sa-4.0",
"region:us"
] | erhwenkuo | null | null | 0 | 0 | 2023-10-15T03:22:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 14839890
num_examples: 10142
- name: validation
num_bytes: 4976411
num_examples: 3219
- name: test
num_bytes: 1534360
num_examples: 1002
download_size: 4781898
dataset_size: 21350661
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- zh
size_categories:
- 10K<n<100K
---
# Dataset Card for "squad-cmrc2018-zhtw"
## Dataset Summary
[CMRC 2018](https://hfl-rc.github.io/cmrc2018/) is the dataset used in the shared task of the Second Evaluation Workshop on Chinese Machine Reading Comprehension (CMRC 2018).
It is a span-extraction dataset for Chinese machine reading comprehension, created to increase linguistic diversity in the field. It consists of nearly 20,000 real questions annotated by human experts on Wikipedia paragraphs.
It also includes an annotated challenge set containing questions that require comprehensive understanding of the full context and multi-sentence reasoning.
Original data sources:
- https://hfl-rc.github.io/cmrc2018/
- https://github.com/ymcui/cmrc2018
## Data Download and Cleaning
1. Download the [cmrc2018](https://huggingface.co/datasets/cmrc2018) dataset
2. Use [OpenCC](https://github.com/yichen0831/opencc-python) to convert Simplified Chinese to Traditional Chinese
3. Use Python regular expressions to clean stray characters left in `context`, `question`, and `answer`
4. Recompute the character offsets in `answers.answer_start` from `answers.text`
5. Upload to the Hugging Face Hub with Hugging Face Datasets
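The offset-recomputation step above can be sketched as follows (a hypothetical illustration, not the exact cleaning script; it assumes each answer still occurs verbatim in the cleaned context):

```python
def recompute_answer_start(example):
    """Recompute answers.answer_start as character offsets into the context."""
    example["answers"]["answer_start"] = [
        # str.find returns the first occurrence, or -1 if the answer text
        # no longer appears verbatim after cleaning.
        example["context"].find(text)
        for text in example["answers"]["text"]
    ]
    return example

sample = {
    "context": "The quick brown fox jumps over the lazy dog.",
    "question": "What jumps over the dog?",
    "answers": {"text": ["brown fox"], "answer_start": [0]},
}
print(recompute_answer_start(sample)["answers"]["answer_start"])  # [10]
```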
## Dataset Structure
An example looks like this:
```
{
"id":"DEV_1889_QUERY_0",
"context":"巴士底廣場是法國首都巴黎的一個廣場是法國大革命的重要紀念地方。過去是巴士底獄所在地直到攻佔巴士底獄隨後在法國革命期間的1789年7月14日到1790年7月14日之間被徹底破壞沒有留下任何痕跡。這個廣場跨巴黎市的3個區:第四區、第十一區和第十二區。這個廣場和周邊地區簡稱為“巴士底”。立於廣場中心的七月圓柱由路易-菲利普一世興建於1833年到1840年是為了紀念1830年的七月革命。其他顯著的特徵包括巴士底歌劇院、巴士底地鐵站以及一段聖馬丁運河。在1984年以前歌劇院所在的地方曾經是巴士底火車站。這個廣場經常舉辦音樂會或類似活動。巴士底的東北部擁有許多咖啡館、酒吧、夜總會和音樂廳夜生活頗為熱鬧。由於這個廣場具有相當的歷史意義也經常用於政治示威包括大規模的2006年3月28日法國勞工抗議。在巴士底廣場交匯的道路有聖安託萬路、聖安託萬市郊路、亨利四世大道、里昂路、勒努瓦大道、博馬舍大道等。",
"question":"巴士底廣場是哪場革命的重要紀念地方?",
"answers":{
"text":[
"法國大革命"
],
"answer_start":[
18
]
}
}
```
## Data Fields
The data fields are the same for all splits:
- `id`: (string) identifier
- `context`: (string) the passage that provides the context for the question
- `question`: (string) the question
- `answers`: the answer to the question (extracted from the context); in the SQuAD format, `text` and `answer_start` are lists
  - `text`: list(string) the answer(s) to the question
  - `answer_start`: list(int) the character position(s) of the answer(s) within `context`
## Data Splits
This dataset contains the following splits:
- `train`: 10,142 examples
- `test`: 1,002 examples
- `validation`: 3,219 examples
## How to Use
```python
from datasets import load_dataset
# Use the split="train" argument to select which split to load
dataset = load_dataset("erhwenkuo/squad-cmrc2018-zhtw", split="train")
```
For a detailed tutorial, see:
- [NLP Course – Question Answering](https://huggingface.co/learn/nlp-course/zh-TW/chapter7/7?fw=pt)
## Licensing Information
CC BY-SA 4.0
## Citation
```
@inproceedings{cui-emnlp2019-cmrc2018,
title = "A Span-Extraction Dataset for {C}hinese Machine Reading Comprehension",
author = "Cui, Yiming and
Liu, Ting and
Che, Wanxiang and
Xiao, Li and
Chen, Zhipeng and
Ma, Wentao and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-1600",
doi = "10.18653/v1/D19-1600",
pages = "5886--5891",
}
```
| 3,472 | [
[
-0.053436279296875,
-0.059234619140625,
0.0177764892578125,
0.020843505859375,
-0.0271759033203125,
-0.01416778564453125,
-0.0278167724609375,
-0.032440185546875,
0.030914306640625,
0.018402099609375,
-0.06024169921875,
-0.045867919921875,
-0.0244140625,
0.0... |
bzy-080408/LTFLS-DATA | 2023-10-15T03:39:00.000Z | [
"region:us"
] | bzy-080408 | null | null | 0 | 0 | 2023-10-15T03:39:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
adityarra07/train_data_15000 | 2023-10-15T04:18:56.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 0 | 2023-10-15T04:17:41 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 2527685083.524488
num_examples: 15000
- name: test
num_bytes: 33702566.98032651
num_examples: 200
download_size: 2525375368
dataset_size: 2561387650.5048146
---
# Dataset Card for "train_data_15000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 562 | [
[
-0.04583740234375,
-0.00365447998046875,
0.01178741455078125,
0.03564453125,
-0.004230499267578125,
-0.0143280029296875,
0.0273590087890625,
-0.006008148193359375,
0.05670166015625,
0.0272979736328125,
-0.05303955078125,
-0.030059814453125,
-0.039581298828125,
... |
adityarra07/train_data_25000 | 2023-10-15T04:22:33.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 0 | 2023-10-15T04:20:33 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 4212813572.5408134
num_examples: 25000
- name: test
num_bytes: 33702421.98032651
num_examples: 200
download_size: 4159760175
dataset_size: 4246515994.52114
---
# Dataset Card for "train_data_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 561 | [
[
-0.04559326171875,
0.002475738525390625,
0.01151275634765625,
0.033355712890625,
-0.00655364990234375,
-0.0131072998046875,
0.0259246826171875,
-0.0059051513671875,
0.05145263671875,
0.0285491943359375,
-0.051422119140625,
-0.03155517578125,
-0.04302978515625,
... |
adityarra07/train_data_30000 | 2023-10-15T04:25:05.000Z | [
"region:us"
] | adityarra07 | null | null | 0 | 0 | 2023-10-15T04:22:33 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 5055383607.048976
num_examples: 30000
- name: test
num_bytes: 33702525.98032651
num_examples: 200
download_size: 4975038674
dataset_size: 5089086133.029303
---
# Dataset Card for "train_data_30000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 561 | [
[
-0.0458984375,
0.004772186279296875,
0.015167236328125,
0.034942626953125,
-0.0009150505065917969,
-0.01568603515625,
0.0278472900390625,
-0.0027637481689453125,
0.051788330078125,
0.024749755859375,
-0.054718017578125,
-0.0310211181640625,
-0.037353515625,
... |
crumb/textbook-codex-oai-0 | 2023-10-15T05:12:26.000Z | [
"region:us"
] | crumb | null | null | 0 | 0 | 2023-10-15T05:10:55 | ---
dataset_info:
features:
- name: text
dtype: string
- name: src
dtype: string
- name: src_col
dtype: string
- name: model
dtype: string
splits:
- name: train
num_bytes: 100059225.10238275
num_examples: 29265
download_size: 521517482
dataset_size: 100059225.10238275
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "textbook-codex-oai-0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 575 | [
[
-0.0419921875,
-0.01102447509765625,
0.013702392578125,
-0.009796142578125,
-0.00914764404296875,
-0.02178955078125,
0.02276611328125,
-0.00960540771484375,
0.0482177734375,
0.03497314453125,
-0.047119140625,
-0.05841064453125,
-0.0197906494140625,
-0.021133... |
open-llm-leaderboard/details_beaugogh__Llama2-7b-openorca-mc-v1 | 2023-10-15T05:51:42.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T05:51:34 | ---
pretty_name: Evaluation run of beaugogh/Llama2-7b-openorca-mc-v1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [beaugogh/Llama2-7b-openorca-mc-v1](https://huggingface.co/beaugogh/Llama2-7b-openorca-mc-v1)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_beaugogh__Llama2-7b-openorca-mc-v1\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T05:51:30.480988](https://huggingface.co/datasets/open-llm-leaderboard/details_beaugogh__Llama2-7b-openorca-mc-v1/blob/main/results_2023-10-15T05-51-30.480988.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks; you can find each in its config, with a \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0012583892617449664,\n\
\ \"em_stderr\": 0.00036305608931189984,\n \"f1\": 0.054172609060402964,\n\
\ \"f1_stderr\": 0.0013304749578777586,\n \"acc\": 0.38787336798763505,\n\
\ \"acc_stderr\": 0.0089323131312436\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0012583892617449664,\n \"em_stderr\": 0.00036305608931189984,\n\
\ \"f1\": 0.054172609060402964,\n \"f1_stderr\": 0.0013304749578777586\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04094010614101592,\n \
\ \"acc_stderr\": 0.0054580767962943404\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7348066298342542,\n \"acc_stderr\": 0.01240654946619286\n\
\ }\n}\n```"
repo_url: https://huggingface.co/beaugogh/Llama2-7b-openorca-mc-v1
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T05_51_30.480988
path:
- '**/details_harness|drop|3_2023-10-15T05-51-30.480988.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T05-51-30.480988.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T05_51_30.480988
path:
- '**/details_harness|gsm8k|5_2023-10-15T05-51-30.480988.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T05-51-30.480988.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T05_51_30.480988
path:
- '**/details_harness|winogrande|5_2023-10-15T05-51-30.480988.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T05-51-30.480988.parquet'
- config_name: results
data_files:
- split: 2023_10_15T05_51_30.480988
path:
- results_2023-10-15T05-51-30.480988.parquet
- split: latest
path:
- results_2023-10-15T05-51-30.480988.parquet
---
# Dataset Card for Evaluation run of beaugogh/Llama2-7b-openorca-mc-v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/beaugogh/Llama2-7b-openorca-mc-v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [beaugogh/Llama2-7b-openorca-mc-v1](https://huggingface.co/beaugogh/Llama2-7b-openorca-mc-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_beaugogh__Llama2-7b-openorca-mc-v1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T05:51:30.480988](https://huggingface.co/datasets/open-llm-leaderboard/details_beaugogh__Llama2-7b-openorca-mc-v1/blob/main/results_2023-10-15T05-51-30.480988.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in its config, with a "latest" split for each eval):
```python
{
"all": {
"em": 0.0012583892617449664,
"em_stderr": 0.00036305608931189984,
"f1": 0.054172609060402964,
"f1_stderr": 0.0013304749578777586,
"acc": 0.38787336798763505,
"acc_stderr": 0.0089323131312436
},
"harness|drop|3": {
"em": 0.0012583892617449664,
"em_stderr": 0.00036305608931189984,
"f1": 0.054172609060402964,
"f1_stderr": 0.0013304749578777586
},
"harness|gsm8k|5": {
"acc": 0.04094010614101592,
"acc_stderr": 0.0054580767962943404
},
"harness|winogrande|5": {
"acc": 0.7348066298342542,
"acc_stderr": 0.01240654946619286
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,321 | [
[
-0.03204345703125,
-0.053070068359375,
0.0147552490234375,
0.0182342529296875,
-0.009368896484375,
0.01021575927734375,
-0.0249786376953125,
-0.0169219970703125,
0.035125732421875,
0.0426025390625,
-0.048858642578125,
-0.07012939453125,
-0.04376220703125,
0.... |
weiiiii0622/HW1_part1 | 2023-10-15T06:09:23.000Z | [
"region:us"
] | weiiiii0622 | null | null | 0 | 0 | 2023-10-15T06:09:23 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Pampkinus/Volodymyr-Zelenskyj | 2023-10-15T07:16:18.000Z | [
"license:openrail",
"region:us"
] | Pampkinus | null | null | 0 | 0 | 2023-10-15T06:48:28 | ---
license: openrail
---
Faceset of the current president of Ukraine: 8480 aligned pictures (JPG) of his face from the latest UN meeting
https://cs.wikipedia.org/wiki/Volodymyr_Zelenskyj | 187 | [
[
-0.04522705078125,
-0.019989013671875,
0.0295562744140625,
-0.0013141632080078125,
-0.040374755859375,
0.0005826950073242188,
0.0310211181640625,
-0.0191192626953125,
0.0296783447265625,
0.061309814453125,
-0.058624267578125,
-0.037322998046875,
-0.0093307495117... |
yuchengFang/ADL-HW1 | 2023-10-15T07:22:12.000Z | [
"region:us"
] | yuchengFang | null | null | 0 | 0 | 2023-10-15T07:19:17 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01494598388671875,
0.057159423828125,
0.028839111328125,
-0.0350341796875,
0.04656982421875,
0.052490234375,
0.00504302978515625,
0.0513916015625,
0.016998291015625,
-0.0521240234375,
-0.0149993896484375,
-0.06036376953125,
0.03790283... |
kopyl/llama2-7b-classifier | 2023-10-15T07:21:19.000Z | [
"region:us"
] | kopyl | null | null | 0 | 0 | 2023-10-15T07:21:19 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mhenrichsen/sql | 2023-10-15T07:32:47.000Z | [
"region:us"
] | mhenrichsen | null | null | 0 | 0 | 2023-10-15T07:32:44 | ---
dataset_info:
features:
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 17385628
num_examples: 78356
download_size: 7203703
dataset_size: 17385628
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "sql"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 509 | [
[
-0.035308837890625,
-0.032867431640625,
0.018218994140625,
0.0119476318359375,
-0.0218505859375,
0.003429412841796875,
0.0163116455078125,
-0.00861358642578125,
0.06390380859375,
0.04180908203125,
-0.0640869140625,
-0.0552978515625,
-0.0198822021484375,
-0.0... |
chompk/tydiqa-goldp-th | 2023-10-15T07:58:21.000Z | [
"region:us"
] | chompk | null | null | 0 | 0 | 2023-10-15T07:48:48 | # TyDiQA-GoldP-Th
This dataset contains the Thai split that was removed from TyDiQA, obtained from [Khalidalt's TyDiQA Dataset](https://huggingface.co/datasets/khalidalt/tydiqa-goldp).
This version applies the following additional preprocessing:
1. Converts byte-level indices into character-level indices
2. Fixes any mismatches between answer spans and the actual text
3. Re-splits the train/development sets so that no context passage leaks between them
4. Deduplicates questions that share the same context passage
## Dataset Format
The dataset is formatted for compatibility with the [XTREME benchmark](https://github.com/google-research/xtreme) format. The data follows this pattern:
```json
{
"version": "TyDiQA-GoldP-1.1-for-SQuAD-1.1",
"data": [
{
            "paragraphs": [{
"context": [PASSAGE CONTEXT HERE],
"qas": [{
"answers": [{
"answer_start": [CONTEXT START CHAR INDEX OF ANSWER],
"text": [TEXT SPAN FROM CONTEXT],
}],
"question": [QUESTION],
"id": [ID]
}]
}],
},
...
]
}
```
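As a quick sanity check, a minimal record in this shape (placeholder values; using the standard SQuAD key spelling `paragraphs` — adjust the key if the released files spell it differently) can be validated in a few lines:

```python
# Minimal illustrative record in the SQuAD-1.1-style shape described above.
record = {
    "version": "TyDiQA-GoldP-1.1-for-SQuAD-1.1",
    "data": [{
        "paragraphs": [{
            "context": "Bangkok is the capital of Thailand.",
            "qas": [{
                "answers": [{"answer_start": 26, "text": "Thailand"}],
                "question": "Which country is Bangkok the capital of?",
                "id": "q-0001",
            }],
        }],
    }],
}

# Check that every answer span matches the context at its character offset.
for article in record["data"]:
    for paragraph in article["paragraphs"]:
        context = paragraph["context"]
        for qa in paragraph["qas"]:
            for answer in qa["answers"]:
                start = answer["answer_start"]
                assert context[start:start + len(answer["text"])] == answer["text"]
```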
## Author
Chompakorn Chaksangchaichot | 1,137 | [
[
-0.0263824462890625,
-0.032379150390625,
0.007572174072265625,
0.0173797607421875,
-0.0212860107421875,
0.02496337890625,
-0.0016803741455078125,
-0.0170135498046875,
0.032989501953125,
0.0625,
-0.07177734375,
-0.054534912109375,
-0.0237579345703125,
0.02044... |
gianma/eur-lex-sum-llama2-32k-jsonl | 2023-10-15T07:59:03.000Z | [
"region:us"
] | gianma | null | null | 0 | 0 | 2023-10-15T07:57:06 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bobbybelajar/AmazonMixedNoTest | 2023-10-15T08:01:40.000Z | [
"region:us"
] | bobbybelajar | null | null | 0 | 0 | 2023-10-15T08:01:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
jubalm/ethers-doc | 2023-10-15T08:47:44.000Z | [
"region:us"
] | jubalm | null | null | 0 | 0 | 2023-10-15T08:47:44 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
GGital/CAI_ENG_NEW_01 | 2023-10-15T09:12:30.000Z | [
"arxiv:1910.09700",
"region:us"
] | GGital | null | null | 0 | 0 | 2023-10-15T09:12:05 | ---
library_name: peft
base_model: decapoda-research/llama-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
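For reference, the settings listed above can be expressed as a `transformers` `BitsAndBytesConfig` — a sketch, assuming the constructor argument names mirror the config keys (the `bnb_4bit_*` values are inert here because `load_in_4bit` is False):

```python
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above (8-bit loading).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)
```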
### Framework versions
- PEFT 0.6.0.dev0
| 5,452 | [
[
-0.044586181640625,
-0.043365478515625,
0.0302734375,
0.006214141845703125,
-0.0193328857421875,
-0.022064208984375,
0.004848480224609375,
-0.0408935546875,
0.00325775146484375,
0.04608154296875,
-0.05487060546875,
-0.048065185546875,
-0.04193115234375,
-0.0... |
olaaaiap/tsar-2022 | 2023-10-15T09:16:00.000Z | [
"region:us"
] | olaaaiap | null | null | 0 | 0 | 2023-10-15T09:15:34 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_beaugogh__Llama2-13b-sharegpt4 | 2023-10-15T09:30:50.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T09:30:41 | ---
pretty_name: Evaluation run of beaugogh/Llama2-13b-sharegpt4
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [beaugogh/Llama2-13b-sharegpt4](https://huggingface.co/beaugogh/Llama2-13b-sharegpt4)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_beaugogh__Llama2-13b-sharegpt4\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T09:30:37.851108](https://huggingface.co/datasets/open-llm-leaderboard/details_beaugogh__Llama2-13b-sharegpt4/blob/main/results_2023-10-15T09-30-37.851108.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks; you can find each in its config, with a \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001153523489932886,\n\
\ \"em_stderr\": 0.00034761798968571027,\n \"f1\": 0.05843015939597327,\n\
\ \"f1_stderr\": 0.0013137444686186492,\n \"acc\": 0.4200579473220307,\n\
\ \"acc_stderr\": 0.009967774108676528\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.001153523489932886,\n \"em_stderr\": 0.00034761798968571027,\n\
\ \"f1\": 0.05843015939597327,\n \"f1_stderr\": 0.0013137444686186492\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08794541319181198,\n \
\ \"acc_stderr\": 0.007801162197487711\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7521704814522494,\n \"acc_stderr\": 0.012134386019865348\n\
\ }\n}\n```"
repo_url: https://huggingface.co/beaugogh/Llama2-13b-sharegpt4
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T09_30_37.851108
path:
- '**/details_harness|drop|3_2023-10-15T09-30-37.851108.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T09-30-37.851108.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T09_30_37.851108
path:
- '**/details_harness|gsm8k|5_2023-10-15T09-30-37.851108.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T09-30-37.851108.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T09_30_37.851108
path:
- '**/details_harness|winogrande|5_2023-10-15T09-30-37.851108.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T09-30-37.851108.parquet'
- config_name: results
data_files:
- split: 2023_10_15T09_30_37.851108
path:
- results_2023-10-15T09-30-37.851108.parquet
- split: latest
path:
- results_2023-10-15T09-30-37.851108.parquet
---
# Dataset Card for Evaluation run of beaugogh/Llama2-13b-sharegpt4
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/beaugogh/Llama2-13b-sharegpt4
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [beaugogh/Llama2-13b-sharegpt4](https://huggingface.co/beaugogh/Llama2-13b-sharegpt4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_beaugogh__Llama2-13b-sharegpt4",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T09:30:37.851108](https://huggingface.co/datasets/open-llm-leaderboard/details_beaugogh__Llama2-13b-sharegpt4/blob/main/results_2023-10-15T09-30-37.851108.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in its config, with a "latest" split for each eval):
```python
{
"all": {
"em": 0.001153523489932886,
"em_stderr": 0.00034761798968571027,
"f1": 0.05843015939597327,
"f1_stderr": 0.0013137444686186492,
"acc": 0.4200579473220307,
"acc_stderr": 0.009967774108676528
},
"harness|drop|3": {
"em": 0.001153523489932886,
"em_stderr": 0.00034761798968571027,
"f1": 0.05843015939597327,
"f1_stderr": 0.0013137444686186492
},
"harness|gsm8k|5": {
"acc": 0.08794541319181198,
"acc_stderr": 0.007801162197487711
},
"harness|winogrande|5": {
"acc": 0.7521704814522494,
"acc_stderr": 0.012134386019865348
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,267 | [
[
-0.0285797119140625,
-0.05255126953125,
0.018218994140625,
0.0212249755859375,
-0.008453369140625,
0.01238250732421875,
-0.026123046875,
-0.01593017578125,
0.031402587890625,
0.037628173828125,
-0.051513671875,
-0.0662841796875,
-0.04925537109375,
0.01359558... |
weiiiii0622/HW1_Part2 | 2023-10-15T10:04:41.000Z | [
"region:us"
] | weiiiii0622 | null | null | 0 | 0 | 2023-10-15T09:44:57 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.0149993896484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.005069732666015625,
0.051361083984375,
0.01702880859375,
-0.0521240234375,
-0.01494598388671875,
-0.06036376953125,
0.0379028320... |
khangmacon/cyberwiki | 2023-10-15T17:04:27.000Z | [
"region:us"
] | khangmacon | null | null | 0 | 0 | 2023-10-15T10:20:48 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 344199475.0
num_examples: 31170
download_size: 193770769
dataset_size: 344199475.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cyberwiki"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.0606689453125,
-0.0213470458984375,
0.0142669677734375,
0.01181793212890625,
-0.018218994140625,
0.004024505615234375,
0.009033203125,
-0.0265655517578125,
0.0709228515625,
0.0310821533203125,
-0.07147216796875,
-0.042022705078125,
-0.035736083984375,
-0.... |
open-llm-leaderboard/details_Yhyu13__chimera-inst-chat-13b-hf | 2023-10-15T10:30:44.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T10:30:36 | ---
pretty_name: Evaluation run of Yhyu13/chimera-inst-chat-13b-hf
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Yhyu13/chimera-inst-chat-13b-hf](https://huggingface.co/Yhyu13/chimera-inst-chat-13b-hf)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Yhyu13__chimera-inst-chat-13b-hf\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T10:30:32.183057](https://huggingface.co/datasets/open-llm-leaderboard/details_Yhyu13__chimera-inst-chat-13b-hf/blob/main/results_2023-10-15T10-30-32.183057.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks; you can find each in its config, with a \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.006606543624161074,\n\
\ \"em_stderr\": 0.0008296357389921881,\n \"f1\": 0.08297609060402691,\n\
\ \"f1_stderr\": 0.0018006483858768888,\n \"acc\": 0.4107112190060514,\n\
\ \"acc_stderr\": 0.009943586099857618\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.006606543624161074,\n \"em_stderr\": 0.0008296357389921881,\n\
\ \"f1\": 0.08297609060402691,\n \"f1_stderr\": 0.0018006483858768888\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.08188021228203184,\n \
\ \"acc_stderr\": 0.00755233852771695\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.739542225730071,\n \"acc_stderr\": 0.012334833671998287\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Yhyu13/chimera-inst-chat-13b-hf
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T10_30_32.183057
path:
- '**/details_harness|drop|3_2023-10-15T10-30-32.183057.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T10-30-32.183057.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T10_30_32.183057
path:
- '**/details_harness|gsm8k|5_2023-10-15T10-30-32.183057.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T10-30-32.183057.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T10_30_32.183057
path:
- '**/details_harness|winogrande|5_2023-10-15T10-30-32.183057.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T10-30-32.183057.parquet'
- config_name: results
data_files:
- split: 2023_10_15T10_30_32.183057
path:
- results_2023-10-15T10-30-32.183057.parquet
- split: latest
path:
- results_2023-10-15T10-30-32.183057.parquet
---
# Dataset Card for Evaluation run of Yhyu13/chimera-inst-chat-13b-hf
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Yhyu13/chimera-inst-chat-13b-hf
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Yhyu13/chimera-inst-chat-13b-hf](https://huggingface.co/Yhyu13/chimera-inst-chat-13b-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Yhyu13__chimera-inst-chat-13b-hf",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T10:30:32.183057](https://huggingface.co/datasets/open-llm-leaderboard/details_Yhyu13__chimera-inst-chat-13b-hf/blob/main/results_2023-10-15T10-30-32.183057.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.006606543624161074,
"em_stderr": 0.0008296357389921881,
"f1": 0.08297609060402691,
"f1_stderr": 0.0018006483858768888,
"acc": 0.4107112190060514,
"acc_stderr": 0.009943586099857618
},
"harness|drop|3": {
"em": 0.006606543624161074,
"em_stderr": 0.0008296357389921881,
"f1": 0.08297609060402691,
"f1_stderr": 0.0018006483858768888
},
"harness|gsm8k|5": {
"acc": 0.08188021228203184,
"acc_stderr": 0.00755233852771695
},
"harness|winogrande|5": {
"acc": 0.739542225730071,
"acc_stderr": 0.012334833671998287
}
}
```
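The `*_stderr` fields above can be turned into rough 95% confidence intervals with a normal approximation. A minimal sketch (the helper name is illustrative, not part of the dataset; the values are copied from the results block above):

```python
# Approximate 95% confidence interval from a point estimate and its
# standard error, using the normal approximation (z = 1.96).
def confidence_interval(point: float, stderr: float, z: float = 1.96) -> tuple:
    return (point - z * stderr, point + z * stderr)

# Winogrande accuracy and stderr copied from the results above.
lo, hi = confidence_interval(0.739542225730071, 0.012334833671998287)
print(f"winogrande acc 95% CI: {lo:.4f} .. {hi:.4f}")
```

This is only a quick heuristic; the leaderboard itself reports the standard errors directly.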
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,283 |
open-llm-leaderboard/details_TehVenom__DiffMerge_Pygmalion_Main-onto-V8P4 | 2023-10-15T10:36:08.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T10:36:00 | ---
pretty_name: Evaluation run of TehVenom/DiffMerge_Pygmalion_Main-onto-V8P4
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TehVenom/DiffMerge_Pygmalion_Main-onto-V8P4](https://huggingface.co/TehVenom/DiffMerge_Pygmalion_Main-onto-V8P4)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TehVenom__DiffMerge_Pygmalion_Main-onto-V8P4\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T10:35:56.777835](https://huggingface.co/datasets/open-llm-leaderboard/details_TehVenom__DiffMerge_Pygmalion_Main-onto-V8P4/blob/main/results_2023-10-15T10-35-56.777835.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002726510067114094,\n\
\ \"em_stderr\": 0.0005340111700415918,\n \"f1\": 0.05529781879194656,\n\
\ \"f1_stderr\": 0.0013448797167935412,\n \"acc\": 0.31823545497683364,\n\
\ \"acc_stderr\": 0.008263105361288367\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.002726510067114094,\n \"em_stderr\": 0.0005340111700415918,\n\
\ \"f1\": 0.05529781879194656,\n \"f1_stderr\": 0.0013448797167935412\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.011372251705837756,\n \
\ \"acc_stderr\": 0.002920666198788727\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6250986582478295,\n \"acc_stderr\": 0.013605544523788008\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TehVenom/DiffMerge_Pygmalion_Main-onto-V8P4
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T10_35_56.777835
path:
- '**/details_harness|drop|3_2023-10-15T10-35-56.777835.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T10-35-56.777835.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T10_35_56.777835
path:
- '**/details_harness|gsm8k|5_2023-10-15T10-35-56.777835.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T10-35-56.777835.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T10_35_56.777835
path:
- '**/details_harness|winogrande|5_2023-10-15T10-35-56.777835.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T10-35-56.777835.parquet'
- config_name: results
data_files:
- split: 2023_10_15T10_35_56.777835
path:
- results_2023-10-15T10-35-56.777835.parquet
- split: latest
path:
- results_2023-10-15T10-35-56.777835.parquet
---
# Dataset Card for Evaluation run of TehVenom/DiffMerge_Pygmalion_Main-onto-V8P4
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TehVenom/DiffMerge_Pygmalion_Main-onto-V8P4
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TehVenom/DiffMerge_Pygmalion_Main-onto-V8P4](https://huggingface.co/TehVenom/DiffMerge_Pygmalion_Main-onto-V8P4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TehVenom__DiffMerge_Pygmalion_Main-onto-V8P4",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T10:35:56.777835](https://huggingface.co/datasets/open-llm-leaderboard/details_TehVenom__DiffMerge_Pygmalion_Main-onto-V8P4/blob/main/results_2023-10-15T10-35-56.777835.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.002726510067114094,
"em_stderr": 0.0005340111700415918,
"f1": 0.05529781879194656,
"f1_stderr": 0.0013448797167935412,
"acc": 0.31823545497683364,
"acc_stderr": 0.008263105361288367
},
"harness|drop|3": {
"em": 0.002726510067114094,
"em_stderr": 0.0005340111700415918,
"f1": 0.05529781879194656,
"f1_stderr": 0.0013448797167935412
},
"harness|gsm8k|5": {
"acc": 0.011372251705837756,
"acc_stderr": 0.002920666198788727
},
"harness|winogrande|5": {
"acc": 0.6250986582478295,
"acc_stderr": 0.013605544523788008
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,435 |
open-llm-leaderboard/details_stabilityai__StableBeluga2 | 2023-10-15T10:41:15.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T10:41:07 | ---
pretty_name: Evaluation run of stabilityai/StableBeluga2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [stabilityai/StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_stabilityai__StableBeluga2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T10:41:03.838240](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__StableBeluga2/blob/main/results_2023-10-15T10-41-03.838240.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4326761744966443,\n\
\ \"em_stderr\": 0.005073838660621812,\n \"f1\": 0.5027527265100691,\n\
\ \"f1_stderr\": 0.0048086605803724005,\n \"acc\": 0.5940617757706712,\n\
\ \"acc_stderr\": 0.01188966924347996\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.4326761744966443,\n \"em_stderr\": 0.005073838660621812,\n\
\ \"f1\": 0.5027527265100691,\n \"f1_stderr\": 0.0048086605803724005\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.35860500379075055,\n \
\ \"acc_stderr\": 0.013210317364134026\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.829518547750592,\n \"acc_stderr\": 0.010569021122825897\n\
\ }\n}\n```"
repo_url: https://huggingface.co/stabilityai/StableBeluga2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T10_41_03.838240
path:
- '**/details_harness|drop|3_2023-10-15T10-41-03.838240.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T10-41-03.838240.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T10_41_03.838240
path:
- '**/details_harness|gsm8k|5_2023-10-15T10-41-03.838240.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T10-41-03.838240.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T10_41_03.838240
path:
- '**/details_harness|winogrande|5_2023-10-15T10-41-03.838240.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T10-41-03.838240.parquet'
- config_name: results
data_files:
- split: 2023_10_15T10_41_03.838240
path:
- results_2023-10-15T10-41-03.838240.parquet
- split: latest
path:
- results_2023-10-15T10-41-03.838240.parquet
---
# Dataset Card for Evaluation run of stabilityai/StableBeluga2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/stabilityai/StableBeluga2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [stabilityai/StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_stabilityai__StableBeluga2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T10:41:03.838240](https://huggingface.co/datasets/open-llm-leaderboard/details_stabilityai__StableBeluga2/blob/main/results_2023-10-15T10-41-03.838240.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4326761744966443,
"em_stderr": 0.005073838660621812,
"f1": 0.5027527265100691,
"f1_stderr": 0.0048086605803724005,
"acc": 0.5940617757706712,
"acc_stderr": 0.01188966924347996
},
"harness|drop|3": {
"em": 0.4326761744966443,
"em_stderr": 0.005073838660621812,
"f1": 0.5027527265100691,
"f1_stderr": 0.0048086605803724005
},
"harness|gsm8k|5": {
"acc": 0.35860500379075055,
"acc_stderr": 0.013210317364134026
},
"harness|winogrande|5": {
"acc": 0.829518547750592,
"acc_stderr": 0.010569021122825897
}
}
```
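The aggregate `acc` in the `all` block appears to be the unweighted mean of the per-task accuracies. A quick sanity check with the numbers copied from the results block above:

```python
# Per-task accuracies copied from the results block above.
gsm8k_acc = 0.35860500379075055
winogrande_acc = 0.829518547750592

# The reported "all" accuracy matches their unweighted mean.
all_acc = (gsm8k_acc + winogrande_acc) / 2
print(all_acc)  # ≈ 0.5940617757706712, as reported above
```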
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,195 |
geoava/prostate128_nnUNet_3d_fullres_50_epoch | 2023-10-15T19:19:20.000Z | [
"region:us"
] | geoava | null | null | 0 | 0 | 2023-10-15T11:37:34 | Entry not found | 15 |
atom92/medical_healthwa_3.0 | 2023-10-15T12:53:16.000Z | [
"region:us"
] | atom92 | null | null | 0 | 0 | 2023-10-15T12:53:13 | ---
dataset_info:
features:
- name: text
struct:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2710809
num_examples: 7360
download_size: 586464
dataset_size: 2710809
---
# Dataset Card for "medical_healthwa_3.0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 392 |
DigirentEnterprise/Translate_all_mixed_dataset | 2023-10-15T13:16:23.000Z | [
"region:us"
] | DigirentEnterprise | null | null | 0 | 0 | 2023-10-15T13:05:25 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: input
dtype: string
- name: ouput
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 1543490120
num_examples: 3370045
download_size: 950032312
dataset_size: 1543490120
---
# Dataset Card for "Translate_all_mixed_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 576 |
orgcatorg/russia-ukraine-cnbc | 2023-10-16T20:11:24.000Z | [
"region:us"
] | orgcatorg | null | null | 0 | 0 | 2023-10-15T13:27:37 | ---
dataset_info:
features:
- name: '@type'
dtype: string
- name: headline
dtype: string
- name: url
dtype: string
- name: dateModified
dtype: string
- name: datePublished
dtype: string
- name: mainEntityOfPage
dtype: string
- name: articleBody
dtype: string
- name: publisher
dtype: string
- name: image
dtype: string
- name: thumbnailUrl
dtype: string
- name: video
dtype: string
splits:
- name: train
num_bytes: 6035507
num_examples: 2757
download_size: 0
dataset_size: 6035507
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "russia-ukraine-cnbc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 828 |
Kia43/ipadapter | 2023-10-15T13:38:42.000Z | [
"region:us"
] | Kia43 | null | null | 0 | 0 | 2023-10-15T13:38:42 | Entry not found | 15 |
nekofura/Project_Terra | 2023-10-22T16:35:47.000Z | [
"region:us"
] | nekofura | null | null | 0 | 0 | 2023-10-15T14:13:52 | Entry not found | 15 |
aburns4/WikiWeb2M | 2023-10-15T16:48:48.000Z | [
"license:cc-by-sa-3.0",
"arxiv:2305.03668",
"region:us"
] | aburns4 | null | null | 0 | 0 | 2023-10-15T14:45:20 | ---
license: cc-by-sa-3.0
---
# The Wikipedia Webpage 2M (WikiWeb2M) Dataset
We present the WikiWeb2M dataset consisting of over 2 million English
Wikipedia articles. Our released dataset includes all of the text content on
each page, links to the images present, and structure metadata such as which
section each text and image element comes from.
This dataset is a contribution from our [paper](https://arxiv.org/abs/2305.03668)
`A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding`.
The dataset is stored as gzipped TFRecord files which can be downloaded here or on our [GitHub repository](https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md).
## WikiWeb2M Statistics
WikiWeb2M is the first multimodal open source dataset to include all page
content in a unified format. Here we provide aggregate information about the
WikiWeb2M dataset as well as the number of samples available with each of the
fine-tuning tasks we design from it.
| Number of | Train | Validation | Test |
| ---- | ---- | ---- | ---- |
| Pages | 1,803,225 | 100,475 | 100,833 |
| Sections | 10,519,294 | 585,651 | 588,552 |
| Unique Images | 3,867,277 | 284,975 | 286,390 |
| Total Images | 5,340,708 | 299,057 | 300,666 |
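For a rough sense of scale, the table above implies per-page averages. A small sketch with the train-split counts copied from the table:

```python
# Train-split counts copied from the statistics table above.
train_pages = 1_803_225
train_sections = 10_519_294
train_total_images = 5_340_708

print(f"{train_sections / train_pages:.2f} sections per page")
print(f"{train_total_images / train_pages:.2f} images per page")
```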
Our data processing and filtering choices for each fine-tuning task are
described in the paper.
| Downstream Task Samples | Train | Validation | Test |
| ---- | ---- | ---- | ---- |
| Page Description Generation | 1,435,263 | 80,103 | 80,339 |
| Section Summarization | 3,082,031 | 172,984 | 173,591 |
| Contextual Image Captioning | 2,222,814 | 124,703 | 124,188 |
## Data and Task Examples
Here we illustrate how a single webpage can be processed into the three tasks we
study: page description generation, section summarization, and contextual image
captioning. The paper includes multiple Wikipedia article examples.

## Usage
### TFRecord Features
Here we provide the names of the fields included in the dataset, their
TensorFlow Sequence Example type, their data type, and a brief description.
| Feature | Sequence Example Type | DType | Description |
| ---- | ---- | ---- | ---- |
| `split` | Context | string | Dataset split this page contributes to (e.g., train, val, or test) |
| `page_url` | Context | string | Wikipedia page URL |
| `page_title` | Context | string | Wikipedia page title, title of the article |
| `raw_page_description` | Context | string | Wikipedia page description, which is typically the same or very similar to the content of the first (root) section of the article |
| `clean_page_description` | Context | string | `raw_page_description` but with newline and tab characters removed; this provides the exact target text for our page description generation task |
| `page_contains_images` | Context | int64 | Whether the Wikipedia page has images after our cleaning and processing steps |
| `page_content_sections_without_table_list` | Context | int64 | Number of content sections with text or images that do not contain a list or table. This field can be used to reproduce data filtering for page description generation |
| `is_page_description_sample` | Context | int64 | Whether a page is used as a sample for the page description fine-tuning task |
| `section_title` | Sequence | string | Titles of each section on the Wikipedia page, in order |
| `section_index` | Sequence | int64 | Index of each section on the Wikipedia page, in order |
| `section_depth` | Sequence | int64 | Depth of each section on the Wikipedia page, in order |
| `section_heading_level` | Sequence | int64 | Heading level of each section on the Wikipedia page, in order |
| `section_subsection_index` | Sequence | int64 | Subsection indices, grouped by section in order |
| `section_parent_index` | Sequence | int64 | The parent section index of each section, in order |
| `section_text` | Sequence | string | The body text of each section, in order |
| `is_section_summarization_sample` | Sequence | int64 | Whether a section is used as a sample for the section summarization fine-tuning task |
| `section_raw_1st_sentence` | Sequence | string | The processed out first sentence of each section, in order |
| `section_clean_1st_sentence` | Sequence | string | The same as `section_raw_1st_sentence` but with newline and tab characters removed. This provides the exact target text for our section summarization task |
| `section_rest_sentence` | Sequence | string | The processed out sentences following the first sentence of each section, in order |
| `section_contains_table_or_list` | Sequence | int64 | Whether section content contains a table or list; this field is needed to be able to reproduce sample filtering for section summarization |
| `section_contains_images` | Sequence | int64 | Whether each section has images after our cleaning and processing steps, in order |
| `is_image_caption_sample` | Sequence | int64 | Whether an image is used as a sample for the image captioning fine-tuning task |
| `section_image_url` | Sequence | string | Image URLs, grouped by section in order |
| `section_image_mime_type` | Sequence | string | Image mime type, grouped by section in order |
| `section_image_width` | Sequence | int64 | Image width, grouped by section in order |
| `section_image_height` | Sequence | int64 | Image height, grouped by section in order |
| `section_image_in_wit` | Sequence | int64 | Whether an image was originally contained in the WIT dataset, grouped by section in order |
| `section_image_raw_attr_desc` | Sequence | string | Image attribution description, grouped by section in order |
| `section_image_clean_attr_desc` | Sequence | string | The English only processed portions of the attribution description |
| `section_image_raw_ref_desc` | Sequence | string | Image reference description, grouped by section in order |
| `section_image_clean_ref_desc` | Sequence | string | The same as `section_image_raw_ref_desc` but with newline and tab characters removed; this provides the exact target text for our image captioning task |
| `section_image_alt_text` | Sequence | string | Image alt-text, grouped by section in order |
| `section_image_captions` | Sequence | string | Comma separated concatenated text from alt-text, attribution, and reference descriptions; this is how captions are formatted as input text when used |
### Loading the Data
Here we provide a small code snippet for how to load the TFRecord files. First,
load any necessary packages.
```python
import numpy as np
import glob
import tensorflow.compat.v1 as tf
from collections import defaultdict
```
Next, define a data parser class.
```python
class DataParser():
  # Note: `path` (no default) must come before `filepath` (which has a
  # default), otherwise the signature is a SyntaxError.
  def __init__(self,
               path: str,
               filepath: str = 'wikiweb2m-*'):
    self.filepath = filepath
    self.path = path
    self.data = defaultdict(list)
def parse_data(self):
context_feature_description = {
'split': tf.io.FixedLenFeature([], dtype=tf.string),
'page_title': tf.io.FixedLenFeature([], dtype=tf.string),
'page_url': tf.io.FixedLenFeature([], dtype=tf.string),
'clean_page_description': tf.io.FixedLenFeature([], dtype=tf.string),
'raw_page_description': tf.io.FixedLenFeature([], dtype=tf.string),
'is_page_description_sample': tf.io.FixedLenFeature([], dtype=tf.int64),
'page_contains_images': tf.io.FixedLenFeature([], dtype=tf.int64),
'page_content_sections_without_table_list': tf.io.FixedLenFeature([] , dtype=tf.int64)
}
sequence_feature_description = {
'is_section_summarization_sample': tf.io.VarLenFeature(dtype=tf.int64),
'section_title': tf.io.VarLenFeature(dtype=tf.string),
'section_index': tf.io.VarLenFeature(dtype=tf.int64),
'section_depth': tf.io.VarLenFeature(dtype=tf.int64),
'section_heading_level': tf.io.VarLenFeature(dtype=tf.int64),
'section_subsection_index': tf.io.VarLenFeature(dtype=tf.int64),
'section_parent_index': tf.io.VarLenFeature(dtype=tf.int64),
'section_text': tf.io.VarLenFeature(dtype=tf.string),
'section_clean_1st_sentence': tf.io.VarLenFeature(dtype=tf.string),
'section_raw_1st_sentence': tf.io.VarLenFeature(dtype=tf.string),
'section_rest_sentence': tf.io.VarLenFeature(dtype=tf.string),
'is_image_caption_sample': tf.io.VarLenFeature(dtype=tf.int64),
'section_image_url': tf.io.VarLenFeature(dtype=tf.string),
'section_image_mime_type': tf.io.VarLenFeature(dtype=tf.string),
'section_image_width': tf.io.VarLenFeature(dtype=tf.int64),
'section_image_height': tf.io.VarLenFeature(dtype=tf.int64),
'section_image_in_wit': tf.io.VarLenFeature(dtype=tf.int64),
'section_contains_table_or_list': tf.io.VarLenFeature(dtype=tf.int64),
'section_image_captions': tf.io.VarLenFeature(dtype=tf.string),
'section_image_alt_text': tf.io.VarLenFeature(dtype=tf.string),
'section_image_raw_attr_desc': tf.io.VarLenFeature(dtype=tf.string),
'section_image_clean_attr_desc': tf.io.VarLenFeature(dtype=tf.string),
'section_image_raw_ref_desc': tf.io.VarLenFeature(dtype=tf.string),
'section_image_clean_ref_desc': tf.io.VarLenFeature(dtype=tf.string),
'section_contains_images': tf.io.VarLenFeature(dtype=tf.int64)
}
def _parse_function(example_proto):
return tf.io.parse_single_sequence_example(example_proto,
context_feature_description,
sequence_feature_description)
suffix = '.tfrecord*'
data_path = glob.Glob(self.path + self.filepath + suffix)
raw_dataset = tf.data.TFRecordDataset(data_path, compression_type='GZIP')
parsed_dataset = raw_dataset.map(_parse_function)
for d in parsed_dataset:
split = d[0]['split'].numpy().decode()
self.data[split].append(d)
```
Then you can run the following to parse the dataset.
```python
# Point `path` at the directory containing the downloaded shards.
parser = DataParser(path='<path_to_your_data>/')
parser.parse_data()
print((len(parser.data['train']), len(parser.data['val']), len(parser.data['test'])))
```
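Note that `VarLenFeature` fields come back from `tf.io.parse_single_sequence_example` as `SparseTensor`s, so they need to be densified before the byte strings can be decoded. The following is a minimal, self-contained sketch of that round trip using a tiny in-memory `SequenceExample` with made-up values (not real WikiWeb2M data):

```python
import tensorflow.compat.v1 as tf

# Build a toy SequenceExample that mimics one WikiWeb2M page.
example = tf.train.SequenceExample()
example.context.feature['split'].bytes_list.value.append(b'train')
titles = example.feature_lists.feature_list['section_title']
for t in [b'Introduction', b'History']:
    titles.feature.add().bytes_list.value.append(t)

context, sequence = tf.io.parse_single_sequence_example(
    example.SerializeToString(),
    context_features={'split': tf.io.FixedLenFeature([], dtype=tf.string)},
    sequence_features={'section_title': tf.io.VarLenFeature(dtype=tf.string)})

# Sequence features parse to SparseTensors; densify, then decode the bytes.
dense_titles = tf.sparse.to_dense(sequence['section_title'], default_value=b'')
section_titles = [t.decode() for t in dense_titles.numpy().reshape(-1)]
split = context['split'].numpy().decode()
print(split, section_titles)
```

The same `tf.sparse.to_dense` pattern applies to any of the `section_*` features when iterating over `parser.data`.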
### Models
Our full attention, transient global, and prefix global experiments were run
using the [LongT5](https://github.com/google-research/longt5) code base.
## How to Cite
If you extend or use this work, please cite the [paper](https://arxiv.org/abs/2305.03668) where it was
introduced:
```
@inproceedings{
burns2023wiki,
title={A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding},
author={Andrea Burns and Krishna Srinivasan and Joshua Ainslie and Geoff Brown and Bryan A. Plummer and Kate Saenko and Jianmo Ni and Mandy Guo},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year={2023},
url={https://openreview.net/forum?id=rwcLHjtUmn}
}
``` | 11,007 | [
[
-0.05255126953125,
-0.042022705078125,
0.01261138916015625,
0.01416015625,
-0.0286102294921875,
-0.0207366943359375,
-0.019927978515625,
-0.01490020751953125,
0.007083892822265625,
0.042083740234375,
-0.044281005859375,
-0.060577392578125,
-0.03228759765625,
... |
hails/llema_math_majk_outputs | 2023-10-16T14:36:28.000Z | [
"region:us"
] | hails | null | null | 0 | 0 | 2023-10-15T15:00:25 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
toybox2019/chihaya_v11 | 2023-10-15T15:13:41.000Z | [
"region:us"
] | toybox2019 | null | null | 0 | 0 | 2023-10-15T15:12:21 | Entry not found | 15 | [
[
-0.021392822265625,
-0.0149688720703125,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.046539306640625,
0.052520751953125,
0.005046844482421875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.01495361328125,
-0.060333251953125,
0.03... |
autoevaluate/autoeval-eval-acronym_identification-default-a39997-95250146317 | 2023-10-15T15:17:11.000Z | [
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-15T15:17:07 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
toybox2019/chihaya_v12 | 2023-10-15T15:26:10.000Z | [
"region:us"
] | toybox2019 | null | null | 0 | 0 | 2023-10-15T15:25:54 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
orgcatorg/israel-hamas-gaza-cnbc | 2023-10-16T20:12:35.000Z | [
"region:us"
] | orgcatorg | null | null | 0 | 0 | 2023-10-15T15:32:36 | ---
dataset_info:
features:
- name: '@type'
dtype: string
- name: headline
dtype: string
- name: url
dtype: string
- name: dateModified
dtype: string
- name: datePublished
dtype: string
- name: mainEntityOfPage
dtype: string
- name: articleBody
dtype: string
- name: publisher
dtype: string
- name: image
dtype: string
- name: thumbnailUrl
dtype: string
- name: video
dtype: string
splits:
- name: train
num_bytes: 668826
num_examples: 335
download_size: 0
dataset_size: 668826
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "israel-hamas-gaza-cnbc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 828 | [
[
-0.037322998046875,
-0.002490997314453125,
0.0268707275390625,
0.02789306640625,
-0.0372314453125,
0.006717681884765625,
0.0168914794921875,
-0.007694244384765625,
0.0634765625,
0.0269012451171875,
-0.04986572265625,
-0.0718994140625,
-0.055816650390625,
-0.... |
ostapeno/qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl | 2023-10-15T15:38:20.000Z | [
"region:us"
] | ostapeno | null | null | 0 | 0 | 2023-10-15T15:38:04 | ---
dataset_info:
features:
- name: id
dtype: string
- name: context
dtype: string
- name: docno
dtype: string
- name: subject
dtype: string
- name: icl_examples
sequence: string
- name: author_instr
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: author_response
dtype: string
- name: normalized_cumul_logprob_response
dtype: float64
splits:
- name: formal_logic
num_bytes: 16064538.691369945
num_examples: 5673
- name: machine_learning
num_bytes: 20632157.395614564
num_examples: 7286
- name: global_facts
num_bytes: 22234929.984952725
num_examples: 7852
- name: abstract_algebra
num_bytes: 24030261.82530678
num_examples: 8486
- name: high_school_physics
num_bytes: 22147145.62051901
num_examples: 7821
- name: college_biology
num_bytes: 20867192.95200161
num_examples: 7369
- name: high_school_government_and_politics
num_bytes: 21133377.798994165
num_examples: 7463
- name: prehistory
num_bytes: 22368022.408449005
num_examples: 7899
- name: security_studies
num_bytes: 19454147.85998793
num_examples: 6870
- name: sociology
num_bytes: 22217939.462804265
num_examples: 7846
download_size: 42555653
dataset_size: 211149713.99999994
---
# Dataset Card for "qa-platy_icl5_clen128_maxD-1_maxC10000_0.jsonl"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,547 | [
[
-0.04986572265625,
-0.0019168853759765625,
0.0166778564453125,
0.030242919921875,
-0.023040771484375,
0.009490966796875,
0.02178955078125,
-0.0001150369644165039,
0.04266357421875,
0.04180908203125,
-0.04443359375,
-0.06927490234375,
-0.041748046875,
0.01018... |
1aurent/BACH | 2023-10-15T17:07:11.000Z | [
"task_categories:image-classification",
"size_categories:n<1K",
"license:cc-by-nc-nd-4.0",
"biology",
"Histopathology",
"Histology",
"Digital Pathology",
"Breast Cancer",
"region:us"
] | 1aurent | null | null | 0 | 0 | 2023-10-15T15:53:43 | ---
license: cc-by-nc-nd-4.0
size_categories:
- n<1K
task_categories:
- image-classification
tags:
- biology
- Histopathology
- Histology
- Digital Pathology
- Breast Cancer
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Benign
'1': InSitu
'2': Invasive
'3': Normal
'4': Unknown
splits:
- name: train
num_bytes: 7370596186.0
num_examples: 400
- name: test
num_bytes: 1887476013.0
num_examples: 100
download_size: 7727410763
dataset_size: 9258072199.0
---
[](https://doi.org/10.5281/zenodo.3632035)
# BACH Dataset : Grand Challenge on Breast Cancer Histology images
**Homepage**: https://zenodo.org/records/3632035 \
**Homepage**: https://iciar2018-challenge.grand-challenge.org/ \
**Publication Date**: 2019-05-31 \
**License**: [Creative Commons Attribution Non Commercial No Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode) \
**Citation**:
```bibtex
@dataset{polonia_2020_3632035,
author = {Polónia, António and Eloy, Catarina and Aguiar, Paulo},
title = {{BACH Dataset : Grand Challenge on Breast Cancer Histology images}},
month = jan,
year = 2020,
publisher = {Zenodo}
}
```
## Description
The dataset is composed of Hematoxylin and eosin (H&E) stained breast histology microscopy images.
Microscopy images are labelled as normal, benign, in situ carcinoma or invasive carcinoma according to the predominant cancer type in each image.
The annotation was performed by two medical experts and images where there was disagreement were discarded.
Images have the following specifications:
* Color model: R(ed)G(reen)B(lue)
* Size: 2048 x 1536 pixels
* Pixel scale: 0.42 µm x 0.42 µm
* Memory space: 10-20 MB (approx.)
* Type of label: image-wise | 2,072 | [
[
-0.015869140625,
-0.00496673583984375,
0.04052734375,
0.0158233642578125,
-0.047088623046875,
-0.011322021484375,
0.0174560546875,
-0.019500732421875,
0.0169525146484375,
0.045166015625,
-0.04339599609375,
-0.07373046875,
-0.03271484375,
0.016876220703125,
... |
djulian13/Swadesh-list-tagged-East-Slavic | 2023-10-15T16:20:08.000Z | [
"region:us"
] | djulian13 | null | null | 0 | 0 | 2023-10-15T16:06:06 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
fishytorts/taylor_swift_mini_2 | 2023-10-15T16:16:27.000Z | [
"region:us"
] | fishytorts | null | null | 0 | 0 | 2023-10-15T16:16:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_TehVenom__Dolly_Shygmalion-6b | 2023-10-15T16:26:47.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T16:26:39 | ---
pretty_name: Evaluation run of TehVenom/Dolly_Shygmalion-6b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TehVenom/Dolly_Shygmalion-6b](https://huggingface.co/TehVenom/Dolly_Shygmalion-6b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TehVenom__Dolly_Shygmalion-6b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T16:26:35.787063](https://huggingface.co/datasets/open-llm-leaderboard/details_TehVenom__Dolly_Shygmalion-6b/blob/main/results_2023-10-15T16-26-35.787063.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0008389261744966443,\n\
\ \"em_stderr\": 0.0002964962989801232,\n \"f1\": 0.049329907718121055,\n\
\ \"f1_stderr\": 0.001207499751606471,\n \"acc\": 0.33737021840348064,\n\
\ \"acc_stderr\": 0.008672111270767138\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0008389261744966443,\n \"em_stderr\": 0.0002964962989801232,\n\
\ \"f1\": 0.049329907718121055,\n \"f1_stderr\": 0.001207499751606471\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.02122820318423048,\n \
\ \"acc_stderr\": 0.003970449129848635\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6535122336227308,\n \"acc_stderr\": 0.01337377341168564\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TehVenom/Dolly_Shygmalion-6b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T16_26_35.787063
path:
- '**/details_harness|drop|3_2023-10-15T16-26-35.787063.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T16-26-35.787063.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T16_26_35.787063
path:
- '**/details_harness|gsm8k|5_2023-10-15T16-26-35.787063.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T16-26-35.787063.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T16_26_35.787063
path:
- '**/details_harness|winogrande|5_2023-10-15T16-26-35.787063.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T16-26-35.787063.parquet'
- config_name: results
data_files:
- split: 2023_10_15T16_26_35.787063
path:
- results_2023-10-15T16-26-35.787063.parquet
- split: latest
path:
- results_2023-10-15T16-26-35.787063.parquet
---
# Dataset Card for Evaluation run of TehVenom/Dolly_Shygmalion-6b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TehVenom/Dolly_Shygmalion-6b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TehVenom/Dolly_Shygmalion-6b](https://huggingface.co/TehVenom/Dolly_Shygmalion-6b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TehVenom__Dolly_Shygmalion-6b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T16:26:35.787063](https://huggingface.co/datasets/open-llm-leaderboard/details_TehVenom__Dolly_Shygmalion-6b/blob/main/results_2023-10-15T16-26-35.787063.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0008389261744966443,
"em_stderr": 0.0002964962989801232,
"f1": 0.049329907718121055,
"f1_stderr": 0.001207499751606471,
"acc": 0.33737021840348064,
"acc_stderr": 0.008672111270767138
},
"harness|drop|3": {
"em": 0.0008389261744966443,
"em_stderr": 0.0002964962989801232,
"f1": 0.049329907718121055,
"f1_stderr": 0.001207499751606471
},
"harness|gsm8k|5": {
"acc": 0.02122820318423048,
"acc_stderr": 0.003970449129848635
},
"harness|winogrande|5": {
"acc": 0.6535122336227308,
"acc_stderr": 0.01337377341168564
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,255 | [
[
-0.021728515625,
-0.050567626953125,
0.01343536376953125,
0.01593017578125,
-0.0108184814453125,
0.004974365234375,
-0.02679443359375,
-0.01323699951171875,
0.033111572265625,
0.038848876953125,
-0.050262451171875,
-0.07275390625,
-0.051239013671875,
0.01811... |
autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-1f754a-95278146333 | 2023-10-15T16:51:38.000Z | [
"autotrain",
"evaluation",
"region:us"
] | autoevaluate | null | null | 0 | 0 | 2023-10-15T16:50:47 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: Crepot/distilbert-base-uncased-finetuned-squad
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Crepot/distilbert-base-uncased-finetuned-squad
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@emmermarcell](https://huggingface.co/emmermarcell) for evaluating this model. | 1,010 | [
[
-0.039642333984375,
-0.043731689453125,
0.0171051025390625,
0.005237579345703125,
0.00792694091796875,
0.005268096923828125,
0.00780487060546875,
-0.023834228515625,
0.005157470703125,
0.032318115234375,
-0.0855712890625,
-0.01238250732421875,
-0.046478271484375... |
katielink/healthsearchqa_answers | 2023-10-15T17:14:07.000Z | [
"region:us"
] | katielink | null | null | 0 | 0 | 2023-10-15T17:14:06 | ---
dataset_info:
features:
- name: question
dtype: string
- name: gpt-3.5-turbo_response
dtype: string
splits:
- name: train
num_bytes: 182952
num_examples: 140
download_size: 102812
dataset_size: 182952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "healthsearchqa_answers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 501 | [
[
-0.0299224853515625,
-0.01526641845703125,
0.028839111328125,
-0.006805419921875,
-0.004871368408203125,
-0.0016164779663085938,
0.037017822265625,
-0.01309967041015625,
0.06561279296875,
0.0311431884765625,
-0.05694580078125,
-0.046539306640625,
-0.034729003906... |
jackboi/research_assist_2022_2023 | 2023-10-15T18:36:05.000Z | [
"task_categories:text-generation",
"task_categories:feature-extraction",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | jackboi | null | null | 0 | 0 | 2023-10-15T17:19:45 | ---
license: mit
task_categories:
- text-generation
- feature-extraction
language:
- en
size_categories:
- 10K<n<100K
---
# Dataset Card for Research Publications (Alpaca Format)
This dataset card describes the structured data points encompassing research titles, summaries, and publication dates in the realm of artificial intelligence (AI), machine learning (ML), computer vision and pattern recognition, and neural and evolutionary computing. The data spans research published from early 2022 to October 2023.
## Dataset Details
### Dataset Description
This dataset provides structured data points, capturing research titles, summaries, and publication dates in areas of artificial intelligence, machine learning, computer vision and pattern recognition, and neural and evolutionary computing. The dataset spans publications from early 2022 to October 2023.
- **Curated by:** Jack W.
- **Funded by:** Self
- **Language(s) (NLP):** English
- **License:** MIT
## Uses
### Direct Use
This dataset is designed for fine-tuning machine learning models, specifically in the Llama2 (LoRa) context. The data can be utilized for understanding and summarizing research articles within the mentioned categories, aiding researchers in quickly obtaining insights.
### Out-of-Scope Use
The dataset is not intended for general natural language processing tasks unrelated to the specific research topics covered.
## Dataset Structure
The dataset uses the Alpaca format suitable for Llama2 finetuning. Each data entry is a JSON object containing fields: `instruction`, `input`, and `output`.
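For illustration, a single entry in this format can be built and serialized as follows (the field values here are invented placeholders, not actual rows from the dataset):

```python
import json

# A hypothetical Alpaca-format record; the real dataset's values differ.
entry = {
    "instruction": "Summarize the following research abstract.",
    "input": "We propose a new attention mechanism ...",
    "output": "The paper introduces an attention variant that ...",
}

line = json.dumps(entry)  # one JSON object per data entry
print(line)
```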
## Dataset Creation
### Curation Rationale
The dataset was created to augment a researcher's ability to sift through vast amounts of research data efficiently, providing insights, summaries, and overviews of research topics.
### Source Data
#### Data Collection and Processing
The data was collected from various research publications in the realm of AI, ML, computer vision, and neural computing from early 2022 to October 2023; all information comes from the arXiv API.
Thank you to arXiv for use of its open access interoperability.
#### Who are the source data producers?
Research institutions and researchers produce articles in the specified domains.
### Annotations
Annotations were not provided as part of this dataset.
## Bias, Risks, and Limitations
The dataset may have biases inherent to the selection and summarization of research articles. It might not cover all research in the specified domains or time frame.
### Recommendations
Users should be aware of potential biases and ensure they use the dataset in contexts relevant to the research domains covered.
## Citation
**Arxiv:** https://arxiv.org/
## Glossary
- **Alpaca Format:** A data structure format suitable for Llama2 finetuning.
- **Llama2 (LoRa):** Reference to the machine learning model or platform being used.
## More Information
https://github.com/j-webtek
## Dataset Card Authors
Jack W.
## Dataset Card Contact
**TBD** | 3,016 | [
[
-0.02777099609375,
-0.049957275390625,
0.008148193359375,
0.0174102783203125,
-0.0238189697265625,
-0.0244140625,
-0.0016107559204101562,
-0.041717529296875,
0.0270843505859375,
0.04217529296875,
-0.040496826171875,
-0.061065673828125,
-0.047943115234375,
0.... |
baebee/mojo-code-test | 2023-10-15T17:29:21.000Z | [
"region:us"
] | baebee | null | null | 0 | 0 | 2023-10-15T17:29:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
mesolitica/translated-code-instructions-122k | 2023-10-15T17:31:33.000Z | [
"region:us"
] | mesolitica | null | null | 0 | 0 | 2023-10-15T17:29:50 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
bongo2112/sunbank-Video-Outputs_v1 | 2023-10-15T18:33:59.000Z | [
"region:us"
] | bongo2112 | null | null | 0 | 0 | 2023-10-15T18:28:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16 | 2023-10-15T19:12:51.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T19:12:38 | ---
pretty_name: Evaluation run of bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T19:12:34.050776](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16/blob/main/results_2023-10-15T19-12-34.050776.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.03544463087248322,\n\
\ \"em_stderr\": 0.0018935573437954016,\n \"f1\": 0.08440436241610706,\n\
\ \"f1_stderr\": 0.002470333585036359,\n \"acc\": 0.2841357537490134,\n\
\ \"acc_stderr\": 0.0069604360550053574\n },\n \"harness|drop|3\":\
\ {\n \"em\": 0.03544463087248322,\n \"em_stderr\": 0.0018935573437954016,\n\
\ \"f1\": 0.08440436241610706,\n \"f1_stderr\": 0.002470333585036359\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\"\
: 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5682715074980268,\n\
\ \"acc_stderr\": 0.013920872110010715\n }\n}\n```"
repo_url: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T19_12_34.050776
path:
- '**/details_harness|drop|3_2023-10-15T19-12-34.050776.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T19-12-34.050776.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T19_12_34.050776
path:
- '**/details_harness|gsm8k|5_2023-10-15T19-12-34.050776.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T19-12-34.050776.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T19_12_34.050776
path:
- '**/details_harness|winogrande|5_2023-10-15T19-12-34.050776.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T19-12-34.050776.parquet'
- config_name: results
data_files:
- split: 2023_10_15T19_12_34.050776
path:
- results_2023-10-15T19-12-34.050776.parquet
- split: latest
path:
- results_2023-10-15T19-12-34.050776.parquet
---
# Dataset Card for Evaluation run of bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-PI-8192-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T19:12:34.050776](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__airoboros-33b-gpt4-1.4.1-PI-8192-fp16/blob/main/results_2023-10-15T19-12-34.050776.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.03544463087248322,
"em_stderr": 0.0018935573437954016,
"f1": 0.08440436241610706,
"f1_stderr": 0.002470333585036359,
"acc": 0.2841357537490134,
"acc_stderr": 0.0069604360550053574
},
"harness|drop|3": {
"em": 0.03544463087248322,
"em_stderr": 0.0018935573437954016,
"f1": 0.08440436241610706,
"f1_stderr": 0.002470333585036359
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5682715074980268,
"acc_stderr": 0.013920872110010715
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,402 | [
[
-0.03125,
-0.04986572265625,
0.01546478271484375,
0.0161285400390625,
-0.0108642578125,
0.00771331787109375,
-0.0296478271484375,
-0.01377105712890625,
0.02593994140625,
0.0343017578125,
-0.05194091796875,
-0.0640869140625,
-0.051300048828125,
0.013031005859... |
open-llm-leaderboard/details_chargoddard__llama2-22b | 2023-10-15T19:23:20.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T19:23:11 | ---
pretty_name: Evaluation run of chargoddard/llama2-22b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [chargoddard/llama2-22b](https://huggingface.co/chargoddard/llama2-22b) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_chargoddard__llama2-22b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T19:23:07.867810](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama2-22b/blob/main/results_2023-10-15T19-23-07.867810.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0020973154362416107,\n\
\ \"em_stderr\": 0.00046850650303682974,\n \"f1\": 0.06078334731543612,\n\
\ \"f1_stderr\": 0.0013790362682380892,\n \"acc\": 0.4312689350534026,\n\
\ \"acc_stderr\": 0.010092981888945675\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0020973154362416107,\n \"em_stderr\": 0.00046850650303682974,\n\
\ \"f1\": 0.06078334731543612,\n \"f1_stderr\": 0.0013790362682380892\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09931766489764973,\n \
\ \"acc_stderr\": 0.008238371412683961\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7632202052091555,\n \"acc_stderr\": 0.011947592365207389\n\
\ }\n}\n```"
repo_url: https://huggingface.co/chargoddard/llama2-22b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T19_23_07.867810
path:
- '**/details_harness|drop|3_2023-10-15T19-23-07.867810.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T19-23-07.867810.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T19_23_07.867810
path:
- '**/details_harness|gsm8k|5_2023-10-15T19-23-07.867810.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T19-23-07.867810.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T19_23_07.867810
path:
- '**/details_harness|winogrande|5_2023-10-15T19-23-07.867810.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T19-23-07.867810.parquet'
- config_name: results
data_files:
- split: 2023_10_15T19_23_07.867810
path:
- results_2023-10-15T19-23-07.867810.parquet
- split: latest
path:
- results_2023-10-15T19-23-07.867810.parquet
---
# Dataset Card for Evaluation run of chargoddard/llama2-22b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/chargoddard/llama2-22b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [chargoddard/llama2-22b](https://huggingface.co/chargoddard/llama2-22b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_chargoddard__llama2-22b",
"harness_winogrande_5",
split="train")
```
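As the `configs` listing in the metadata above suggests, each run's dated split name appears to be derived from the run timestamp by replacing the characters that are not valid in split names. A minimal sketch of that mapping (the rule is inferred from this repo's file names, not documented by the harness):

```python
def timestamp_to_split_name(run_timestamp: str) -> str:
    """Map a run timestamp to the dated split name used in this repo.

    Inferred from the listed splits: '-' and ':' become '_', while the
    'T' separator and the fractional-second '.' are kept as-is.
    """
    return run_timestamp.replace("-", "_").replace(":", "_")

# The run shown in this card:
print(timestamp_to_split_name("2023-10-15T19:23:07.867810"))
# → 2023_10_15T19_23_07.867810
```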
## Latest results
These are the [latest results from run 2023-10-15T19:23:07.867810](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__llama2-22b/blob/main/results_2023-10-15T19-23-07.867810.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each eval in its own dated split as well as in the "latest" split):
```python
{
"all": {
"em": 0.0020973154362416107,
"em_stderr": 0.00046850650303682974,
"f1": 0.06078334731543612,
"f1_stderr": 0.0013790362682380892,
"acc": 0.4312689350534026,
"acc_stderr": 0.010092981888945675
},
"harness|drop|3": {
"em": 0.0020973154362416107,
"em_stderr": 0.00046850650303682974,
"f1": 0.06078334731543612,
"f1_stderr": 0.0013790362682380892
},
"harness|gsm8k|5": {
"acc": 0.09931766489764973,
"acc_stderr": 0.008238371412683961
},
"harness|winogrande|5": {
"acc": 0.7632202052091555,
"acc_stderr": 0.011947592365207389
}
}
```
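The `all` block is not an independent measurement: its `acc` is simply the unweighted mean of the per-task accuracies (GSM8K and WinoGrande here, since DROP reports `em`/`f1` instead), and its `em`/`f1` fields mirror the DROP scores. A quick sanity check against the numbers above:

```python
# Per-task accuracies copied from the results block above.
results = {
    "harness|gsm8k|5": {"acc": 0.09931766489764973},
    "harness|winogrande|5": {"acc": 0.7632202052091555},
}

# Unweighted mean of the per-task accuracies.
accs = [task["acc"] for task in results.values()]
mean_acc = sum(accs) / len(accs)

# Matches the "all" -> "acc" value reported above.
assert abs(mean_acc - 0.4312689350534026) < 1e-12
```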
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,187 |
open-llm-leaderboard/details_ziqingyang__chinese-alpaca-2-13b | 2023-10-15T20:22:39.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T20:22:31 | ---
pretty_name: Evaluation run of ziqingyang/chinese-alpaca-2-13b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ziqingyang/chinese-alpaca-2-13b](https://huggingface.co/ziqingyang/chinese-alpaca-2-13b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ziqingyang__chinese-alpaca-2-13b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T20:22:27.142442](https://huggingface.co/datasets/open-llm-leaderboard/details_ziqingyang__chinese-alpaca-2-13b/blob/main/results_2023-10-15T20-22-27.142442.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.32728607382550334,\n\
\ \"em_stderr\": 0.004805279168508311,\n \"f1\": 0.4106134647651026,\n\
\ \"f1_stderr\": 0.004650726360819101,\n \"acc\": 0.4307653965208868,\n\
\ \"acc_stderr\": 0.010243166856230161\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.32728607382550334,\n \"em_stderr\": 0.004805279168508311,\n\
\ \"f1\": 0.4106134647651026,\n \"f1_stderr\": 0.004650726360819101\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10462471569370735,\n \
\ \"acc_stderr\": 0.008430668082029278\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7569060773480663,\n \"acc_stderr\": 0.012055665630431043\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ziqingyang/chinese-alpaca-2-13b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T20_22_27.142442
path:
- '**/details_harness|drop|3_2023-10-15T20-22-27.142442.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T20-22-27.142442.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T20_22_27.142442
path:
- '**/details_harness|gsm8k|5_2023-10-15T20-22-27.142442.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T20-22-27.142442.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T20_22_27.142442
path:
- '**/details_harness|winogrande|5_2023-10-15T20-22-27.142442.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T20-22-27.142442.parquet'
- config_name: results
data_files:
- split: 2023_10_15T20_22_27.142442
path:
- results_2023-10-15T20-22-27.142442.parquet
- split: latest
path:
- results_2023-10-15T20-22-27.142442.parquet
---
# Dataset Card for Evaluation run of ziqingyang/chinese-alpaca-2-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/ziqingyang/chinese-alpaca-2-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [ziqingyang/chinese-alpaca-2-13b](https://huggingface.co/ziqingyang/chinese-alpaca-2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ziqingyang__chinese-alpaca-2-13b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T20:22:27.142442](https://huggingface.co/datasets/open-llm-leaderboard/details_ziqingyang__chinese-alpaca-2-13b/blob/main/results_2023-10-15T20-22-27.142442.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each eval in its own dated split as well as in the "latest" split):
```python
{
"all": {
"em": 0.32728607382550334,
"em_stderr": 0.004805279168508311,
"f1": 0.4106134647651026,
"f1_stderr": 0.004650726360819101,
"acc": 0.4307653965208868,
"acc_stderr": 0.010243166856230161
},
"harness|drop|3": {
"em": 0.32728607382550334,
"em_stderr": 0.004805279168508311,
"f1": 0.4106134647651026,
"f1_stderr": 0.004650726360819101
},
"harness|gsm8k|5": {
"acc": 0.10462471569370735,
"acc_stderr": 0.008430668082029278
},
"harness|winogrande|5": {
"acc": 0.7569060773480663,
"acc_stderr": 0.012055665630431043
}
}
```
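The reported `acc_stderr` values behave like standard errors over the per-item 0/1 scores. For WinoGrande's 1,267-item validation set (both the sample size and the exact formula the harness uses are assumptions here), a simple binomial approximation reproduces the number above to roughly five decimal places:

```python
import math

p = 0.7569060773480663  # winogrande acc reported above
n = 1267                # WinoGrande validation-set size (assumed)

# Binomial standard error with the unbiased-sample-variance denominator.
stderr = math.sqrt(p * (1 - p) / (n - 1))

# Close to the reported acc_stderr of 0.012055665630431043.
assert abs(stderr - 0.012055665630431043) < 1e-4
```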
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,271 |
snazzydoa/Tiny-CropNet | 2023-10-15T20:27:52.000Z | [
"region:us"
] | snazzydoa | null | null | 0 | 0 | 2023-10-15T20:27:40 | Entry not found | 15 |
quixcloudy/Siyeon | 2023-10-15T20:39:33.000Z | [
"region:us"
] | quixcloudy | null | null | 0 | 0 | 2023-10-15T20:36:37 | Entry not found | 15 |
sola1ree/AnneStokes | 2023-10-15T23:23:21.000Z | [
"region:us"
] | sola1ree | null | null | 0 | 0 | 2023-10-15T20:40:23 | This is a dataset based on the works of Anne Stokes. It was made using Pirsus Artstation, which is trained on SD 1.5; the images have been cropped, touched up, and resized to SD 1.5's base resolutions, 512x768 and 768x512.
You should be able to use kohya or DreamBooth to train a LoRA using this dataset. | 316 |
Dip0323/CLMTokenizer | 2023-10-15T20:48:34.000Z | [
"region:us"
] | Dip0323 | null | null | 0 | 0 | 2023-10-15T20:48:06 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 710073276
num_examples: 1376111
- name: valid
num_bytes: 7016052
num_examples: 13597
download_size: 314934179
dataset_size: 717089328
---
# Dataset Card for "CLMTokenizer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 559 |
JeffersonMusic/Weekndv1 | 2023-10-15T21:11:33.000Z | [
"region:us"
] | JeffersonMusic | null | null | 0 | 0 | 2023-10-15T21:05:12 | Entry not found | 15 |
ppppppps/eee | 2023-10-15T21:07:50.000Z | [
"region:us"
] | ppppppps | null | null | 0 | 0 | 2023-10-15T21:07:30 | Entry not found | 15 |
open-llm-leaderboard/details_Yhyu13__llama-30B-hf-openassitant | 2023-10-15T22:09:24.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T22:09:15 | ---
pretty_name: Evaluation run of Yhyu13/llama-30B-hf-openassitant
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Yhyu13/llama-30B-hf-openassitant](https://huggingface.co/Yhyu13/llama-30B-hf-openassitant)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Yhyu13__llama-30B-hf-openassitant\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T22:09:11.828298](https://huggingface.co/datasets/open-llm-leaderboard/details_Yhyu13__llama-30B-hf-openassitant/blob/main/results_2023-10-15T22-09-11.828298.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0014681208053691276,\n\
\ \"em_stderr\": 0.0003921042190298701,\n \"f1\": 0.06332634228187943,\n\
\ \"f1_stderr\": 0.0013742294190200051,\n \"acc\": 0.47445656434133393,\n\
\ \"acc_stderr\": 0.010516415781576863\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298701,\n\
\ \"f1\": 0.06332634228187943,\n \"f1_stderr\": 0.0013742294190200051\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14859742228961334,\n \
\ \"acc_stderr\": 0.009797503180527876\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8003157063930545,\n \"acc_stderr\": 0.011235328382625849\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Yhyu13/llama-30B-hf-openassitant
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T22_09_11.828298
path:
- '**/details_harness|drop|3_2023-10-15T22-09-11.828298.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T22-09-11.828298.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T22_09_11.828298
path:
- '**/details_harness|gsm8k|5_2023-10-15T22-09-11.828298.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T22-09-11.828298.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T22_09_11.828298
path:
- '**/details_harness|winogrande|5_2023-10-15T22-09-11.828298.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T22-09-11.828298.parquet'
- config_name: results
data_files:
- split: 2023_10_15T22_09_11.828298
path:
- results_2023-10-15T22-09-11.828298.parquet
- split: latest
path:
- results_2023-10-15T22-09-11.828298.parquet
---
# Dataset Card for Evaluation run of Yhyu13/llama-30B-hf-openassitant
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Yhyu13/llama-30B-hf-openassitant
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Yhyu13/llama-30B-hf-openassitant](https://huggingface.co/Yhyu13/llama-30B-hf-openassitant) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Yhyu13__llama-30B-hf-openassitant",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T22:09:11.828298](https://huggingface.co/datasets/open-llm-leaderboard/details_Yhyu13__llama-30B-hf-openassitant/blob/main/results_2023-10-15T22-09-11.828298.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each eval in its own dated split as well as in the "latest" split):
```python
{
"all": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298701,
"f1": 0.06332634228187943,
"f1_stderr": 0.0013742294190200051,
"acc": 0.47445656434133393,
"acc_stderr": 0.010516415781576863
},
"harness|drop|3": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298701,
"f1": 0.06332634228187943,
"f1_stderr": 0.0013742294190200051
},
"harness|gsm8k|5": {
"acc": 0.14859742228961334,
"acc_stderr": 0.009797503180527876
},
"harness|winogrande|5": {
"acc": 0.8003157063930545,
"acc_stderr": 0.011235328382625849
}
}
```
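The per-task metric keys follow a `harness|<task>|<num_fewshot>` convention (the top-level `all` key is the one exception). When post-processing many runs, it can be handy to split them back into structured fields; a small sketch:

```python
def parse_metric_key(key: str):
    """Split a 'harness|<task>|<shots>' key into (task, num_fewshot)."""
    harness, task, shots = key.split("|")
    return task, int(shots)

for key in ("harness|drop|3", "harness|gsm8k|5", "harness|winogrande|5"):
    print(parse_metric_key(key))
# prints ('drop', 3), ('gsm8k', 5), ('winogrande', 5) on separate lines
```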
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 7,305 |
marceloboemeke/manofthemoney | 2023-10-15T22:21:55.000Z | [
"region:us"
] | marceloboemeke | null | null | 0 | 0 | 2023-10-15T22:15:27 | Entry not found | 15 |
open-llm-leaderboard/details_togethercomputer__GPT-JT-Moderation-6B | 2023-10-15T22:16:23.000Z | [
"region:us"
] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T22:16:14 | ---
pretty_name: Evaluation run of togethercomputer/GPT-JT-Moderation-6B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [togethercomputer/GPT-JT-Moderation-6B](https://huggingface.co/togethercomputer/GPT-JT-Moderation-6B)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_togethercomputer__GPT-JT-Moderation-6B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T22:16:11.352297](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__GPT-JT-Moderation-6B/blob/main/results_2023-10-15T22-16-11.352297.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.004089765100671141,\n\
\ \"em_stderr\": 0.0006535802669912847,\n \"f1\": 0.041537332214765195,\n\
\ \"f1_stderr\": 0.0012446539419451222,\n \"acc\": 0.3182665708457473,\n\
\ \"acc_stderr\": 0.008157539670038592\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.004089765100671141,\n \"em_stderr\": 0.0006535802669912847,\n\
\ \"f1\": 0.041537332214765195,\n \"f1_stderr\": 0.0012446539419451222\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.009855951478392721,\n \
\ \"acc_stderr\": 0.0027210765770416634\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6266771902131019,\n \"acc_stderr\": 0.013594002763035523\n\
\ }\n}\n```"
repo_url: https://huggingface.co/togethercomputer/GPT-JT-Moderation-6B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T22_16_11.352297
path:
- '**/details_harness|drop|3_2023-10-15T22-16-11.352297.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T22-16-11.352297.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T22_16_11.352297
path:
- '**/details_harness|gsm8k|5_2023-10-15T22-16-11.352297.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T22-16-11.352297.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T22_16_11.352297
path:
- '**/details_harness|winogrande|5_2023-10-15T22-16-11.352297.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T22-16-11.352297.parquet'
- config_name: results
data_files:
- split: 2023_10_15T22_16_11.352297
path:
- results_2023-10-15T22-16-11.352297.parquet
- split: latest
path:
- results_2023-10-15T22-16-11.352297.parquet
---
# Dataset Card for Evaluation run of togethercomputer/GPT-JT-Moderation-6B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/togethercomputer/GPT-JT-Moderation-6B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [togethercomputer/GPT-JT-Moderation-6B](https://huggingface.co/togethercomputer/GPT-JT-Moderation-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_togethercomputer__GPT-JT-Moderation-6B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T22:16:11.352297](https://huggingface.co/datasets/open-llm-leaderboard/details_togethercomputer__GPT-JT-Moderation-6B/blob/main/results_2023-10-15T22-16-11.352297.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each eval in its own dated split as well as in the "latest" split):
```python
{
"all": {
"em": 0.004089765100671141,
"em_stderr": 0.0006535802669912847,
"f1": 0.041537332214765195,
"f1_stderr": 0.0012446539419451222,
"acc": 0.3182665708457473,
"acc_stderr": 0.008157539670038592
},
"harness|drop|3": {
"em": 0.004089765100671141,
"em_stderr": 0.0006535802669912847,
"f1": 0.041537332214765195,
"f1_stderr": 0.0012446539419451222
},
"harness|gsm8k|5": {
"acc": 0.009855951478392721,
"acc_stderr": 0.0027210765770416634
},
"harness|winogrande|5": {
"acc": 0.6266771902131019,
"acc_stderr": 0.013594002763035523
}
}
```
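Because every card in this series reports the same metrics, results from several runs can be compared directly. A sketch using the WinoGrande 5-shot accuracies reported in the evaluation cards on this page (values copied from the results blocks above):

```python
# WinoGrande 5-shot accuracies from the evaluation cards on this page.
winogrande_acc = {
    "meta-llama/Llama-2-13b-chat-hf": 0.5682715074980268,
    "chargoddard/llama2-22b": 0.7632202052091555,
    "ziqingyang/chinese-alpaca-2-13b": 0.7569060773480663,
    "Yhyu13/llama-30B-hf-openassitant": 0.8003157063930545,
    "togethercomputer/GPT-JT-Moderation-6B": 0.6266771902131019,
}

# Pick the model with the highest reported accuracy.
best_model = max(winogrande_acc, key=winogrande_acc.get)
print(best_model)  # → Yhyu13/llama-30B-hf-openassitant
```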
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
open-llm-leaderboard/details_Corianas__590m | 2023-10-15T22:43:40.000Z | ["region:us"] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-15T22:43:32
---
pretty_name: Evaluation run of Corianas/590m
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Corianas/590m](https://huggingface.co/Corianas/590m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Corianas__590m\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-15T22:43:28.791779](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__590m/blob/main/results_2023-10-15T22-43-28.791779.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks; you can find each task in the results and in the \"latest\" split of\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.010276845637583893,\n\
\ \"em_stderr\": 0.0010328242665282278,\n \"f1\": 0.0602705536912752,\n\
\ \"f1_stderr\": 0.0016432009705513089,\n \"acc\": 0.24228909873484075,\n\
\ \"acc_stderr\": 0.0074016381223505675\n },\n \"harness|drop|3\":\
\ {\n \"em\": 0.010276845637583893,\n \"em_stderr\": 0.0010328242665282278,\n\
\ \"f1\": 0.0602705536912752,\n \"f1_stderr\": 0.0016432009705513089\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.000758150113722517,\n \
\ \"acc_stderr\": 0.0007581501137225333\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.48382004735595896,\n \"acc_stderr\": 0.014045126130978601\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Corianas/590m
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_15T22_43_28.791779
path:
- '**/details_harness|drop|3_2023-10-15T22-43-28.791779.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-15T22-43-28.791779.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_15T22_43_28.791779
path:
- '**/details_harness|gsm8k|5_2023-10-15T22-43-28.791779.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-15T22-43-28.791779.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_15T22_43_28.791779
path:
- '**/details_harness|winogrande|5_2023-10-15T22-43-28.791779.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-15T22-43-28.791779.parquet'
- config_name: results
data_files:
- split: 2023_10_15T22_43_28.791779
path:
- results_2023-10-15T22-43-28.791779.parquet
- split: latest
path:
- results_2023-10-15T22-43-28.791779.parquet
---
# Dataset Card for Evaluation run of Corianas/590m
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Corianas/590m
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Corianas/590m](https://huggingface.co/Corianas/590m) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Corianas__590m",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T22:43:28.791779](https://huggingface.co/datasets/open-llm-leaderboard/details_Corianas__590m/blob/main/results_2023-10-15T22-43-28.791779.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.010276845637583893,
"em_stderr": 0.0010328242665282278,
"f1": 0.0602705536912752,
"f1_stderr": 0.0016432009705513089,
"acc": 0.24228909873484075,
"acc_stderr": 0.0074016381223505675
},
"harness|drop|3": {
"em": 0.010276845637583893,
"em_stderr": 0.0010328242665282278,
"f1": 0.0602705536912752,
"f1_stderr": 0.0016432009705513089
},
"harness|gsm8k|5": {
"acc": 0.000758150113722517,
"acc_stderr": 0.0007581501137225333
},
"harness|winogrande|5": {
"acc": 0.48382004735595896,
"acc_stderr": 0.014045126130978601
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
riquinho21/voz-valentino | 2023-10-15T23:39:16.000Z | ["region:us"] | riquinho21 | null | null | 0 | 0 | 2023-10-15T23:38:14 | Entry not found
sproos/arxiv_embeddings_480k | 2023-10-16T00:04:32.000Z | ["region:us"] | sproos | null | null | 0 | 0 | 2023-10-15T23:54:40
---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: abstract
dtype: string
- name: embedding
sequence: float64
splits:
- name: train
num_bytes: 6351419194
num_examples: 481271
download_size: 6014930006
dataset_size: 6351419194
---
# Dataset Card for "arxiv_embeddings_480k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
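For context on the schema above: each row carries an `embedding` float sequence for its abstract, and a typical use is nearest-neighbour search over abstracts. The sketch below is illustrative only; the vectors are synthetic stand-ins, not rows loaded from the dataset:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length float vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Synthetic stand-ins for rows of the "embedding" column (hypothetical ids).
query = [1.0, 0.0, 1.0]
corpus = {"paper-a": [1.0, 0.0, 1.0], "paper-b": [0.0, 1.0, 0.0]}

# Nearest neighbour of the query under cosine similarity.
best = max(corpus, key=lambda k: cosine_similarity(query, corpus[k]))
print(best)  # "paper-a": identical direction to the query, similarity 1.0
```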
twodgirl/haremlit | 2023-10-16T00:18:31.000Z | ["language:en", "conversational", "adventure", "fantasy", "fiction", "novel", "not-for-all-audiences", "region:us"] | twodgirl | null | null | 0 | 0 | 2023-10-16T00:00:31
---
language:
- en
tags:
- conversational
- adventure
- fantasy
- fiction
- novel
- not-for-all-audiences
---
All conversations were generated by Mistral 7B. The themes are adventure, haremlit, and men's adventure.
intilabs/runasimi | 2023-10-16T00:27:14.000Z | ["region:us"] | intilabs | null | null | 1 | 0 | 2023-10-16T00:27:14 | Entry not found
open-llm-leaderboard/details_aiplanet__effi-7b | 2023-10-16T00:39:06.000Z | ["region:us"] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-16T00:38:58
---
pretty_name: Evaluation run of aiplanet/effi-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [aiplanet/effi-7b](https://huggingface.co/aiplanet/effi-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_aiplanet__effi-7b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-16T00:38:54.872293](https://huggingface.co/datasets/open-llm-leaderboard/details_aiplanet__effi-7b/blob/main/results_2023-10-16T00-38-54.872293.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks; you can find each task in the results and in the \"latest\" split of\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0014681208053691276,\n\
\ \"em_stderr\": 0.0003921042190298541,\n \"f1\": 0.06146078020134238,\n\
\ \"f1_stderr\": 0.0013862861484435665,\n \"acc\": 0.37858887140948305,\n\
\ \"acc_stderr\": 0.008690432281689055\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298541,\n\
\ \"f1\": 0.06146078020134238,\n \"f1_stderr\": 0.0013862861484435665\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.03184230477634572,\n \
\ \"acc_stderr\": 0.004836348558260928\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7253354380426204,\n \"acc_stderr\": 0.012544516005117185\n\
\ }\n}\n```"
repo_url: https://huggingface.co/aiplanet/effi-7b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_16T00_38_54.872293
path:
- '**/details_harness|drop|3_2023-10-16T00-38-54.872293.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-16T00-38-54.872293.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_16T00_38_54.872293
path:
- '**/details_harness|gsm8k|5_2023-10-16T00-38-54.872293.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-16T00-38-54.872293.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_16T00_38_54.872293
path:
- '**/details_harness|winogrande|5_2023-10-16T00-38-54.872293.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-16T00-38-54.872293.parquet'
- config_name: results
data_files:
- split: 2023_10_16T00_38_54.872293
path:
- results_2023-10-16T00-38-54.872293.parquet
- split: latest
path:
- results_2023-10-16T00-38-54.872293.parquet
---
# Dataset Card for Evaluation run of aiplanet/effi-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/aiplanet/effi-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [aiplanet/effi-7b](https://huggingface.co/aiplanet/effi-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_aiplanet__effi-7b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-16T00:38:54.872293](https://huggingface.co/datasets/open-llm-leaderboard/details_aiplanet__effi-7b/blob/main/results_2023-10-16T00-38-54.872293.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298541,
"f1": 0.06146078020134238,
"f1_stderr": 0.0013862861484435665,
"acc": 0.37858887140948305,
"acc_stderr": 0.008690432281689055
},
"harness|drop|3": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298541,
"f1": 0.06146078020134238,
"f1_stderr": 0.0013862861484435665
},
"harness|gsm8k|5": {
"acc": 0.03184230477634572,
"acc_stderr": 0.004836348558260928
},
"harness|winogrande|5": {
"acc": 0.7253354380426204,
"acc_stderr": 0.012544516005117185
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
open-llm-leaderboard/details_psyche__kollama2-7b-v2 | 2023-10-16T01:12:57.000Z | ["region:us"] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-16T01:12:48
---
pretty_name: Evaluation run of psyche/kollama2-7b-v2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [psyche/kollama2-7b-v2](https://huggingface.co/psyche/kollama2-7b-v2) on the [Open\
\ LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_psyche__kollama2-7b-v2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-16T01:12:44.878519](https://huggingface.co/datasets/open-llm-leaderboard/details_psyche__kollama2-7b-v2/blob/main/results_2023-10-16T01-12-44.878519.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks; you can find each task in the results and in the \"latest\" split of\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.01740771812080537,\n\
\ \"em_stderr\": 0.0013393597649753845,\n \"f1\": 0.10400272651006709,\n\
\ \"f1_stderr\": 0.0021202520572007394,\n \"acc\": 0.41065886057278334,\n\
\ \"acc_stderr\": 0.009434613134114641\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.01740771812080537,\n \"em_stderr\": 0.0013393597649753845,\n\
\ \"f1\": 0.10400272651006709,\n \"f1_stderr\": 0.0021202520572007394\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.06520090978013647,\n \
\ \"acc_stderr\": 0.006800302989321092\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7561168113654302,\n \"acc_stderr\": 0.012068923278908189\n\
\ }\n}\n```"
repo_url: https://huggingface.co/psyche/kollama2-7b-v2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_16T01_12_44.878519
path:
- '**/details_harness|drop|3_2023-10-16T01-12-44.878519.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-16T01-12-44.878519.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_16T01_12_44.878519
path:
- '**/details_harness|gsm8k|5_2023-10-16T01-12-44.878519.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-16T01-12-44.878519.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_16T01_12_44.878519
path:
- '**/details_harness|winogrande|5_2023-10-16T01-12-44.878519.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-16T01-12-44.878519.parquet'
- config_name: results
data_files:
- split: 2023_10_16T01_12_44.878519
path:
- results_2023-10-16T01-12-44.878519.parquet
- split: latest
path:
- results_2023-10-16T01-12-44.878519.parquet
---
# Dataset Card for Evaluation run of psyche/kollama2-7b-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/psyche/kollama2-7b-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [psyche/kollama2-7b-v2](https://huggingface.co/psyche/kollama2-7b-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_psyche__kollama2-7b-v2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-16T01:12:44.878519](https://huggingface.co/datasets/open-llm-leaderboard/details_psyche__kollama2-7b-v2/blob/main/results_2023-10-16T01-12-44.878519.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.01740771812080537,
"em_stderr": 0.0013393597649753845,
"f1": 0.10400272651006709,
"f1_stderr": 0.0021202520572007394,
"acc": 0.41065886057278334,
"acc_stderr": 0.009434613134114641
},
"harness|drop|3": {
"em": 0.01740771812080537,
"em_stderr": 0.0013393597649753845,
"f1": 0.10400272651006709,
"f1_stderr": 0.0021202520572007394
},
"harness|gsm8k|5": {
"acc": 0.06520090978013647,
"acc_stderr": 0.006800302989321092
},
"harness|winogrande|5": {
"acc": 0.7561168113654302,
"acc_stderr": 0.012068923278908189
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
umm-maybe/ai_images | 2023-10-16T02:00:48.000Z | ["region:us"] | umm-maybe | null | null | 0 | 0 | 2023-10-16T02:00:19
---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': train_dataset
- name: text
dtype: string
splits:
- name: train
num_bytes: 540439882.0
num_examples: 304
download_size: 540208895
dataset_size: 540439882.0
---
# Dataset Card for "ai_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
open-llm-leaderboard/details_TaylorAI__Flash-Llama-13B | 2023-10-16T02:07:33.000Z | ["region:us"] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-16T02:07:25
---
pretty_name: Evaluation run of TaylorAI/Flash-Llama-13B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TaylorAI/Flash-Llama-13B](https://huggingface.co/TaylorAI/Flash-Llama-13B) on\
\ the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TaylorAI__Flash-Llama-13B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-16T02:07:21.607373](https://huggingface.co/datasets/open-llm-leaderboard/details_TaylorAI__Flash-Llama-13B/blob/main/results_2023-10-16T02-07-21.607373.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks; you can find each task in the results and in the \"latest\" split of\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0014681208053691276,\n\
\ \"em_stderr\": 0.00039210421902982666,\n \"f1\": 0.0607822986577181,\n\
\ \"f1_stderr\": 0.0013583957676382913,\n \"acc\": 0.43739636770101,\n\
\ \"acc_stderr\": 0.010228023491905505\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.00039210421902982666,\n\
\ \"f1\": 0.0607822986577181,\n \"f1_stderr\": 0.0013583957676382913\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10841546626231995,\n \
\ \"acc_stderr\": 0.008563852506627487\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7663772691397001,\n \"acc_stderr\": 0.011892194477183524\n\
\ }\n}\n```"
repo_url: https://huggingface.co/TaylorAI/Flash-Llama-13B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_16T02_07_21.607373
path:
- '**/details_harness|drop|3_2023-10-16T02-07-21.607373.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-16T02-07-21.607373.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_16T02_07_21.607373
path:
- '**/details_harness|gsm8k|5_2023-10-16T02-07-21.607373.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-16T02-07-21.607373.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_16T02_07_21.607373
path:
- '**/details_harness|winogrande|5_2023-10-16T02-07-21.607373.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-16T02-07-21.607373.parquet'
- config_name: results
data_files:
- split: 2023_10_16T02_07_21.607373
path:
- results_2023-10-16T02-07-21.607373.parquet
- split: latest
path:
- results_2023-10-16T02-07-21.607373.parquet
---
# Dataset Card for Evaluation run of TaylorAI/Flash-Llama-13B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TaylorAI/Flash-Llama-13B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [TaylorAI/Flash-Llama-13B](https://huggingface.co/TaylorAI/Flash-Llama-13B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TaylorAI__Flash-Llama-13B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-16T02:07:21.607373](https://huggingface.co/datasets/open-llm-leaderboard/details_TaylorAI__Flash-Llama-13B/blob/main/results_2023-10-16T02-07-21.607373.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each task in the results and in the "latest" split of each eval):
```python
{
"all": {
"em": 0.0014681208053691276,
"em_stderr": 0.00039210421902982666,
"f1": 0.0607822986577181,
"f1_stderr": 0.0013583957676382913,
"acc": 0.43739636770101,
"acc_stderr": 0.010228023491905505
},
"harness|drop|3": {
"em": 0.0014681208053691276,
"em_stderr": 0.00039210421902982666,
"f1": 0.0607822986577181,
"f1_stderr": 0.0013583957676382913
},
"harness|gsm8k|5": {
"acc": 0.10841546626231995,
"acc_stderr": 0.008563852506627487
},
"harness|winogrande|5": {
"acc": 0.7663772691397001,
"acc_stderr": 0.011892194477183524
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
riquinho21/voz-knightley | 2023-10-16T02:24:01.000Z | ["region:us"] | riquinho21 | null | null | 0 | 0 | 2023-10-16T02:23:18 | Entry not found | 15
ceia-nlp/truthful_qa_portuguese | 2023-10-16T03:55:30.000Z | ["region:us"] | ceia-nlp | null | null | 0 | 0 | 2023-10-16T03:55:11 | Entry not found | 15
MohamedAzizBhouri/MF_RPN_convection_super_param_CAM5_SPCAM5 | 2023-10-16T22:15:37.000Z | ["license:mit", "region:us"] | MohamedAzizBhouri | null | null | 0 | 0 | 2023-10-16T04:35:11 |
---
license: mit
---
## Probabilistic Multi-fidelity climate model parameterization for better generalization and extrapolation
Code and data accompanying the manuscript titled "Multi-fidelity climate model parameterization for better generalization and extrapolation", authored by Mohamed Aziz Bhouri, Liran Peng, Michael S Pritchard and Pierre Gentine.
## Abstract
Machine-learning-based parameterizations (i.e. representation of sub-grid processes) of global climate models or turbulent simulations have recently been proposed as a powerful alternative to physical, but empirical, representations, offering a lower computational cost and higher accuracy. Yet, those approaches still suffer from a lack of generalization and extrapolation beyond the training data, which is however critical to projecting climate change or unobserved regimes of turbulence. Here we show that a multi-fidelity approach, which integrates datasets of different accuracy and abundance, can provide the best of both worlds: the capacity to extrapolate to warmer climates leveraging abundant low-fidelity data and a higher accuracy using resolving high-fidelity data. In an application to climate modeling, the multi-fidelity framework yields more accurate climate projections without requiring major increase in computational resources, while providing trustworthy uncertainty quantification across a wide range of scenarios. Our approach paves the way for the use of machine-learning based methods that can optimally leverage historical observations or high-fidelity simulations and extrapolate to unseen regimes such as climate change.
## Citation
```bibtex
@article{Bhouri2023MF_RPN_cv_param,
  title   = {Multi-fidelity climate model parameterization for better generalization and extrapolation},
  author  = {Bhouri, Mohamed Aziz and Peng, Liran and Pritchard, Michael S. and Gentine, Pierre},
  journal = {arXiv preprint arXiv:2309.10231},
  doi     = {https://doi.org/10.48550/arXiv.2309.10231},
  year    = {2023},
}
```
- The code was tested with jax 0.3.13, jaxlib 0.3.10, numpy 1.20.1, and scipy 1.7.2.
- All script names intentionally start with a number so that the order in which they must be run is easy to follow:
#####################################################################################################
1. Files "0_data_process_CAM5.py" and "0_data_process_SPCAM5.py" process the raw data generated by CESM2.1.3 CAM5 and SPCAM5 models. In particular, chosen variables given the problem of interest are kept and a temporal subsampling of factor 2 is implemented. In addition, data is concatenated over several days in order to reduce the number of final files. The number of days considered for concatenation is determined by how much memory is available for the hardware on which the scripts are run. "0_data_process_CAM5.py" is used to process CAM5 +4K and +8K data and the resulting files are saved under folders "data_CAM5_4K" and "data_CAM5_8K" respectively. "0_data_process_SPCAM5.py" is used to process SPCAM5 historical and +4K data and the resulting files are saved under folders "data_SPCAM5_hist" and "data_SPCAM5_4K" respectively.
#####################################################################################################
2. File "1_create_train_test.py" creates train and test datasets with only the final relevant variables for the convection parameterization (see manuscript). Datasets are concatenated along the whole time period. Scripts in step 1 are needed since these codes are run on all GCM outputs which are relatively expensive in terms of memory. Hence a concatenation over several months by directly loading all GCM outputs is not doable given our available hardware. Therefore we needed this two-step approach for data concatenation. "1_create_train_test.py" creates the high-fidelity training (SPCAM5 historical run for 3 month) and testing (SPCAM5 +4K for a year) datasets. It also creates the two candidate low-fidelity training datasets (CAM5 +4K and +8K for a year).
#####################################################################################################
3. File "2_candle_plots_data_distr.py" shows the data distribution for the 5 pressure levels 137, 259, 494, 761 and 958 hPa, for the heat tendency and specific humidity, and for the highest pressure level (lowest altitude) for the moisture tendency. It creates the candle plots corresponding to these data distributions and available in the manuscript ("candle_plots_5_pr_lvls_heat_tend_and_spec_hum.png" and "candle_plots_1st_lvl_SS_moist_tend.png").
#####################################################################################################
4. File "2_norm.py" computes and saves the mean and standard deviation for parameterization inputs and outputs based on low-fidelity training data (CAM5 +8K simulation of a year) and high-fidelity training data (SPCAM historical run for a period of three months). The results are saved in folder "norm".
#####################################################################################################
5. Files" "3_train_RPN_MF.py" and "3_train_RPN_SF.py" train the multi- and single-fidelity models and save their parameters in folders "MF_param" and "SF_param" respectively. The number of models to be trained in parallel by running any of the scripts once is fixed by the variable "ensemble_size". Given the available hardware, we had to use "ensemble_size=1" since we could only access singular nodes and we varied "n_run_param" from 0 to 127. However, we were able to access multiple single nodes independently and hence the training is conducted in parallel ultimately. "3_train_RPN_SF.py" is also used to train the deterministic model by making the variable "N_rpn_SF" equal to "N_tot_SF" in order to use all training data and by changing the subfolder within "SF_param" where the parameters are saved.
#####################################################################################################
6. File "4_concat_param.py" concatenates the parameters so that it corresponds to parameters that would be saved if 128 NNs are trained with a singular run of the scripts detailed in point 5. The size of resulting individual files can go up to 134 mb which prevents uploading them into github directly but we wanted to show how a concise parameters representation for RPN is doable. Subsequent scripts use the parameters that were saved separately for each individual RPN member (resulting from point 5 above).
#####################################################################################################
7. File "4_pred_RPN_det.py" computes and saves the deterministic prediction for the test dataset. Files "4_pred_RPN_SF.py", "4_pred_RPN_LF.py" and "4_pred_RPN_MF.py" compute and save predictions for the test dataset obtained for each individual member of SF-RPN, LF-RPN and MF-RPN. We had to perform this step since our hardware did not have enough virtual memory to make the ensemble predictions for 128 million test datapoints. If memory allows, the ensemble predictions can be performed at once by changing the variable "ensemble_size" to the actual ensemble size and then compute related statistics (mean, standard deviation, higher-order moments, etc).
#####################################################################################################
8. Files "5_mean_std_RPN_SF.py", "5_mean_std_RPN_LF.py" and "5_mean_std_RPN_MF.py" compute and save the mean and standard deviation of the ensemble predictions for the test dataset computed and saved in point 7 above. As mentioned above, if memory allows the points 7 and 8 are merged into one step.
#####################################################################################################
9. File "6_reshape_pred_RPN.py" reshapes and saves the deterministic NN prediction for the test dataset, and the mean and standard deviation of the ensemble predictions for the test dataset for SF-RPN, LF-RPN and MF-RPN models. It uses the saved prediction from step 8 and from running the script "4_pred_RPN_det.py" in step 7. File "6_reshape_pred_RPN.py" also reshapes and saves the actual test dataset output. The reshaped tensors are in shape [dim_y x Nt x lat xlon], where dim_y=48 is the output dimension, Nt the total number of time steps for the test dataset, lat=96 the number of latitude points and lon = 144 the number of longitude points. These results are saved in folders "data_SPCAM5_4K", "MF_param" and "SF_param".
#####################################################################################################
10. File "7_global_errors_temporal_errors.py" computes and saves global (if is_glob_err = 1)and temporal errors (if is_temp_MAE = 1 and/oris_temp_r2 = 1) for all models (det NN, SF-RPN, MF-RPN and LF-RPN). Global errors are saved in folder "glob_errors". Temporal errors are plotted and saved in folder "temp_plots". File "7_global_errors_temporal_errors.py" uses the results obtained in point 9.
#####################################################################################################
11. File "7_global_crps.py" computes and saves the CRPS scores for SF-RPN, MF-RPN and LF-RPN. Individual predictions within the ensemble for each of the models need to be reshaped by setting "is_reshape_single_pred = 1", then the corresponding CRPS score is computed and saved in folder "glob_errors' by setting "is_reshape_single_pred = 0".
#####################################################################################################
12. File "7_long_lat_errors.py" computes and saves the longitude-latitude variations of MAE and R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) in folders "MF_results" and "SF_results" using the results obtained in point 9.
#####################################################################################################
13. File "7_pressure_lat_errors" computes and saves the pressure(altitude)-latitude variations of MAE and R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) in folders "MF_results" and "SF_results" using the results obtained in point 9.
#####################################################################################################
14. File "8_plot_global_errors.py" creates the plots for the global errors (MAE, R2 and CRPS) for all models (det NN, SF-RPN, MF-RPN and LF-RPN) using the results obtained in points 10 and 11. The plots are saved in folder "glob_errors".
#####################################################################################################
15. File "8_long_lat_plots.py" creates and saves the plots for the longitude-latitude variations of MAE and R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) in folder "long_lat_plots" if variable "is_uncert = 0". These plots are based on the results obtained in point 12. File "8_long_lat_plots.py" also creates the plots for the longitude-latitude variations of the uncertainty for SF-RPN, MF-RPN and LF-RPN models if variable "is_uncert = 1". These plots are saved in folder "long_lat_uncert_plots" and are based on results obtained in point 9.
#####################################################################################################
16. File "8_pressure_lat_plots" creates and saves the plots for the pressure(altitude)-latitude variations of R2 for all models (det NN, SF-RPN, MF-RPN and LF-RPN) under the names "r2_press_lat_heat.png" and "r2_press_lat_moist.png" for heat and moisture tendencies respectively. These plots are based on the results obtained in point 13.
#####################################################################################################
17. File "8_uncertainty_density_plot" creates the plots for the density of uncertainty as a function of error for SF-RPN, MF-RPN and LF-RPN models. These plots are saved in folder "uncertainty_density_plots" and are based on results obtained in point 9.
#####################################################################################################
18. File "9_uncertainty_video.py" creates and saves the videos of complete spatio-temporal evolution of MAEs and returned uncertainties for the heat and moisture tendencies by different models (MF-RPN, LF-RPN adn SF-HF-RPN) at vertical levels 259, 494 and 761 hPa. The videos are saved in folders "videos". File "9_uncertainty_video.py" uses the results obtained in point 9.
#####################################################################################################
19. File "9_uncertainty_video_daily.py" creates and saves the videos of spatio-temporal evolution of MAEs based on daily-averaged predictions and daily-averaged returned uncertainties for the heat and moisture tendencies by different models (MF-RPN, LF-RPN adn SF-HF-RPN) at vertical levels 259, 494 and 761 hPa. The videos are saved in folders "videos". File "9_uncertainty_video_daily.py" uses the results obtained in point 9.
hugsom/ecopromptdetails | 2023-10-16T04:45:33.000Z | ["region:us"] | hugsom | null | null | 0 | 0 | 2023-10-16T04:44:03 | Entry not found | 15
MichiganNLP/TID-8 | 2023-10-30T18:18:31.000Z | ["task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:sentiment-analysis", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:other", "multilinguality:monolingual", "size_categories:1K<n<200K", "source_datasets:extended|other", ...] | MichiganNLP | null | null | 0 | 0 | 2023-10-16T04:50:43 |
---
annotations_creators:
- crowdsourced
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<200K
source_datasets:
- extended|other
task_categories:
- text-classification
task_ids:
- natural-language-inference
- sentiment-analysis
- hate-speech-detection
paperswithcode_id: placeholder
pretty_name: TID-8
tags:
- tid8
- annotation disagreement
dataset_info:
- config_name: commitmentbank-ann
features:
- name: HitID
dtype: string
- name: Verb
dtype: string
- name: Context
dtype: string
- name: Prompt
dtype: string
- name: Target
dtype: string
- name: ModalType
dtype: string
- name: Embedding
dtype: string
- name: MatTense
dtype: string
- name: weak_labels
sequence: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '-3'
'5': '-1'
'6': '-2'
splits:
- name: train
num_bytes: 7153364
num_examples: 7816
- name: test
num_bytes: 3353745
num_examples: 3729
download_size: 3278616
dataset_size: 10507109
- config_name: commitmentbank-atr
features:
- name: HitID
dtype: string
- name: Verb
dtype: string
- name: Context
dtype: string
- name: Prompt
dtype: string
- name: Target
dtype: string
- name: ModalType
dtype: string
- name: Embedding
dtype: string
- name: MatTense
dtype: string
- name: weak_labels
sequence: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '-3'
'5': '-1'
'6': '-2'
splits:
- name: train
num_bytes: 6636145
num_examples: 7274
- name: test
num_bytes: 3870964
num_examples: 4271
download_size: 3301698
dataset_size: 10507109
- config_name: friends_qia-ann
features:
- name: Season
dtype: string
- name: Episode
dtype: string
- name: Category
dtype: string
- name: Q_person
dtype: string
- name: A_person
dtype: string
- name: Q_original
dtype: string
- name: Q_modified
dtype: string
- name: A_modified
dtype: string
- name: Annotation_1
dtype: string
- name: Annotation_2
dtype: string
- name: Annotation_3
dtype: string
- name: Goldstandard
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
splits:
- name: validation
num_bytes: 687135
num_examples: 1872
- name: train
num_bytes: 4870170
num_examples: 13113
- name: test
num_bytes: 693033
num_examples: 1872
download_size: 1456765
dataset_size: 6250338
- config_name: friends_qia-atr
features:
- name: Season
dtype: string
- name: Episode
dtype: string
- name: Category
dtype: string
- name: Q_person
dtype: string
- name: A_person
dtype: string
- name: Q_original
dtype: string
- name: Q_modified
dtype: string
- name: A_modified
dtype: string
- name: Annotation_1
dtype: string
- name: Annotation_2
dtype: string
- name: Annotation_3
dtype: string
- name: Goldstandard
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
splits:
- name: train
num_bytes: 4166892
num_examples: 11238
- name: test
num_bytes: 2083446
num_examples: 5619
download_size: 3445839
dataset_size: 6250338
- config_name: goemotions-ann
features:
- name: author
dtype: string
- name: subreddit
dtype: string
- name: link_id
dtype: string
- name: parent_id
dtype: string
- name: created_utc
dtype: string
- name: rater_id
dtype: string
- name: example_very_unclear
dtype: string
- name: admiration
dtype: string
- name: amusement
dtype: string
- name: anger
dtype: string
- name: annoyance
dtype: string
- name: approval
dtype: string
- name: caring
dtype: string
- name: confusion
dtype: string
- name: curiosity
dtype: string
- name: desire
dtype: string
- name: disappointment
dtype: string
- name: disapproval
dtype: string
- name: disgust
dtype: string
- name: embarrassment
dtype: string
- name: excitement
dtype: string
- name: fear
dtype: string
- name: gratitude
dtype: string
- name: grief
dtype: string
- name: joy
dtype: string
- name: love
dtype: string
- name: nervousness
dtype: string
- name: optimism
dtype: string
- name: pride
dtype: string
- name: realization
dtype: string
- name: relief
dtype: string
- name: remorse
dtype: string
- name: sadness
dtype: string
- name: surprise
dtype: string
- name: neutral
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': positive
'1': ambiguous
'2': negative
'3': neutral
splits:
- name: train
num_bytes: 46277072
num_examples: 135504
- name: test
num_bytes: 19831033
num_examples: 58129
download_size: 24217871
dataset_size: 66108105
- config_name: goemotions-atr
features:
- name: author
dtype: string
- name: subreddit
dtype: string
- name: link_id
dtype: string
- name: parent_id
dtype: string
- name: created_utc
dtype: string
- name: rater_id
dtype: string
- name: example_very_unclear
dtype: string
- name: admiration
dtype: string
- name: amusement
dtype: string
- name: anger
dtype: string
- name: annoyance
dtype: string
- name: approval
dtype: string
- name: caring
dtype: string
- name: confusion
dtype: string
- name: curiosity
dtype: string
- name: desire
dtype: string
- name: disappointment
dtype: string
- name: disapproval
dtype: string
- name: disgust
dtype: string
- name: embarrassment
dtype: string
- name: excitement
dtype: string
- name: fear
dtype: string
- name: gratitude
dtype: string
- name: grief
dtype: string
- name: joy
dtype: string
- name: love
dtype: string
- name: nervousness
dtype: string
- name: optimism
dtype: string
- name: pride
dtype: string
- name: realization
dtype: string
- name: relief
dtype: string
- name: remorse
dtype: string
- name: sadness
dtype: string
- name: surprise
dtype: string
- name: neutral
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': positive
'1': ambiguous
'2': negative
'3': neutral
splits:
- name: train
num_bytes: 44856233
num_examples: 131395
- name: test
num_bytes: 21251872
num_examples: 62238
download_size: 24228953
dataset_size: 66108105
- config_name: hs_brexit-ann
features:
- name: other annotations
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': hate_speech
'1': not_hate_speech
splits:
- name: train
num_bytes: 1039008
num_examples: 4704
- name: test
num_bytes: 222026
num_examples: 1008
download_size: 144072
dataset_size: 1261034
- config_name: hs_brexit-atr
features:
- name: other annotations
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': hate_speech
'1': not_hate_speech
splits:
- name: train
num_bytes: 986132
num_examples: 4480
- name: test
num_bytes: 495738
num_examples: 2240
download_size: 604516
dataset_size: 1481870
- config_name: humor-ann
features:
- name: text_a
dtype: string
- name: text_b
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': B
'1': X
'2': A
splits:
- name: train
num_bytes: 28524839
num_examples: 98735
- name: test
num_bytes: 12220621
num_examples: 42315
download_size: 24035118
dataset_size: 40745460
- config_name: humor-atr
features:
- name: text_a
dtype: string
- name: text_b
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': B
'1': X
'2': A
splits:
- name: train
num_bytes: 28161248
num_examples: 97410
- name: test
num_bytes: 12584212
num_examples: 43640
download_size: 24099282
dataset_size: 40745460
- config_name: md-agreement-ann
features:
- name: task
dtype: string
- name: original_id
dtype: string
- name: domain
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': offensive_speech
'1': not_offensive_speech
splits:
- name: train
num_bytes: 7794988
num_examples: 32960
- name: test
num_bytes: 2498445
num_examples: 10553
download_size: 1606671
dataset_size: 10293433
- config_name: md-agreement-atr
features:
- name: task
dtype: string
- name: original_id
dtype: string
- name: domain
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': offensive_speech
'1': not_offensive_speech
splits:
- name: train
num_bytes: 8777085
num_examples: 37077
- name: test
num_bytes: 3957021
num_examples: 16688
download_size: 5766114
dataset_size: 12734106
- config_name: pejorative-ann
features:
- name: pejor_word
dtype: string
- name: word_definition
dtype: string
- name: annotator-1
dtype: string
- name: annotator-2
dtype: string
- name: annotator-3
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': pejorative
'1': non-pejorative
'2': undecided
splits:
- name: train
num_bytes: 350734
num_examples: 1535
- name: test
num_bytes: 150894
num_examples: 659
download_size: 168346
dataset_size: 501628
- config_name: pejorative-atr
features:
- name: pejor_word
dtype: string
- name: word_definition
dtype: string
- name: annotator-1
dtype: string
- name: annotator-2
dtype: string
- name: annotator-3
dtype: string
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': pejorative
'1': non-pejorative
'2': undecided
splits:
- name: train
num_bytes: 254138
num_examples: 1112
- name: test
num_bytes: 247490
num_examples: 1082
download_size: 188229
dataset_size: 501628
- config_name: sentiment-ann
features:
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': Neutral
'1': Somewhat positive
'2': Very negative
'3': Somewhat negative
'4': Very positive
splits:
- name: train
num_bytes: 9350333
num_examples: 59235
- name: test
num_bytes: 235013
num_examples: 1419
download_size: 4906597
dataset_size: 9585346
- config_name: sentiment-atr
features:
- name: question
dtype: string
- name: uid
dtype: string
- name: id
dtype: int32
- name: annotator_id
dtype: string
- name: answer
dtype: string
- name: answer_label
dtype:
class_label:
names:
'0': Neutral
'1': Somewhat positive
'2': Very negative
'3': Somewhat negative
'4': Very positive
splits:
- name: train
num_bytes: 6712084
num_examples: 42439
- name: test
num_bytes: 2873262
num_examples: 18215
download_size: 4762021
dataset_size: 9585346
configs:
- config_name: commitmentbank-ann
data_files:
- split: train
path: commitmentbank-ann/train-*
- split: test
path: commitmentbank-ann/test-*
- config_name: commitmentbank-atr
data_files:
- split: train
path: commitmentbank-atr/train-*
- split: test
path: commitmentbank-atr/test-*
- config_name: friends_qia-ann
data_files:
- split: validation
path: friends_qia-ann/validation-*
- split: train
path: friends_qia-ann/train-*
- split: test
path: friends_qia-ann/test-*
- config_name: friends_qia-atr
data_files:
- split: train
path: friends_qia-atr/train-*
- split: test
path: friends_qia-atr/test-*
- config_name: goemotions-ann
data_files:
- split: train
path: goemotions-ann/train-*
- split: test
path: goemotions-ann/test-*
- config_name: goemotions-atr
data_files:
- split: train
path: goemotions-atr/train-*
- split: test
path: goemotions-atr/test-*
- config_name: hs_brexit-ann
data_files:
- split: train
path: hs_brexit-ann/train-*
- split: test
path: hs_brexit-ann/test-*
- config_name: hs_brexit-atr
data_files:
- split: train
path: hs_brexit-atr/train-*
- split: test
path: hs_brexit-atr/test-*
- config_name: humor-ann
data_files:
- split: train
path: humor-ann/train-*
- split: test
path: humor-ann/test-*
- config_name: humor-atr
data_files:
- split: train
path: humor-atr/train-*
- split: test
path: humor-atr/test-*
- config_name: md-agreement-ann
data_files:
- split: train
path: md-agreement-ann/train-*
- split: test
path: md-agreement-ann/test-*
- config_name: md-agreement-atr
data_files:
- split: train
path: md-agreement-atr/train-*
- split: test
path: md-agreement-atr/test-*
- config_name: pejorative-ann
data_files:
- split: train
path: pejorative-ann/train-*
- split: test
path: pejorative-ann/test-*
- config_name: pejorative-atr
data_files:
- split: train
path: pejorative-atr/train-*
- split: test
path: pejorative-atr/test-*
- config_name: sentiment-ann
data_files:
- split: train
path: sentiment-ann/train-*
- split: test
path: sentiment-ann/test-*
- config_name: sentiment-atr
data_files:
- split: train
path: sentiment-atr/train-*
- split: test
path: sentiment-atr/test-*
---
# Dataset Card for "TID-8"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** placeholder
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
TID-8 is a new aggregated benchmark focused on the task of letting models learn from data that has inherent disagreement proposed in [link](https://arxiv.org/pdf/2305.14663.pdf) at Findings of EMNLP 2023.
In the paper, we focus on the inherent disagreement and let the model directly learn from data that has such disagreement.
We provide two split for TID-8.
*Annotation Split*
We split the annotations for each annotator into train and test set.
In other words, the same set of annotators appear in both train, (val),
and test sets.
For datasets that have splits originally, we follow the original split and remove
datapoints in test sets that are annotated by an annotator who is not in
the training set.
For datasets that do not have splits originally, we split the data into
train and test set for convenience, you may further split the train set
into a train and val set.
*Annotator Split*
We split the annotators into train and test sets.
In other words, disjoint sets of annotators appear in the train and test sets.
We split the data into train and test sets for convenience; you may consider
further splitting the train set into train and validation sets for performance validation.
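The difference between the two splits can be illustrated with a toy sketch (this is illustrative Python, not the official TID-8 splitting code; the annotator IDs and labels below are made up):

```python
import random

# Toy annotations: (annotator_id, example_id, label). Purely illustrative.
annotations = [(a, i, (a + i) % 2) for a in range(5) for i in range(20)]

def annotation_split(rows, test_frac=0.2, seed=0):
    """Annotation split: every annotator appears on both sides; each
    annotator's own annotations are divided between train and test."""
    rng = random.Random(seed)
    by_annotator = {}
    for row in rows:
        by_annotator.setdefault(row[0], []).append(row)
    train, test = [], []
    for rows_a in by_annotator.values():
        rng.shuffle(rows_a)
        cut = int(len(rows_a) * (1 - test_frac))
        train += rows_a[:cut]
        test += rows_a[cut:]
    return train, test

def annotator_split(rows, test_frac=0.2, seed=0):
    """Annotator split: train and test use disjoint sets of annotators."""
    rng = random.Random(seed)
    annotators = sorted({row[0] for row in rows})
    rng.shuffle(annotators)
    held_out = set(annotators[int(len(annotators) * (1 - test_frac)):])
    train = [r for r in rows if r[0] not in held_out]
    test = [r for r in rows if r[0] in held_out]
    return train, test
```

The same idea applies to carving a validation set out of the train set: apply the corresponding split function to the train portion again.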
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
### Data Fields
The data fields are the same among all splits.
See the split descriptions above.
### Data Splits
See the split descriptions above.
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{deng2023tid8,
title={You Are What You Annotate: Towards Better Models through Annotator Representations},
author={Deng, Naihao and Liu, Siyang and Zhang, Frederick Xinliang and Wu, Winston and Wang, Lu and Mihalcea, Rada},
booktitle={Findings of EMNLP 2023},
year={2023}
}
```
Note that each TID-8 dataset has its own citation; please see the source for the correct citation of each contained dataset.
open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b | 2023-10-16T05:28:21.000Z | ["region:us"] | open-llm-leaderboard | null | null | 0 | 0 | 2023-10-16T05:28:13
---
pretty_name: Evaluation run of The-Face-Of-Goonery/Huginn-v3-13b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [The-Face-Of-Goonery/Huginn-v3-13b](https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the agregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-16T05:28:09.073903](https://huggingface.co/datasets/open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b/blob/main/results_2023-10-16T05-28-09.073903.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.06354865771812081,\n\
\ \"em_stderr\": 0.002498247436471722,\n \"f1\": 0.14479865771812028,\n\
\ \"f1_stderr\": 0.002890194024794147,\n \"acc\": 0.3913161593683,\n\
\ \"acc_stderr\": 0.009083920481175163\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.06354865771812081,\n \"em_stderr\": 0.002498247436471722,\n\
\ \"f1\": 0.14479865771812028,\n \"f1_stderr\": 0.002890194024794147\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04624715693707354,\n \
\ \"acc_stderr\": 0.005784991662691864\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7363851617995264,\n \"acc_stderr\": 0.01238284929965846\n\
\ }\n}\n```"
repo_url: https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_16T05_28_09.073903
path:
- '**/details_harness|drop|3_2023-10-16T05-28-09.073903.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-16T05-28-09.073903.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_16T05_28_09.073903
path:
- '**/details_harness|gsm8k|5_2023-10-16T05-28-09.073903.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-16T05-28-09.073903.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_16T05_28_09.073903
path:
- '**/details_harness|winogrande|5_2023-10-16T05-28-09.073903.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-16T05-28-09.073903.parquet'
- config_name: results
data_files:
- split: 2023_10_16T05_28_09.073903
path:
- results_2023-10-16T05-28-09.073903.parquet
- split: latest
path:
- results_2023-10-16T05-28-09.073903.parquet
---
# Dataset Card for Evaluation run of The-Face-Of-Goonery/Huginn-v3-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [The-Face-Of-Goonery/Huginn-v3-13b](https://huggingface.co/The-Face-Of-Goonery/Huginn-v3-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-16T05:28:09.073903](https://huggingface.co/datasets/open-llm-leaderboard/details_The-Face-Of-Goonery__Huginn-v3-13b/blob/main/results_2023-10-16T05-28-09.073903.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.06354865771812081,
"em_stderr": 0.002498247436471722,
"f1": 0.14479865771812028,
"f1_stderr": 0.002890194024794147,
"acc": 0.3913161593683,
"acc_stderr": 0.009083920481175163
},
"harness|drop|3": {
"em": 0.06354865771812081,
"em_stderr": 0.002498247436471722,
"f1": 0.14479865771812028,
"f1_stderr": 0.002890194024794147
},
"harness|gsm8k|5": {
"acc": 0.04624715693707354,
"acc_stderr": 0.005784991662691864
},
"harness|winogrande|5": {
"acc": 0.7363851617995264,
"acc_stderr": 0.01238284929965846
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
saahith/EMSContExt_audio | 2023-10-16T05:47:57.000Z | ["region:us"] | saahith | null | null | 0 | 0 | 2023-10-16T05:47:47
---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcript
dtype: string
- name: duration
dtype: float64
splits:
- name: test
num_bytes: 90269560.0
num_examples: 109
download_size: 89515897
dataset_size: 90269560.0
---
# Dataset Card for "uva-human-val-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pbaoo2705/cpgqa_processed_eval-2 | 2023-10-16T10:29:58.000Z | ["region:us"] | pbaoo2705 | null | null | 0 | 0 | 2023-10-16T06:02:40
---
dataset_info:
features:
- name: title
dtype: string
- name: id
dtype: int64
- name: question
dtype: string
- name: answer_text
dtype: string
- name: answer_start
dtype: int64
- name: context
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: answer
dtype: string
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: test
num_bytes: 1247857
num_examples: 109
download_size: 48016
dataset_size: 1247857
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "cpgqa_processed_eval-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
expanso/sea_creatures | 2023-10-16T06:27:06.000Z | ["region:us"] | expanso | null | null | 0 | 0 | 2023-10-16T06:11:19
Entry not found
zhk/wiki-edits | 2023-10-16T07:22:11.000Z | ["task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us"] | zhk | null | null | 0 | 0 | 2023-10-16T06:17:59
---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---
The pre-training dataset of the paper "G-SPEED: General SParse Efficient Editing MoDel".
Visit https://github.com/Banner-Z/G-SPEED.git for more details.
liangyuch/laion2B-en-aesthetic-seed | 2023-10-16T06:49:47.000Z | ["region:us"] | liangyuch | null | null | 1 | 0 | 2023-10-16T06:46:39
---
dataset_info:
features:
- name: URL
dtype: string
- name: TEXT
dtype: string
- name: WIDTH
dtype: float64
- name: HEIGHT
dtype: float64
- name: similarity
dtype: float64
- name: hash
dtype: int64
- name: punsafe
dtype: float32
- name: pwatermark
dtype: float32
- name: aesthetic
dtype: float32
- name: SEED
sequence: int64
splits:
- name: train
num_bytes: 3164015506
num_examples: 6435280
download_size: 1545264197
dataset_size: 3164015506
---
# Dataset Card for "laion2B-en-aesthetic-seed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pranaykoppula/winsletdb | 2023-10-16T07:25:16.000Z | ["region:us"] | pranaykoppula | null | null | 0 | 0 | 2023-10-16T07:09:55
Entry not found
vietlegalqa/visquad_masked | 2023-10-16T07:42:35.000Z | ["region:us"] | vietlegalqa | null | null | 0 | 0 | 2023-10-16T07:41:54
---
dataset_info:
features:
- name: doc
dtype: string
- name: doc_masked
dtype: string
- name: qs
dtype: string
- name: ans
dtype: string
splits:
- name: train
num_bytes: 474445339
num_examples: 130319
download_size: 40990532
dataset_size: 474445339
---
# Dataset Card for "visquad_masked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
unicord/outputx | 2023-10-16T07:43:38.000Z | ["region:us"] | unicord | null | null | 0 | 0 | 2023-10-16T07:43:38
Entry not found
indiejoseph/cc100-yue | 2023-10-17T19:40:14.000Z | ["region:us"] | indiejoseph | null | null | 1 | 0 | 2023-10-16T07:46:39
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 32135136
num_examples: 176047
download_size: 23579906
dataset_size: 32135136
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "cc100-yue"
The Filtered Cantonese Dataset is a subset of the larger CC100 corpus that has been filtered to include only Cantonese language content. It is designed to facilitate various NLP tasks, such as text classification, sentiment analysis, named entity recognition, and machine translation, among others.
## Filtering Process
The filtering process follows the article [Building a Hong Kongese Language Identifier](https://medium.com/@kyubi_fox/building-a-hong-kongese-language-identifier-5e20fd221323) by ToastyNews.
masterwu/OPV2V | 2023-10-16T07:53:06.000Z | ["region:us"] | masterwu | null | null | 0 | 0 | 2023-10-16T07:53:06
Entry not found
ostapeno/qa-openai_icl5_clen128_maxD-1_maxC8000_0 | 2023-10-16T08:11:21.000Z | ["region:us"] | ostapeno | null | null | 0 | 0 | 2023-10-16T08:11:10
## model_setting: openai
## max_context_length: 128
## max_tokens_instruction: 128
## max_tokens_response: 1024
## top_p: 0.9
## num_iterations: 1
## temperature: 0.7
## max_documents_per_subject: -1
## max_contexts_per_subject: 8000
## icl_examples: 5
## icl_dataset: lukaemon/mmlu
## icl_split: validation
## icl_use_options: True
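Taken together, these settings suggest a prompt-construction step along the following lines (a hypothetical sketch of how 5 in-context examples from the lukaemon/mmlu validation split might be formatted; the actual generation script is not included here):

```python
def build_icl_prompt(icl_examples, question, use_options=True):
    """Format in-context examples plus the target question into one prompt.

    Assumes the lukaemon/mmlu schema ('input', options 'A'-'D', 'target'),
    which is an assumption about the upstream dataset, not part of this repo.
    """
    parts = []
    for ex in icl_examples:
        block = f"Question: {ex['input']}\n"
        if use_options:
            # icl_use_options: True -> show the multiple-choice options.
            block += "".join(f"{k}. {ex[k]}\n" for k in "ABCD")
        block += f"Answer: {ex['target']}"
        parts.append(block)
    # The target question is appended last, with the answer left open.
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)
```

The resulting string would then be sent to the OpenAI model with the sampling settings listed above (temperature 0.7, top_p 0.9).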
hubei-hunan/logs | 2023-11-02T10:18:33.000Z | ["license:mit", "region:us"] | hubei-hunan | null | null | 0 | 0 | 2023-10-16T08:17:08
---
license: mit
dataset_info:
features:
- name: timestamp
dtype: string
- name: user
dtype: string
- name: command
dtype: string
- name: game
dtype: string
- name: status
dtype: string
- name: details
dtype: string
splits:
- name: train
num_bytes: 2770
num_examples: 10
download_size: 5110
dataset_size: 2770
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
Neych/Transformers | 2023-10-16T08:27:16.000Z | ["region:us"] | Neych | null | null | 0 | 0 | 2023-10-16T08:27:16
Entry not found
waveww/guanaco-llama2-1k | 2023-10-16T08:30:11.000Z | ["region:us"] | waveww | null | null | 0 | 0 | 2023-10-16T08:30:09
---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966693
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
DavidLanz/medical_instruction | 2023-10-16T08:41:48.000Z | ["task_categories:text-generation", "size_categories:1M<n<10M", "language:zh", "language:en", "license:apache-2.0", "text-generation", "region:us"] | DavidLanz | null | null | 0 | 0 | 2023-10-16T08:32:05
---
license: apache-2.0
language:
- zh
- en
tags:
- text-generation
pretty_name: medical
task_categories:
- text-generation
size_categories:
- 1M<n<10M
---
**Supervisory Fine-Tuning Dataset (SFT and RLHF)**
- Dataset Name: medical_finetune_tw.json
- Description: This dataset comprises a total of 2.06 million entries and is sourced from various sources, including:
1. Six medical department medical inquiry datasets from the [Chinese Medical Dialogue Dataset](https://github.com/Toyhom/Chinese-medical-dialogue-data), totaling 790,000 entries.
2. An online medical encyclopedia dataset, [huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa), with 360,000 entries.
3. A medical knowledge graph dataset, [huatuo_knowledge_graph_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa), with 790,000 entries. These three parts are merged, resulting in a dataset with a total of 1.95 million entries.
4. English medical inquiry dialogue data from [Kent0n-Li/ChatDoctor](https://github.com/Kent0n-Li/ChatDoctor), which includes data from HealthCareMagic-100k and GenMedGPT-5k datasets, totaling 110,000 entries.
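A merge of this kind is straightforward to sketch (file names here are hypothetical — the card only names the merged output, medical_finetune_tw.json):

```python
import json

def merge_json_datasets(paths, out_path):
    """Concatenate several JSON-list instruction files into one SFT file.

    Each input file is assumed to hold a JSON array of entries; the arrays
    are concatenated in order and written out as a single JSON array.
    """
    merged = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            merged.extend(json.load(f))
    with open(out_path, "w", encoding="utf-8") as f:
        # ensure_ascii=False keeps Chinese text readable in the output file.
        json.dump(merged, f, ensure_ascii=False)
    return len(merged)
```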
binwang/InductivE-embeddings | 2023-10-17T03:00:18.000Z | ["license:mit", "region:us"] | binwang | null | null | 0 | 0 | 2023-10-16T08:40:15
---
license: mit
---
Download files for the pre-computed embeddings.