| datasetId | card |
|---|---|
abidlabs/dddddddddd | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
ucalyptus/train-bn | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 32000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 26456406807.408
num_examples: 963636
download_size: 25146453197
dataset_size: 26456406807.408
---
# Dataset Card for "train-bn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Finnish-NLP/belebele-fi-filtered-sft | ---
dataset_info:
features:
- name: id
dtype: string
- name: source
dtype: 'null'
- name: text
dtype: 'null'
- name: chosen
dtype: 'null'
- name: rejected
dtype: 'null'
- name: prommpt
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: category
dtype: string
- name: conversations_len
dtype: int64
- name: person_1
dtype: string
- name: person_2
dtype: string
- name: instruction
dtype: string
- name: response
dtype: string
- name: sample_words
dtype: int64
splits:
- name: train
num_bytes: 414674
num_examples: 300
download_size: 269823
dataset_size: 414674
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-4.0
task_categories:
- text-generation
- question-answering
language:
- fi
---
# Dataset Card for Finnish-NLP/belebele-fi-filtered-sft
## Creation process
- Finnish subset loaded from facebook/belebele |
indicbench/hellaswag_or | ---
dataset_info:
features:
- name: ind
dtype: int64
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 32073115
num_examples: 10042
- name: test
num_bytes: 30890330
num_examples: 10003
download_size: 21008527
dataset_size: 62963445
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
silk-road/Haruhi-Baize-Role-Playing-Conversation | ---
license: cc-by-4.0
task_categories:
- text-generation
language:
- zh
---
## Haruhi-Zero Conversation Training Data
We plan to extend ChatHaruhi from few-shot to zero-shot. This dataset records the results of Baize-style mutual chats between various (Chinese) role-playing APIs.
`ids` identifies the character cards of the two bots in a chat; the card details can be found at https://huggingface.co/datasets/silk-road/Haruhi-Zero-RolePlaying-movie-PIPPA
When id0 appears for the first time, its card is also recorded in the `prompt` field.
During a chat, each id in `ids` corresponds to its card.
- openai means both chat bots use OpenAI
- GLM means both chat bots use CharacterGLM
- Claude means both chat bots use Claude
- Claude_openai means id0 uses Claude and id1 uses OpenAI
- Baichuan means both chat bots use Character-Baichuan-Turbo
Baichuan currently has severe rate limits; if anyone has a Baichuan endpoint with higher concurrency, please contact us to borrow it (email: chengli.thu@gmail.com),
or run the code below (start_id = 10590, end_id = 12708) and send the results back to us:
https://github.com/LC1332/Zero-Haruhi/blob/main/notebook/GenerateBaizeBaichuan.ipynb
Balancing generation time, cost, and quality, we plan to train first on the openai and Claude_openai data; 15000/2000 samples have been collected for these two so far, and more data is being generated.
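The backend naming convention described above can be sketched as a small helper (a hypothetical illustration of the convention only; the dataset itself just stores the tag strings):

```python
def bot_backends(tag: str):
    """Split a backend tag into the backends used by id0 and id1.

    A single name (e.g. "openai") means both chat bots use that backend;
    a compound name (e.g. "Claude_openai") means id0 uses the first
    backend and id1 uses the second.
    """
    parts = tag.split("_", 1)
    if len(parts) == 1:
        return parts[0], parts[0]
    return parts[0], parts[1]

print(bot_backends("GLM"))            # ('GLM', 'GLM')
print(bot_backends("Claude_openai"))  # ('Claude', 'openai')
```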
Main project link:
https://github.com/LC1332/Chat-Haruhi-Suzumiya
## APIs and Servers
If you have OpenAI, Claude, or Character-Baichuan API resources and would like to participate, please contact me: send an email, or leave your WeChat handle on Zhihu at https://www.zhihu.com/people/cheng-li-47
If you have enough training resources to tune models at the 13B or Yi-34B scale (2000 context length), you are also welcome to join the later training stages of the project.
If you can organize comparison annotation for human feedback, please contact us as well.
Overall plan:
https://o9z6tor1qu.feishu.cn/docx/LxTWdGnP2oQ0oUx8H0wcmyZCnrb
If time permits, we may later use the results from the various APIs for RLHF or DPO.
|
baira/indian_food_images | ---
license: openrail
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': burger
'1': butter_naan
'2': chai
'3': chapati
'4': chole_bhature
'5': dal_makhani
'6': dhokla
'7': fried_rice
'8': idli
'9': jalebi
'10': kaathi_rolls
'11': kadai_paneer
'12': kulfi
'13': masala_dosa
'14': momos
'15': paani_puri
'16': pakode
'17': pav_bhaji
'18': pizza
'19': samosa
splits:
- name: train
num_bytes: 1377006438.2874336
num_examples: 5328
- name: test
num_bytes: 235132199.3925666
num_examples: 941
download_size: 1600810218
dataset_size: 1612138637.6800003
---
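The `class_label` block above maps integer labels to dish names. A minimal plain-Python sketch of the decoding (this mirrors what `datasets.ClassLabel.int2str` does once the dataset is loaded):

```python
# Label names copied from the card's class_label block above, in index order.
names = [
    "burger", "butter_naan", "chai", "chapati", "chole_bhature",
    "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi",
    "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos",
    "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa",
]

def int2str(label: int) -> str:
    """Decode an integer class label into its dish name."""
    return names[label]

print(int2str(9))   # jalebi
print(int2str(19))  # samosa
```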
|
CyberHarem/ira_fireemblem | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of ira (Fire Emblem)
This is the dataset of ira (Fire Emblem), containing 122 images and their tags.
The core tags of this character are `long_hair, black_hair, breasts, purple_eyes, earrings, large_breasts, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 122 | 157.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ira_fireemblem/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 122 | 90.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ira_fireemblem/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 259 | 169.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ira_fireemblem/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 122 | 138.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ira_fireemblem/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 259 | 236.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ira_fireemblem/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ira_fireemblem',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable in these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, belt, dress, elbow_gloves, holding_sword, shoulder_armor, solo, white_gloves, breastplate, thighhighs, jewelry, looking_at_viewer, pelvic_curtain, thigh_boots |
| 1 | 15 |  |  |  |  |  | 1girl, solo, looking_at_viewer, navel, medium_breasts, nipples, blush, simple_background, thighhighs, white_background, completely_nude, hand_on_hip, pussy, standing |
| 2 | 17 |  |  |  |  |  | hetero, 1girl, penis, solo_focus, 1boy, nipples, blush, open_mouth, vaginal, cum_in_pussy, thighhighs, torn_clothes, uncensored, completely_nude, gloves, jewelry, sex_from_behind |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | belt | dress | elbow_gloves | holding_sword | shoulder_armor | solo | white_gloves | breastplate | thighhighs | jewelry | looking_at_viewer | pelvic_curtain | thigh_boots | navel | medium_breasts | nipples | blush | simple_background | white_background | completely_nude | hand_on_hip | pussy | standing | hetero | penis | solo_focus | 1boy | open_mouth | vaginal | cum_in_pussy | torn_clothes | uncensored | gloves | sex_from_behind |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:---------------|:----------------|:-----------------|:-------|:---------------|:--------------|:-------------|:----------|:--------------------|:-----------------|:--------------|:--------|:-----------------|:----------|:--------|:--------------------|:-------------------|:------------------|:--------------|:--------|:-----------|:---------|:--------|:-------------|:-------|:-------------|:----------|:---------------|:---------------|:-------------|:---------|:------------------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | | | | | | X | | | X | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | |
| 2 | 17 |  |  |  |  |  | X | | | | | | | | | X | X | | | | | | X | X | | | X | | | | X | X | X | X | X | X | X | X | X | X | X |
|
NimaBoscarino/fuego-20230224-005635-529b94 | ---
tags:
- fuego
fuego:
id: 20230224-005635-529b94
status: done
script: train.py
requirements_file: requirements.txt
space_id: NimaBoscarino/fuego-20230224-005635-529b94
space_hardware: cpu-basic
---
|
ThangaTharun/Barishka | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 2640685.4150943398
num_examples: 47
- name: test
num_bytes: 376884.58490566036
num_examples: 6
download_size: 2735971
dataset_size: 3017570.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
sanjay920/rubra-function-calling-v3 | ---
dataset_info:
features:
- name: Conversations_ShareGPT
list:
- name: from
dtype: string
- name: value
dtype: string
- name: Conversations_OAI
list:
- name: content
list:
- name: text
dtype: string
- name: toolCall
struct:
- name: function
struct:
- name: arguments
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: index
dtype: int64
- name: role
dtype: string
- name: toolCall
struct:
- name: function
struct:
- name: arguments
dtype: string
- name: name
dtype: string
- name: id
dtype: string
- name: index
dtype: int64
- name: Tools
dtype: string
- name: IDs
dtype: string
splits:
- name: train
num_bytes: 211766037
num_examples: 100997
download_size: 92514787
dataset_size: 211766037
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jordyvl/DUDE_loader | ---
license: cc-by-4.0
task_categories:
- question-answering
language:
- en
pretty_name: DUDE
size_categories:
- 10K<n<100K
---
## Loading the dataset with a specific configuration
There are three OCR versions to choose from, each available in its original format or the standardized DUE format, as well as the option to load the documents as file paths or as binaries (PDF).
To load a specific configuration, pass a config from one of the following:
```python
#{bin_}{Amazon,Azure,Tesseract}_{original,due}
['Amazon_due', 'Amazon_original', 'Azure_due', 'Azure_original', 'Tesseract_due', 'Tesseract_original',
'bin_Amazon_due', 'bin_Amazon_original', 'bin_Azure_due', 'bin_Azure_original', 'bin_Tesseract_due', 'bin_Tesseract_original']
```
Loading the dataset:
```python
from datasets import load_dataset
ds = load_dataset("jordyvl/DUDE_loader", 'Amazon_original')
```
This dataset repository contains helper functions to convert the dataset to ImDB (image database) format.
We advise cloning the repository and running the conversion according to your preferences (OCR version, lowercasing, ...).
After running the above data-loading script, you should find the extracted binaries under the [HF_CACHE](https://huggingface.co/docs/datasets/cache):
`HF_CACHE/datasets/downloads/extracted/<hash>/DUDE_train-val-test_binaries`, which can be reused for the `data_dir` argument.
For example:
```bash
python3 DUDE_imdb_loader.py \
--data_dir ~/.cache/huggingface/datasets/downloads/extracted/7adde0ed7b0150b7f6b32e52bcad452991fde0f3407c8a87e74b1cb475edaa5b/DUDE_train-val-test_binaries/
```
For baselines, we recommend having a look at the [MP-DocVQA repository](https://github.com/rubenpt91/MP-DocVQA-Framework).
We strongly encourage you to benchmark your best models and submit test set predictions on the [DUDE competition leaderboard](https://rrc.cvc.uab.es/?ch=23).
To help with test set predictions, we have included a sample submission file, `RRC_DUDE_testset_submission_example.json`.
|
gsstein/50-percent-human-dataset-llama | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: summary
dtype: string
- name: text
dtype: string
- name: prompt
dtype: string
- name: generated
dtype: bool
- name: raw_summary
dtype: string
splits:
- name: train
num_bytes: 129424994
num_examples: 15326
- name: test
num_bytes: 4636641
num_examples: 576
- name: validation
num_bytes: 4919881
num_examples: 576
download_size: 84420006
dataset_size: 138981516
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
lil-lab/kilogram | ---
annotations_creators:
- crowdsourced
language:
- en
multilinguality:
- monolingual
pretty_name: KiloGram
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- tangrams
- reference-games
- vision-language
viewer: false
---
Preprocessed training and evaluation data from KiloGram.
KiloGram dataset and code repo: https://github.com/lil-lab/kilogram
---
# File Formats
## Training Set
Texts: `train_*.json` are all in the format of `{tangramName: list(annotations)}`.
Images: Colored images with parts (under `/color`) are named in the format of `tangramName_{idx}.png`, where `idx` corresponds to the index of the annotation in the text file.
## Validation, Development, Heldout Set
Texts: `{whole, part}_{black, color}.json` are in the format of `{"targets": list(imageFileNames), "images": list(imageFileNames), "texts": list(annotations)}`. We flattened all the contexts and concatenated them into one list for each entry.
E.g. the first 10 elements in `"targets"` are the image file name of the target of the first context repeated 10 times; the first 10 of `"images"` are the image file names in that context; and the first 10 of `"texts"` are the corresponding 10 annotations in that context.
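The flattened layout described above can be undone by chunking each list back into contexts of 10. A minimal sketch with toy file names standing in for the real entries:

```python
def regroup(flat, context_size=10):
    """Chunk a flattened eval list back into per-context sublists."""
    return [flat[i:i + context_size] for i in range(0, len(flat), context_size)]

# Toy stand-ins for two contexts: "targets" repeats each context's
# target file name once per annotation in that context.
targets = ["page1_0.png"] * 10 + ["page2_3.png"] * 10
contexts = regroup(targets)
print(len(contexts))   # 2
print(contexts[0][0])  # page1_0.png
```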
`/controlled` contains experiments with constrained contexts controlled for the number of parts, and `/random` contains ones without (see Appendix A.8 in the paper).
`/development/texts/augmented/aug_dev.json` and `images/augmented.tar.bz2` are experiments in the same format as above used to evaluate the effect of adding part information.
Intermediate files:
`*/text/controlled/eval_batch_data.json` are in the format of
`{tangramName: {numOfParts: list({"target": [tangramName_{idx}, annotation], "distractors": list(list([tangramName_{idx}, annotation]))})}}`, used to generate controlled experiment jsons. Note: annotations are descriptions concatenated by "#" instead of in natural English. |
CyberHarem/antonina_neuralcloud | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of antonina/アントニーナ/安冬妮娜 (Neural Cloud)
This is the dataset of antonina/アントニーナ/安冬妮娜 (Neural Cloud), containing 34 images and their tags.
The core tags of this character are `long_hair, yellow_eyes, hat, headphones, bangs, hair_between_eyes, white_headwear, aqua_hair, green_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 34 | 55.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/antonina_neuralcloud/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 34 | 27.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/antonina_neuralcloud/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 72 | 54.94 MiB | [Download](https://huggingface.co/datasets/CyberHarem/antonina_neuralcloud/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 34 | 45.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/antonina_neuralcloud/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 72 | 83.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/antonina_neuralcloud/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/antonina_neuralcloud',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable in these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 34 |  |  |  |  |  | 1girl, looking_at_viewer, solo, black_gloves, holding, long_sleeves, jacket, blush, closed_mouth, open_clothes, black_thighhighs, sitting, black_shirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | looking_at_viewer | solo | black_gloves | holding | long_sleeves | jacket | blush | closed_mouth | open_clothes | black_thighhighs | sitting | black_shirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------------|:-------|:---------------|:----------|:---------------|:---------|:--------|:---------------|:---------------|:-------------------|:----------|:--------------|
| 0 | 34 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
Outrun32/Aivazovsky-captions-BLIP | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 3610951.0
num_examples: 50
download_size: 3612823
dataset_size: 3610951.0
---
# Dataset Card for "Aivazovsky-captions-BLIP"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/kitami_yuzu_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kitami_yuzu/喜多見柚/키타미유즈 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of kitami_yuzu/喜多見柚/키타미유즈 (THE iDOLM@STER: Cinderella Girls), containing 222 images and their tags.
The core tags of this character are `brown_hair, short_hair, brown_eyes, bangs, blunt_bangs, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 222 | 222.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitami_yuzu_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 222 | 145.96 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitami_yuzu_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 482 | 292.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitami_yuzu_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 222 | 205.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitami_yuzu_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 482 | 388.09 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitami_yuzu_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kitami_yuzu_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be discoverable in these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 23 |  |  |  |  |  | 1girl, solo, blush, looking_at_viewer, :p, smile, one_eye_closed, ;p, hoodie |
| 1 | 10 |  |  |  |  |  | 1girl, blush, cardigan, necktie, school_uniform, solo, pleated_skirt, looking_at_viewer, school_bag, smile, tongue_out |
| 2 | 9 |  |  |  |  |  | 1girl, blush, looking_at_viewer, school_uniform, smile, solo, white_shirt, collared_shirt, long_sleeves, red_necktie, closed_mouth, diagonal_stripes, white_background, pleated_skirt, simple_background, bob_cut, brown_cardigan, diagonal-striped_necktie |
| 3 | 5 |  |  |  |  |  | 1girl, blush, looking_at_viewer, solo, water, collarbone, fruit, partially_submerged, smile, bathing, cleavage, medium_breasts, onsen, tongue_out, blurry, completely_nude, heart, naked_towel, one_eye_closed, sitting, steam |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | blush | looking_at_viewer | :p | smile | one_eye_closed | ;p | hoodie | cardigan | necktie | school_uniform | pleated_skirt | school_bag | tongue_out | white_shirt | collared_shirt | long_sleeves | red_necktie | closed_mouth | diagonal_stripes | white_background | simple_background | bob_cut | brown_cardigan | diagonal-striped_necktie | water | collarbone | fruit | partially_submerged | bathing | cleavage | medium_breasts | onsen | blurry | completely_nude | heart | naked_towel | sitting | steam |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------------------|:-----|:--------|:-----------------|:-----|:---------|:-----------|:----------|:-----------------|:----------------|:-------------|:-------------|:--------------|:-----------------|:---------------|:--------------|:---------------|:-------------------|:-------------------|:--------------------|:----------|:-----------------|:---------------------------|:--------|:-------------|:--------|:----------------------|:----------|:-----------|:-----------------|:--------|:---------|:------------------|:--------|:--------------|:----------|:--------|
| 0 | 23 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | X | X | X | | X | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | X | X | | X | | | | | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | | X | X | | | | | | | | X | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
rumbleFTW/indic-pl-bert | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: input_ids
sequence:
sequence: int32
- name: phonemes
sequence: string
splits:
- name: train
num_bytes: 288672844
num_examples: 34475
download_size: 74299681
dataset_size: 288672844
---
# Dataset Card for "indic-pl-bert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
coastalcph/bmlama10 | ---
dataset_info:
features: []
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hlt-lab/mutualsample-synonym_adjective | ---
dataset_info:
features:
- name: context
dtype: string
- name: response
dtype: string
- name: reference
dtype: string
splits:
- name: train
num_bytes: 13906
num_examples: 22
download_size: 15306
dataset_size: 13906
---
# Dataset Card for "mutualsample-synonym_adjective"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pwc-india/madras_dataset | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 22751754.0
num_examples: 10
download_size: 22753302
dataset_size: 22751754.0
---
# Dataset Card for "madras_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Fredithefish/hh-rlhf-RedPajama-Chat-Format | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B | ---
pretty_name: Evaluation run of GeneZC/MiniChat-1.5-3B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [GeneZC/MiniChat-1.5-3B](https://huggingface.co/GeneZC/MiniChat-1.5-3B) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 1 configuration, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B\"\
,\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese\
\ are the [latest results from run 2023-12-02T12:48:18.451769](https://huggingface.co/datasets/open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B/blob/main/results_2023-12-02T12-48-18.451769.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.24184988627748294,\n\
\ \"acc_stderr\": 0.011794861371318703\n },\n \"harness|gsm8k|5\":\
\ {\n \"acc\": 0.24184988627748294,\n \"acc_stderr\": 0.011794861371318703\n\
\ }\n}\n```"
repo_url: https://huggingface.co/GeneZC/MiniChat-1.5-3B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_02T12_48_18.451769
path:
- '**/details_harness|gsm8k|5_2023-12-02T12-48-18.451769.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-02T12-48-18.451769.parquet'
- config_name: results
data_files:
- split: 2023_12_02T12_48_18.451769
path:
- results_2023-12-02T12-48-18.451769.parquet
- split: latest
path:
- results_2023-12-02T12-48-18.451769.parquet
---
# Dataset Card for Evaluation run of GeneZC/MiniChat-1.5-3B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/GeneZC/MiniChat-1.5-3B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [GeneZC/MiniChat-1.5-3B](https://huggingface.co/GeneZC/MiniChat-1.5-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 1 configuration, corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T12:48:18.451769](https://huggingface.co/datasets/open-llm-leaderboard/details_GeneZC__MiniChat-1.5-3B/blob/main/results_2023-12-02T12-48-18.451769.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.24184988627748294,
"acc_stderr": 0.011794861371318703
},
"harness|gsm8k|5": {
"acc": 0.24184988627748294,
"acc_stderr": 0.011794861371318703
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
bigscience-data/roots_indic-bn_wikisource | ---
language: bn
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-bn_wikisource
# wikisource_filtered
- Dataset uid: `wikisource_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.6306 % of total
- 12.7884 % of fr
- 19.8886 % of indic-bn
- 20.9966 % of indic-ta
- 2.3478 % of ar
- 4.7068 % of indic-hi
- 18.0998 % of indic-te
- 1.7155 % of es
- 19.4800 % of indic-kn
- 9.1737 % of indic-ml
- 17.1771 % of indic-mr
- 17.1870 % of indic-gu
- 70.3687 % of indic-as
- 1.0165 % of pt
- 7.8642 % of indic-pa
- 1.3501 % of vi
- 4.9411 % of indic-or
- 0.5307 % of ca
- 2.3593 % of id
- 1.5928 % of eu
### BigScience processing steps
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- remove_wiki_mojibake
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-or
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
|
Henfrey/sampledata | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 641530.0
num_examples: 12
download_size: 624109
dataset_size: 641530.0
---
# Dataset Card for "sampledata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ywan111/macbook-dataset-b1 | ---
license: apache-2.0
---
|
Nerfgun3/cyberware_style | ---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/datasets/Nerfgun3/cyberware_style/resolve/main/cyber_showcase.png"
tags:
- stable-diffusion
- text-to-image
- image-to-image
inference: false
---
# Cyberware Style Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/cyberware_style/resolve/main/cyber_showcase.png"/>
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"cyberware_style"```
Personally, I would recommend using my embeddings with a strength of 0.8, but this time I would use it just as it is.
The embedding itself is based on the dataset given by Eppinette: https://huggingface.co/Eppinette/Cyberware
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
indicbench/hellaswag_ta | ---
dataset_info:
features:
- name: ind
dtype: int64
- name: activity_label
dtype: string
- name: ctx_a
dtype: string
- name: ctx_b
dtype: string
- name: ctx
dtype: string
- name: endings
sequence: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: split_type
dtype: string
- name: label
dtype: string
splits:
- name: validation
num_bytes: 35158275
num_examples: 10042
- name: test
num_bytes: 33784800
num_examples: 10003
download_size: 22310897
dataset_size: 68943075
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
legacy107/qa_wikipedia_chunked | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer_start
dtype: int64
- name: answer
dtype: string
- name: article
dtype: string
- name: chunked_article
sequence: string
splits:
- name: train
num_bytes: 15700776313
num_examples: 110970
- name: validation
num_bytes: 1842888919
num_examples: 13833
- name: test
num_bytes: 1928000472
num_examples: 13873
download_size: 2970213547
dataset_size: 19471665704
---
# Dataset Card for "qa_wikipedia_chunked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fever/feverous | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- text-classification
task_ids: []
paperswithcode_id: feverous
pretty_name: FEVEROUS
tags:
- knowledge-verification
---
# Dataset Card for FEVEROUS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://fever.ai/dataset/feverous.html
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [FEVEROUS: Fact Extraction and VERification Over Unstructured and Structured information](https://arxiv.org/abs/2106.05707)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
With billions of individual pages on the web providing information on almost every conceivable topic, we should have
the ability to collect facts that answer almost every conceivable question. However, only a small fraction of this
information is contained in structured sources (Wikidata, Freebase, etc.) – we are therefore limited by our ability to
transform free-form text to structured knowledge. There is, however, another problem that has become the focus of a lot
of recent research and media coverage: false information coming from unreliable sources.
The FEVER workshops are a venue for work in verifiable knowledge extraction and to stimulate progress in this direction.
FEVEROUS (Fact Extraction and VERification Over Unstructured and Structured information) is a fact
verification dataset which consists of 87,026 verified claims. Each claim is annotated with evidence in the form of
sentences and/or cells from tables in Wikipedia, as well as a label indicating whether this evidence supports, refutes,
or does not provide enough information to reach a verdict. The dataset also contains annotation metadata such as
annotator actions (query keywords, clicks on page, time signatures), and the type of challenge each claim poses.
### Supported Tasks and Leaderboards
The task is verification of textual claims against textual sources.
When compared to textual entailment (TE)/natural language inference, the key difference is that in these tasks the
passage to verify each claim is given, and in recent years it typically consists of a single sentence, while in
verification systems it is retrieved from a large set of documents in order to form the evidence.
### Languages
The dataset is in English (`en`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 187.82 MB
- **Size of the generated dataset:** 123.25 MB
- **Total amount of disk used:** 311.07 MB
An example of 'wikipedia_pages' looks as follows:
```
{'id': 24435,
'label': 1,
'claim': 'Michael Folivi competed with ten teams from 2016 to 2021, appearing in 54 games and making seven goals in total.',
'evidence': [{'content': ['Michael Folivi_cell_1_2_0',
'Michael Folivi_cell_1_7_0',
'Michael Folivi_cell_1_8_0',
'Michael Folivi_cell_1_9_0',
'Michael Folivi_cell_1_12_0'],
'context': [['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0'],
['Michael Folivi_title',
'Michael Folivi_section_4',
'Michael Folivi_header_cell_1_0_0']]},
{'content': ['Michael Folivi_cell_0_13_1',
'Michael Folivi_cell_0_14_1',
'Michael Folivi_cell_0_15_1',
'Michael Folivi_cell_0_16_1',
'Michael Folivi_cell_0_18_1'],
'context': [['Michael Folivi_title',
'Michael Folivi_header_cell_0_13_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_14_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_15_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_16_0',
'Michael Folivi_header_cell_0_11_0'],
['Michael Folivi_title',
'Michael Folivi_header_cell_0_18_0',
'Michael Folivi_header_cell_0_11_0']]}],
'annotator_operations': [{'operation': 'start',
'value': 'start',
'time': 0.0},
{'operation': 'Now on', 'value': '?search=', 'time': 0.78},
{'operation': 'search', 'value': 'Michael Folivi', 'time': 78.101},
{'operation': 'Now on', 'value': 'Michael Folivi', 'time': 78.822},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_2_0',
'time': 96.202},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_7_0',
'time': 96.9},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_8_0',
'time': 97.429},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_9_0',
'time': 97.994},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_1_12_0',
'time': 99.02},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_13_1',
'time': 106.108},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_14_1',
'time': 106.702},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_15_1',
'time': 107.423},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_16_1',
'time': 108.186},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_17_1',
'time': 108.788},
{'operation': 'Highlighting',
'value': 'Michael Folivi_header_cell_0_17_0',
'time': 108.8},
{'operation': 'Highlighting',
'value': 'Michael Folivi_cell_0_18_1',
'time': 109.469},
{'operation': 'Highlighting deleted',
'value': 'Michael Folivi_cell_0_17_1',
'time': 124.28},
{'operation': 'Highlighting deleted',
'value': 'Michael Folivi_header_cell_0_17_0',
'time': 124.293},
{'operation': 'finish', 'value': 'finish', 'time': 141.351}],
'expected_challenge': '',
'challenge': 'Numerical Reasoning'}
```
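The `annotator_operations` log in the instance above can be replayed to recover the set of evidence elements that were still highlighted when the annotation finished (highlights can be deleted mid-annotation, as the last operations in the example show). A minimal sketch, not part of any official FEVEROUS tooling:

```python
def final_highlights(operations):
    """Replay an annotator's operation log and return the element IDs
    that remained highlighted at the end of the annotation."""
    highlighted = set()
    for op in operations:
        if op["operation"] == "Highlighting":
            highlighted.add(op["value"])
        elif op["operation"] == "Highlighting deleted":
            highlighted.discard(op["value"])
    return highlighted

# In the instance above, 'Michael Folivi_cell_0_17_1' and
# 'Michael Folivi_header_cell_0_17_0' are highlighted and later deleted,
# so they do not appear in the returned set.
```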
### Data Fields
The data fields are the same among all splits.
- `id` (int): ID of the sample.
- `label` (ClassLabel): Annotated label for the claim. Can be one of {"SUPPORTS", "REFUTES", "NOT ENOUGH INFO"}.
- `claim` (str): Text of the claim.
- `evidence` (list of dict): Evidence sets (at maximum three). Each set consists of dictionaries with two fields:
- `content` (list of str): List of element IDs serving as the evidence for the claim. Each element ID is in the format
`"[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]"`, where `[EVIDENCE TYPE]` can be: `sentence`, `cell`, `header_cell`,
`table_caption`, `item`.
- `context` (list of list of str): List (for each element ID in `content`) of a list of Wikipedia elements that are
automatically associated with that element ID and serve as context. This includes an article's title, relevant
sections (the section and sub-section(s) the element is located in), and for cells the closest row and column
header (multiple row/column headers if they follow each other).
- `annotator_operations` (list of dict): List of operations an annotator used to find the evidence and reach a verdict,
given the claim. Each element in the list is a dictionary with the fields:
- `operation` (str): Operation name. Any of the following:
- `start`, `finish`: Annotation started/finished. The value is the name of the operation.
- `search`: Annotator used the Wikipedia search function. The value is the entered search term or the term selected
from the automatic suggestions. If the annotator did not select any of the suggestions but instead went into
advanced search, the term is prefixed with "contains...".
- `hyperlink`: Annotator clicked on a hyperlink in the page. The value is the anchor text of the hyperlink.
- `Now on`: The page the annotator landed on after a search or a hyperlink click. The value is the PAGE ID.
- `Page search`: Annotator searched within a page. The value is the search term.
- `page-search-reset`: Annotator cleared the search box. The value is the name of the operation.
- `Highlighting`, `Highlighting deleted`: Annotator selected/unselected an element on the page. The value is
`ELEMENT ID`.
- `back-button-clicked`: Annotator pressed the back button. The value is the name of the operation.
- `value` (str): Value associated with the operation.
- `time` (float): Time in seconds from the start of the annotation.
- `expected_challenge` (str): The challenge the claim generator selected will be faced when verifying the claim, one
out of the following: `Numerical Reasoning`, `Multi-hop Reasoning`, `Entity Disambiguation`,
`Combining Tables and Text`, `Search terms not in claim`, `Other`.
- `challenge` (str): Main challenge to verify the claim, one out of the following: `Numerical Reasoning`,
`Multi-hop Reasoning`, `Entity Disambiguation`, `Combining Tables and Text`, `Search terms not in claim`, `Other`.
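Because page IDs and the `header_cell`/`table_caption` types can themselves contain underscores, naively splitting an element ID on `_` is unreliable. A small helper that matches the longer type names first; this is a sketch assuming only the five evidence types listed above, not official FEVEROUS tooling:

```python
# Longer type names first, so "header_cell" is not mistaken for "cell".
EVIDENCE_TYPES = ("table_caption", "header_cell", "sentence", "cell", "item")

def parse_element_id(element_id):
    """Split "[PAGE ID]_[EVIDENCE TYPE]_[NUMBER ID]" into its three parts."""
    for ev_type in EVIDENCE_TYPES:
        marker = f"_{ev_type}_"
        idx = element_id.find(marker)
        if idx != -1:
            return element_id[:idx], ev_type, element_id[idx + len(marker):]
    raise ValueError(f"unrecognised element ID: {element_id!r}")

print(parse_element_id("Michael Folivi_header_cell_1_0_0"))
# -> ('Michael Folivi', 'header_cell', '1_0_0')
```

Note that context IDs such as `Michael Folivi_title` or `Michael Folivi_section_4` use other suffixes and are not covered by this helper.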
### Data Splits
| | train | validation | test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 71291 | 7890 | 7845 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
```
These data annotations incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at http://creativecommons.org/licenses/by-sa/3.0/ (collectively, the “License Terms”). You may not use these files except in compliance with the applicable License Terms.
```
### Citation Information
If you use this dataset, please cite:
```bibtex
@inproceedings{Aly21Feverous,
author = {Aly, Rami and Guo, Zhijiang and Schlichtkrull, Michael Sejr and Thorne, James and Vlachos, Andreas and Christodoulopoulos, Christos and Cocarascu, Oana and Mittal, Arpit},
title = {{FEVEROUS}: Fact Extraction and {VERification} Over Unstructured and Structured information},
eprint={2106.05707},
archivePrefix={arXiv},
primaryClass={cs.CL},
year = {2021}
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
|
autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567166 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: bigscience/bloomz-7b1
metrics: []
dataset_name: futin/guess
dataset_config: vi
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
fathyshalab/MDCSI_medizin-gesundheit-pflege | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_name
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 181792
num_examples: 403
- name: test
num_bytes: 46406
num_examples: 101
download_size: 116667
dataset_size: 228198
---
# Dataset Card for "reklamation24_medizin-gesundheit-pflege-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/random_letter_find_passage_train100_eval40_rare | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 22059
num_examples: 240
- name: validation
num_bytes: 4591
num_examples: 40
download_size: 15185
dataset_size: 26650
---
# Dataset Card for "random_letter_find_passage_train100_eval40_rare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gabrielstanlay/emerson_borges | ---
license: openrail
---
|
SGBTalha/minguadoRVCv2 | ---
license: openrail
---
|
open-llm-leaderboard/details_cerebras__Cerebras-GPT-590M | ---
pretty_name: Evaluation run of cerebras/Cerebras-GPT-590M
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [cerebras/Cerebras-GPT-590M](https://huggingface.co/cerebras/Cerebras-GPT-590M)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configurations, each corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_cerebras__Cerebras-GPT-590M\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-14T22:11:07.408754](https://huggingface.co/datasets/open-llm-leaderboard/details_cerebras__Cerebras-GPT-590M/blob/main/results_2023-10-14T22-11-07.408754.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001153523489932886,\n\
\ \"em_stderr\": 0.00034761798968571054,\n \"f1\": 0.039916107382550345,\n\
\ \"f1_stderr\": 0.001153929680724628,\n \"acc\": 0.24300057504519282,\n\
\ \"acc_stderr\": 0.007948184376446\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.001153523489932886,\n \"em_stderr\": 0.00034761798968571054,\n\
\ \"f1\": 0.039916107382550345,\n \"f1_stderr\": 0.001153929680724628\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.004548900682335102,\n \
\ \"acc_stderr\": 0.0018535550440036204\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.48145224940805054,\n \"acc_stderr\": 0.014042813708888378\n\
\ }\n}\n```"
repo_url: https://huggingface.co/cerebras/Cerebras-GPT-590M
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_10_14T22_11_07.408754
path:
- '**/details_harness|drop|3_2023-10-14T22-11-07.408754.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-14T22-11-07.408754.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_14T22_11_07.408754
path:
- '**/details_harness|gsm8k|5_2023-10-14T22-11-07.408754.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-14T22-11-07.408754.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_14T22_11_07.408754
path:
- '**/details_harness|winogrande|5_2023-10-14T22-11-07.408754.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-14T22-11-07.408754.parquet'
- config_name: results
data_files:
- split: 2023_10_14T22_11_07.408754
path:
- results_2023-10-14T22-11-07.408754.parquet
- split: latest
path:
- results_2023-10-14T22-11-07.408754.parquet
---
# Dataset Card for Evaluation run of cerebras/Cerebras-GPT-590M
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/cerebras/Cerebras-GPT-590M
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [cerebras/Cerebras-GPT-590M](https://huggingface.co/cerebras/Cerebras-GPT-590M) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_cerebras__Cerebras-GPT-590M",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-14T22:11:07.408754](https://huggingface.co/datasets/open-llm-leaderboard/details_cerebras__Cerebras-GPT-590M/blob/main/results_2023-10-14T22-11-07.408754.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001153523489932886,
"em_stderr": 0.00034761798968571054,
"f1": 0.039916107382550345,
"f1_stderr": 0.001153929680724628,
"acc": 0.24300057504519282,
"acc_stderr": 0.007948184376446
},
"harness|drop|3": {
"em": 0.001153523489932886,
"em_stderr": 0.00034761798968571054,
"f1": 0.039916107382550345,
"f1_stderr": 0.001153929680724628
},
"harness|gsm8k|5": {
"acc": 0.004548900682335102,
"acc_stderr": 0.0018535550440036204
},
"harness|winogrande|5": {
"acc": 0.48145224940805054,
"acc_stderr": 0.014042813708888378
}
}
```
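The `all` block is the unweighted mean of the per-task metrics. For instance, the aggregate `acc` can be recovered from the two accuracy-based tasks in the JSON above (a minimal sketch using the values shown; the results file itself remains the authoritative source):

```python
# Per-task accuracies, copied from the results JSON above.
task_acc = {
    "harness|gsm8k|5": 0.004548900682335102,
    "harness|winogrande|5": 0.48145224940805054,
}

# The "all" block averages each metric over the tasks that report it.
mean_acc = sum(task_acc.values()) / len(task_acc)
print(mean_acc)  # matches the aggregate "acc" in the "all" block
```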
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
anzorq/kbd_speech | ---
language:
- kbd
task_categories:
- automatic-speech-recognition
- text-to-speech
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: gender
dtype: string
- name: country
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 193658385.11
num_examples: 20555
download_size: 518811329
dataset_size: 193658385.11
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "kbd_speech"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
m-a-p/Music-Instruct | ---
license: cc-by-nc-4.0
---
# Music Instruct (MI) Dataset
This is the dataset used to train and evaluate the MusiLingo model.
This dataset contains Q&A pairs related
to individual musical compositions, specifically
tailored for open-ended music queries. It originates
from the music-caption pairs in the MusicCaps
dataset.
The MI dataset was created through prompt engineering and applying few-shot learning techniques
to GPT-4. More details on dataset generation can be found in our paper *[MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response
](https://arxiv.org/abs/2309.08730)*.
The resulting MI dataset has two versions. v1 (short questions) contains 27,540 Q&A pairs seeking specific details about musical snippets, such as emotion, instrument, vocal track, tempo, and genre, typically yielding concise one- or two-sentence responses. In contrast, v2 (long questions) comprises 32,953 Q&A pairs featuring more general questions about the musical pieces, resulting in typically more extensive responses that serve as paraphrased renditions of the original caption.
## Evaluation and Dataset Splitting
You can use all the Q&A pairs (or their long/short partitions) whose audio is in the training split of AudioSet as the MI training set, and use the short-QA and long-QA pairs whose audio is in the evaluation split of AudioSet as two separate testing sets.
```python
# training set
ds_mixed_train = MIDataset(processor, '/content/drive/MyDrive/music_data', split='train', question_type='all')
ds_long_train = MIDataset(processor, '/content/drive/MyDrive/music_data', split='train', question_type='long')
ds_short_train = MIDataset(processor, '/content/drive/MyDrive/music_data', split='train', question_type='short')
# testing set for short QA
ds_short = MIDataset(processor, '/content/drive/MyDrive/music_data', split='test', question_type='short')
# testing set for long QA
ds_long = MIDataset(processor, '/content/drive/MyDrive/music_data', split='test', question_type='long')
```
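The routing described above (each Q&A pair goes to the training or testing side based on its audio clip's AudioSet split, and the testing side is further divided by question type) can be sketched in plain Python. The field names and sample records below are illustrative assumptions, not the dataset's actual schema:

```python
# Hypothetical records: each Q&A pair carries its clip's AudioSet split and its question type.
qa_pairs = [
    {"audio_split": "train", "question_type": "short", "q": "What instrument leads?", "a": "Piano."},
    {"audio_split": "eval",  "question_type": "short", "q": "What is the tempo?",     "a": "Fast."},
    {"audio_split": "eval",  "question_type": "long",  "q": "Describe the piece.",    "a": "A calm piano solo."},
]

def select(pairs, split, question_type="all"):
    """Keep pairs whose audio is in the given AudioSet split and question type."""
    return [
        p for p in pairs
        if p["audio_split"] == split
        and (question_type == "all" or p["question_type"] == question_type)
    ]

train_set  = select(qa_pairs, "train")           # all training Q&A
test_short = select(qa_pairs, "eval", "short")   # short-QA testing set
test_long  = select(qa_pairs, "eval", "long")    # long-QA testing set
```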
Evaluation uses BLEU, METEOR, ROUGE, and BERTScore.
## Citation
```
@article{deng2023musilingo,
title={MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response},
author={Deng, Zihao and Ma, Yinghao and Liu, Yudong and Guo, Rongchen and Zhang, Ge and Chen, Wenhu and Huang, Wenhao and Benetos, Emmanouil},
journal={arXiv preprint arXiv:2309.08730},
year={2023}
}
```
|
jamesargent/alpaca-small | ---
license: mit
---
|
heliosprime/twitter_dataset_1712832935 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 38735
num_examples: 92
download_size: 22355
dataset_size: 38735
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1712832935"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liwu/MNBVC | ---
annotations_creators:
- other
language:
- zh
language_creators:
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MNBVC
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for MNBVC
## Table of Contents
- [Dataset Card for MNBVC](#dataset-card-for-mnbvc)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Introduction](#dataset-introduction)
  - [Subsets](#subsets)
  - [Data Format](#data-format)
    - [Text Data](#text-data)
    - [Q&A Data](#qa-data)
  - [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://mnbvc.253874.net/
- **Repository:** https://github.com/esbatmop/MNBVC
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Introduction
On 2023-01-01, the Liwu (里屋) community, the oldest and most mysterious community on the Chinese internet (bar none), solemnly announced:
Under the wise and capable leadership of the community's admins, it is determined to play to the community's strengths and help the open-source community maintain, over the long term, the largest corpus of Chinese internet text.
The MNBVC dataset on Huggingface is updated incrementally; please visit [https://github.com/esbatmop/MNBVC](https://github.com/esbatmop/MNBVC) for additional data that has not yet been cleaned.
It can be loaded with the following script:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'law_judgement', split='train', streaming=True)
next(iter(dataset)) # get the first line
```
## Subsets
The MNBVC dataset contains several subsets:
- `law_judgement`: text from legal documents.
- `gov_xuexiqiangguo`: text from Xuexi Qiangguo (学习强国).
- `gov_report`: text from government work reports.
- `co_ann_report`: corporate annual reports.
- `code_metadata`: code metadata.
- `qa_zhihu`: Q&A data from Zhihu.
- `qa_wikihow`: Q&A data from wikiHow.
- `qa_mfa`: Q&A data from the Ministry of Foreign Affairs.
- `news_peoples_daily`: text from People's Daily.
- `wikipedia`: text from Wikipedia.
- `qa_stackexchange`: Q&A data from StackExchange.
- `qa_chatgpt`: Q&A corpus constructed with ChatGPT; thanks to [genggui001](https://github.com/genggui001) for contributing it.
- `math_qa`: Q&A data related to mathematics.
- `math_chat`: dialogue data related to mathematics, which can improve a model's chain-of-thought ability.
- `crawler_oscar`: general text cleaned from CommonCrawl.
## Data Format
The MNBVC dataset currently contains the following categories of data:
- general text
- Q&A corpora
- code corpora
- multi-turn dialogues
- forum corpora
- parallel corpora
The specific formats of these categories are documented on the [MNBVC wiki page](https://wiki.mnbvc.org/doku.php/%E7%8E%B0%E6%9C%89%E8%AF%AD%E6%96%99%E6%A0%BC%E5%BC%8F).
Data uploaded early in the project used the format below; this format will be deprecated, and the corresponding data will be re-uploaded:
```json
{
    "text": datasets.Value("string"),
    "meta": datasets.Value("string")
}
```
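A record in this legacy format is just two string fields. A minimal validation sketch (the sample record is invented for illustration; `meta` in real records may hold serialized metadata of any shape):

```python
def is_legacy_record(record):
    """Check that a record matches the deprecated {"text": str, "meta": str} schema."""
    return (
        isinstance(record, dict)
        and set(record) == {"text", "meta"}
        and isinstance(record["text"], str)
        and isinstance(record["meta"], str)
    )

sample = {"text": "some document text", "meta": '{"source": "example"}'}
```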
### Contributions
Thanks to the [Liwu community](http://mnbvc.253874.net/) for constructing this dataset.
Thanks to [silver](https://github.com/silverriver) and [jiaming](https://huggingface.co/Yjiaming) for adding and uploading this dataset to Huggingface. |
bulkbeings/human-ai-and-strategy | ---
license: mit
---
|
open-llm-leaderboard/details_PistachioAlt__Noromaid-Bagel-7B-Slerp | ---
pretty_name: Evaluation run of PistachioAlt/Noromaid-Bagel-7B-Slerp
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [PistachioAlt/Noromaid-Bagel-7B-Slerp](https://huggingface.co/PistachioAlt/Noromaid-Bagel-7B-Slerp)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one\
\ of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PistachioAlt__Noromaid-Bagel-7B-Slerp\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-29T17:27:18.779306](https://huggingface.co/datasets/open-llm-leaderboard/details_PistachioAlt__Noromaid-Bagel-7B-Slerp/blob/main/results_2023-12-29T17-27-18.779306.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6422049061069124,\n\
\ \"acc_stderr\": 0.032410788947613685,\n \"acc_norm\": 0.6464619875380062,\n\
\ \"acc_norm_stderr\": 0.03305989898949311,\n \"mc1\": 0.3659730722154223,\n\
\ \"mc1_stderr\": 0.01686294168408838,\n \"mc2\": 0.5288372703003257,\n\
\ \"mc2_stderr\": 0.015191217388559787\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6126279863481229,\n \"acc_stderr\": 0.01423587248790987,\n\
\ \"acc_norm\": 0.6450511945392492,\n \"acc_norm_stderr\": 0.013983036904094087\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6489743079067914,\n\
\ \"acc_stderr\": 0.004763155068744877,\n \"acc_norm\": 0.8458474407488548,\n\
\ \"acc_norm_stderr\": 0.0036035695286784127\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \
\ \"acc_stderr\": 0.042320736951515885,\n \"acc_norm\": 0.6,\n \"\
acc_norm_stderr\": 0.042320736951515885\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316091,\n\
\ \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316091\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.028152837942493857,\n\
\ \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.028152837942493857\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7361111111111112,\n\
\ \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.7361111111111112,\n\
\ \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \
\ \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\"\
: 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n\
\ \"acc_stderr\": 0.03669072477416906,\n \"acc_norm\": 0.6358381502890174,\n\
\ \"acc_norm_stderr\": 0.03669072477416906\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n\
\ \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.03240038086792747,\n\
\ \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.03240038086792747\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n\
\ \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.41005291005291006,\n \"acc_stderr\": 0.025331202438944437,\n \"\
acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.025331202438944437\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7548387096774194,\n \"acc_stderr\": 0.02447224384089552,\n \"\
acc_norm\": 0.7548387096774194,\n \"acc_norm_stderr\": 0.02447224384089552\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"\
acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.032250781083062896,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.032250781083062896\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586815,\n \"\
acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586815\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.024233532297758733,\n\
\ \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 0.024233532297758733\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6487179487179487,\n \"acc_stderr\": 0.024203665177902803,\n\
\ \"acc_norm\": 0.6487179487179487,\n \"acc_norm_stderr\": 0.024203665177902803\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131143,\n \
\ \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131143\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.029719142876342863,\n\
\ \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.029719142876342863\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.37748344370860926,\n \"acc_stderr\": 0.0395802723112157,\n \"\
acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.0395802723112157\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8348623853211009,\n \"acc_stderr\": 0.01591955782997606,\n \"\
acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.01591955782997606\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5416666666666666,\n \"acc_stderr\": 0.033981108902946366,\n \"\
acc_norm\": 0.5416666666666666,\n \"acc_norm_stderr\": 0.033981108902946366\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8088235294117647,\n \"acc_stderr\": 0.027599174300640766,\n \"\
acc_norm\": 0.8088235294117647,\n \"acc_norm_stderr\": 0.027599174300640766\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676166,\n \
\ \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676166\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n\
\ \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n\
\ \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n\
\ \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7272727272727273,\n \"acc_stderr\": 0.04065578140908705,\n \"\
acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.04065578140908705\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n\
\ \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5178571428571429,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.5178571428571429,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406957,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406957\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8071519795657727,\n\
\ \"acc_stderr\": 0.014108533515757433,\n \"acc_norm\": 0.8071519795657727,\n\
\ \"acc_norm_stderr\": 0.014108533515757433\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.708092485549133,\n \"acc_stderr\": 0.024476994076247326,\n\
\ \"acc_norm\": 0.708092485549133,\n \"acc_norm_stderr\": 0.024476994076247326\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3318435754189944,\n\
\ \"acc_stderr\": 0.015748421208187303,\n \"acc_norm\": 0.3318435754189944,\n\
\ \"acc_norm_stderr\": 0.015748421208187303\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7516339869281046,\n \"acc_stderr\": 0.024739981355113592,\n\
\ \"acc_norm\": 0.7516339869281046,\n \"acc_norm_stderr\": 0.024739981355113592\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n\
\ \"acc_stderr\": 0.02567025924218894,\n \"acc_norm\": 0.7138263665594855,\n\
\ \"acc_norm_stderr\": 0.02567025924218894\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.02465968518596729,\n\
\ \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.02465968518596729\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.43617021276595747,\n \"acc_stderr\": 0.02958345203628407,\n \
\ \"acc_norm\": 0.43617021276595747,\n \"acc_norm_stderr\": 0.02958345203628407\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45827900912646674,\n\
\ \"acc_stderr\": 0.01272570165695364,\n \"acc_norm\": 0.45827900912646674,\n\
\ \"acc_norm_stderr\": 0.01272570165695364\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \
\ \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6830065359477124,\n \"acc_stderr\": 0.018824219512706214,\n \
\ \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.018824219512706214\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n\
\ \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \
\ \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7591836734693878,\n \"acc_stderr\": 0.027372942201788167,\n\
\ \"acc_norm\": 0.7591836734693878,\n \"acc_norm_stderr\": 0.027372942201788167\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8706467661691543,\n\
\ \"acc_stderr\": 0.023729830881018526,\n \"acc_norm\": 0.8706467661691543,\n\
\ \"acc_norm_stderr\": 0.023729830881018526\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n\
\ \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3659730722154223,\n\
\ \"mc1_stderr\": 0.01686294168408838,\n \"mc2\": 0.5288372703003257,\n\
\ \"mc2_stderr\": 0.015191217388559787\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7940015785319653,\n \"acc_stderr\": 0.011366474352008826\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.46853677028051555,\n \
\ \"acc_stderr\": 0.013745189948450417\n }\n}\n```"
repo_url: https://huggingface.co/PistachioAlt/Noromaid-Bagel-7B-Slerp
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|arc:challenge|25_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|gsm8k|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hellaswag|10_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-29T17-27-18.779306.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-29T17-27-18.779306.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- '**/details_harness|winogrande|5_2023-12-29T17-27-18.779306.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-29T17-27-18.779306.parquet'
- config_name: results
data_files:
- split: 2023_12_29T17_27_18.779306
path:
- results_2023-12-29T17-27-18.779306.parquet
- split: latest
path:
- results_2023-12-29T17-27-18.779306.parquet
---
# Dataset Card for Evaluation run of PistachioAlt/Noromaid-Bagel-7B-Slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [PistachioAlt/Noromaid-Bagel-7B-Slerp](https://huggingface.co/PistachioAlt/Noromaid-Bagel-7B-Slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named after the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PistachioAlt__Noromaid-Bagel-7B-Slerp",
	"harness_winogrande_5",
	split="latest")
```
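Judging from the YAML configs above, each config name appears to follow the pattern `harness_<task>_<n_fewshot>`, with `-` and `:` in the task name replaced by `_`. A small helper sketching that mapping (the function name `config_name` is our own, not part of any library):

```python
def config_name(task: str, n_fewshot: int) -> str:
    """Build a config name following the pattern observed in this repo's YAML,
    e.g. "hendrycksTest-astronomy" with 5 shots -> "harness_hendrycksTest_astronomy_5".
    """
    return f"harness_{task.replace('-', '_').replace(':', '_')}_{n_fewshot}"

print(config_name("hendrycksTest-astronomy", 5))  # harness_hendrycksTest_astronomy_5
print(config_name("truthfulqa:mc", 0))            # harness_truthfulqa_mc_0
```

This is only a convenience for constructing the `name` argument to `load_dataset`; the authoritative list of configs is the YAML header of this card.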
## Latest results
These are the [latest results from run 2023-12-29T17:27:18.779306](https://huggingface.co/datasets/open-llm-leaderboard/details_PistachioAlt__Noromaid-Bagel-7B-Slerp/blob/main/results_2023-12-29T17-27-18.779306.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task's results in its own configuration, with a "latest" split for each eval):
```python
{
"all": {
"acc": 0.6422049061069124,
"acc_stderr": 0.032410788947613685,
"acc_norm": 0.6464619875380062,
"acc_norm_stderr": 0.03305989898949311,
"mc1": 0.3659730722154223,
"mc1_stderr": 0.01686294168408838,
"mc2": 0.5288372703003257,
"mc2_stderr": 0.015191217388559787
},
"harness|arc:challenge|25": {
"acc": 0.6126279863481229,
"acc_stderr": 0.01423587248790987,
"acc_norm": 0.6450511945392492,
"acc_norm_stderr": 0.013983036904094087
},
"harness|hellaswag|10": {
"acc": 0.6489743079067914,
"acc_stderr": 0.004763155068744877,
"acc_norm": 0.8458474407488548,
"acc_norm_stderr": 0.0036035695286784127
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.042320736951515885,
"acc_norm": 0.6,
"acc_norm_stderr": 0.042320736951515885
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6578947368421053,
"acc_stderr": 0.03860731599316091,
"acc_norm": 0.6578947368421053,
"acc_norm_stderr": 0.03860731599316091
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.028152837942493857,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.028152837942493857
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7361111111111112,
"acc_stderr": 0.03685651095897532,
"acc_norm": 0.7361111111111112,
"acc_norm_stderr": 0.03685651095897532
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6358381502890174,
"acc_stderr": 0.03669072477416906,
"acc_norm": 0.6358381502890174,
"acc_norm_stderr": 0.03669072477416906
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.03240038086792747,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.03240038086792747
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41005291005291006,
"acc_stderr": 0.025331202438944437,
"acc_norm": 0.41005291005291006,
"acc_norm_stderr": 0.025331202438944437
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7548387096774194,
"acc_stderr": 0.02447224384089552,
"acc_norm": 0.7548387096774194,
"acc_norm_stderr": 0.02447224384089552
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.032250781083062896,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.032250781083062896
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586815,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586815
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8704663212435233,
"acc_stderr": 0.024233532297758733,
"acc_norm": 0.8704663212435233,
"acc_norm_stderr": 0.024233532297758733
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6487179487179487,
"acc_stderr": 0.024203665177902803,
"acc_norm": 0.6487179487179487,
"acc_norm_stderr": 0.024203665177902803
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.028897748741131143,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.028897748741131143
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7016806722689075,
"acc_stderr": 0.029719142876342863,
"acc_norm": 0.7016806722689075,
"acc_norm_stderr": 0.029719142876342863
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.0395802723112157,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.0395802723112157
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8348623853211009,
"acc_stderr": 0.01591955782997606,
"acc_norm": 0.8348623853211009,
"acc_norm_stderr": 0.01591955782997606
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5416666666666666,
"acc_stderr": 0.033981108902946366,
"acc_norm": 0.5416666666666666,
"acc_norm_stderr": 0.033981108902946366
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.027599174300640766,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.027599174300640766
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.026750826994676166,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.026750826994676166
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.04065578140908705,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.04065578140908705
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5178571428571429,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.5178571428571429,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406957,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406957
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8071519795657727,
"acc_stderr": 0.014108533515757433,
"acc_norm": 0.8071519795657727,
"acc_norm_stderr": 0.014108533515757433
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.708092485549133,
"acc_stderr": 0.024476994076247326,
"acc_norm": 0.708092485549133,
"acc_norm_stderr": 0.024476994076247326
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3318435754189944,
"acc_stderr": 0.015748421208187303,
"acc_norm": 0.3318435754189944,
"acc_norm_stderr": 0.015748421208187303
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.024739981355113592,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.024739981355113592
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7138263665594855,
"acc_stderr": 0.02567025924218894,
"acc_norm": 0.7138263665594855,
"acc_norm_stderr": 0.02567025924218894
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7314814814814815,
"acc_stderr": 0.02465968518596729,
"acc_norm": 0.7314814814814815,
"acc_norm_stderr": 0.02465968518596729
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.43617021276595747,
"acc_stderr": 0.02958345203628407,
"acc_norm": 0.43617021276595747,
"acc_norm_stderr": 0.02958345203628407
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.45827900912646674,
"acc_stderr": 0.01272570165695364,
"acc_norm": 0.45827900912646674,
"acc_norm_stderr": 0.01272570165695364
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6875,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.018824219512706214,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.018824219512706214
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7591836734693878,
"acc_stderr": 0.027372942201788167,
"acc_norm": 0.7591836734693878,
"acc_norm_stderr": 0.027372942201788167
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8706467661691543,
"acc_stderr": 0.023729830881018526,
"acc_norm": 0.8706467661691543,
"acc_norm_stderr": 0.023729830881018526
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3659730722154223,
"mc1_stderr": 0.01686294168408838,
"mc2": 0.5288372703003257,
"mc2_stderr": 0.015191217388559787
},
"harness|winogrande|5": {
"acc": 0.7940015785319653,
"acc_stderr": 0.011366474352008826
},
"harness|gsm8k|5": {
"acc": 0.46853677028051555,
"acc_stderr": 0.013745189948450417
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567169 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- futin/guess
eval_info:
task: text_zero_shot_classification
model: bigscience/bloomz-560m
metrics: []
dataset_name: futin/guess
dataset_config: vi
dataset_split: test
col_mapping:
text: text
classes: classes
target: target
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. |
KAUE2006/BurroMarioJorgeAndrade | ---
license: openrail
---
|
inswave/AISqaure_Dpo_v1 | ---
license: cc-by-nc-sa-4.0
---
|
Falah/islamic_prompts | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 1500926
num_examples: 5000
download_size: 189732
dataset_size: 1500926
---
# Dataset Card for "islamic_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ncats/EpiSet4NER-v2 | ---
annotations_creators:
- machine-generated
- expert-generated
language:
- en
language_creators:
- found
- expert-generated
license:
- other
multilinguality:
- monolingual
pretty_name: EpiSet4NER-v2
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- epidemiology
- rare disease
- named entity recognition
- NER
- NIH
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard)
- **Paper:** Pending
### Dataset Summary
EpiSet4NER-v2 is a gold-standard dataset for epidemiological entity recognition of location, epidemiologic types (e.g. "prevalence", "annual incidence", "estimated occurrence"), and epidemiological rates (e.g. "1.7 per 1,000,000 live births", "2.1:1.000.000", "one in five million", "0.03%") created by the [Genetic and Rare Diseases Information Center (GARD)](https://rarediseases.info.nih.gov/), a program in [the National Center for Advancing Translational Sciences](https://ncats.nih.gov/), one of the 27 [National Institutes of Health](https://www.nih.gov/). It was labeled programmatically using spaCy NER and rule-based methods, then manually validated by biomedical researchers, including a GARD curator (genetic and rare disease expert). This weakly-supervised teaching method allowed us to construct this high quality dataset in an efficient manner and achieve satisfactory performance on a multi-type token classification problem. It was used to train [EpiExtract4GARD-v2](https://huggingface.co/ncats/EpiExtract4GARD-v2), a BioBERT-based model fine-tuned for NER.
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature that indicates sentence number.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-LOC` (1), `I-LOC` (2), `B-EPI` (3), `I-EPI` (4),`B-STAT` (5),`I-STAT` (6).
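As a quick illustration of the tagging scheme (a hypothetical sketch with a made-up sentence, not an example drawn from the dataset itself), the integer `ner_tags` can be decoded back to their string labels with a simple lookup:

```python
# Label list matching the class indices described above.
NER_TAGS = ["O", "B-LOC", "I-LOC", "B-EPI", "I-EPI", "B-STAT", "I-STAT"]

def decode_tags(tag_ids):
    """Map integer ner_tags back to their string labels."""
    return [NER_TAGS[i] for i in tag_ids]

# A made-up example sentence, tagged by hand for illustration only.
tokens = ["The", "prevalence", "in", "France", "is", "1.7", "per", "1,000,000"]
tag_ids = [0, 3, 0, 1, 0, 5, 6, 6]
for token, tag in zip(tokens, decode_tags(tag_ids)):
    print(f"{token}\t{tag}")
```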
### Data Splits
|name |train |validation|test|
|---------|-----:|----:|----:|
|EpiSet \# of abstracts|456|114|50|
|EpiSet \# tokens |117888|31262|13910|
## Dataset Creation

*Figure 1:* Creation of EpiSet4NER by NIH/NCATS
Comparing the programmatically labeled test set to the manually corrected test set allowed us to measure the precision, recall, and F1 of the programmatic labeling.
*Table 1:* Programmatic labeling of EpiSet4NER
| Evaluation Level | Entity | Precision | Recall | F1 |
|:----------------:|:------------------------:|:---------:|:------:|:-----:|
| Entity-Level | Overall | 0.559 | 0.662 | 0.606 |
| | Location | 0.597 | 0.661 | 0.627 |
| | Epidemiologic Type | 0.854 | 0.911 | 0.882 |
| | Epidemiologic Rate | 0.175 | 0.255 | 0.207 |
| Token-Level | Overall | 0.805 | 0.710 | 0.755 |
| | Location | 0.868 | 0.713 | 0.783 |
| | Epidemiologic Type | 0.908 | 0.908 | 0.908 |
| | Epidemiologic Rate | 0.739 | 0.645 | 0.689 |
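The token-level rows above can be reproduced with a simple count-based scorer. The following is a minimal sketch assuming one common convention (exact tag match on non-`O` tokens); the exact scoring used for Table 1 may differ:

```python
def token_prf(gold, pred):
    """Token-level precision/recall/F1 over non-"O" tags.

    A token is a true positive when the predicted tag exactly matches a
    non-"O" gold tag; predicted non-"O" tags that miss are false positives,
    and unmatched non-"O" gold tags are false negatives.
    """
    tp = sum(1 for g, p in zip(gold, pred) if g == p and g != "O")
    fp = sum(1 for g, p in zip(gold, pred) if p != "O" and p != g)
    fn = sum(1 for g, p in zip(gold, pred) if g != "O" and g != p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy comparison: the labeler found the statistic but missed the location.
gold = ["B-LOC", "O", "B-STAT", "I-STAT"]
pred = ["O", "O", "B-STAT", "I-STAT"]
print(token_prf(gold, pred))  # perfect precision, recall 2/3, F1 0.8
```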
An example of the text labeling:

*Figure 2:* Text Labeling using spaCy and rule-based labeling. Ideal labeling is bolded on the left. Actual programmatic output is on the right. [\[Figure citation\]](https://pubmed.ncbi.nlm.nih.gov/33649778/)
### Curation Rationale
To train ML/DL models that automate the process of rare disease epidemiological curation. This is crucial information to patients & families, researchers, grantors, and policy makers, primarily for funding purposes.
### Source Data
620 rare disease abstracts, drawn from 488 diseases, that an LSTM RNN rare disease epidemiology classifier labeled as epidemiological. See Figure 1.
#### Initial Data Collection and Normalization
A random sample of 500 disease names was gathered from a list of ~6061 rare diseases tracked by GARD, querying until ≥50 abstracts had been returned for each disease or the EBI RESTful API results were exhausted. Though we retrieved ~25,000 abstracts from PubMed's database, only 7699 unique abstracts were returned, covering 488 diseases. Of those 7699 abstracts, only 620 were classified as epidemiological by the LSTM RNN epidemiological classifier.
### Annotations
#### Annotation process
Programmatic labeling. See [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/create_labeled_dataset_V2.ipynb) and then [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/modify_existing_labels.ipynb). The test set was manually corrected after creation.
#### Who are the annotators?
Programmatic labeling was done by [@William Kariampuzha](https://github.com/wzkariampuzha), one of the NCATS researchers.
The test set was manually corrected by 2 more NCATS researchers and a GARD curator (genetic and rare disease expert).
### Personal and Sensitive Information
None. These are freely available abstracts from PubMed.
## Considerations for Using the Data
### Social Impact of Dataset
Assisting the 25-30 million Americans with rare diseases. The dataset can also be useful for Orphanet or CDC researchers and curators.
### Discussion of Biases and Limitations
- There were errors in the source file of rare disease names and synonyms, which may have led to some unrelated abstracts being included in the training, validation, and test sets.
- The abstracts were gathered through the EBI API and are thus subject to any biases of that API. The NCBI API returns very different results, as shown by an API analysis here.
- The [long short-term memory recurrent neural network epi classifier](https://pubmed.ncbi.nlm.nih.gov/34457147/) was used to sift the 7699 rare disease abstracts. This model had a hold-out validation F1 score of 0.886 and a test F1 of 0.701 (scored against a GARD curator who used full-text articles to determine whether each abstract was epidemiological). With 620 epi abstracts filtered from the 7699 original rare disease abstracts, there are likely several false positive and false negative epi abstracts in the set.
- Tokenization was done by spaCy which may be a limitation (or not) for current and future models trained on this set.
- The programmatic labeling was very imprecise, as seen in Table 1. This is likely the largest limitation of the [BioBERT-based model](https://huggingface.co/ncats/EpiExtract4GARD) trained on this set.
- The test set was difficult to validate even for general NCATS researchers, which is why we relied on a rare disease expert to verify our modifications. Because epidemiological information identification is quite difficult for non-expert humans, this set (and especially a possible future gold-standard dataset) represents a challenging benchmark for NLP systems to compete on, especially those focusing on numeracy.
## Additional Information
### Dataset Curators
[NIH GARD](https://rarediseases.info.nih.gov/about-gard/pages/23/about-gard)
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at NCATS/Axle Informatics for adding this dataset. |
atmallen/qm_alice_hard_4_grader_last_1.0e | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: alice_label
dtype: bool
- name: bob_label
dtype: bool
- name: difficulty
dtype: int64
- name: statement
dtype: string
- name: choices
sequence: string
- name: character
dtype: string
- name: label
dtype:
class_label:
names:
'0': 'False'
'1': 'True'
splits:
- name: train
num_bytes: 2899268.0
num_examples: 37091
- name: validation
num_bytes: 310182.0
num_examples: 3969
- name: test
num_bytes: 306854.0
num_examples: 3926
download_size: 1013749
dataset_size: 3516304.0
---
# Dataset Card for "qm_alice_hard_4_grader_last_1.0e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Hunzla/simplified_google_speech_commands_wav2vec2_960h | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: label
dtype:
class_label:
names:
'0': 'yes'
'1': 'no'
'2': up
'3': down
'4': left
'5': right
'6': 'on'
'7': 'off'
'8': stop
'9': go
'10': zero
'11': one
'12': two
'13': three
'14': four
'15': five
'16': six
'17': seven
'18': eight
'19': nine
'20': bed
'21': bird
'22': cat
'23': dog
'24': happy
'25': house
'26': marvin
'27': sheila
'28': tree
'29': wow
'30': backward
'31': forward
'32': follow
'33': learn
'34': visual
'35': _silence_
- name: is_unknown
dtype: bool
- name: speaker_id
dtype: string
- name: utterance_id
dtype: int8
- name: embeddings
sequence:
sequence: float32
- name: input_length
dtype: int64
- name: label_encodings
sequence: int64
splits:
- name: train
num_bytes: 11238935260.0
num_examples: 84848
- name: test
num_bytes: 650121138.75
num_examples: 4890
- name: validation
num_bytes: 1322841633.25
num_examples: 9982
download_size: 13489230691
dataset_size: 13211898032.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
open-llm-leaderboard/details_ZhangShenao__0.001_idpo_declr_iter_3 | ---
pretty_name: Evaluation run of ZhangShenao/0.001_idpo_declr_iter_3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ZhangShenao/0.001_idpo_declr_iter_3](https://huggingface.co/ZhangShenao/0.001_idpo_declr_iter_3)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ZhangShenao__0.001_idpo_declr_iter_3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-08T11:06:37.091882](https://huggingface.co/datasets/open-llm-leaderboard/details_ZhangShenao__0.001_idpo_declr_iter_3/blob/main/results_2024-04-08T11-06-37.091882.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6018723758193364,\n\
\ \"acc_stderr\": 0.03317800691328414,\n \"acc_norm\": 0.6080179723856561,\n\
\ \"acc_norm_stderr\": 0.03386848605749199,\n \"mc1\": 0.3843329253365973,\n\
\ \"mc1_stderr\": 0.017028707301245203,\n \"mc2\": 0.5471745669091238,\n\
\ \"mc2_stderr\": 0.015938274457791324\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6143344709897611,\n \"acc_stderr\": 0.014224250973257182,\n\
\ \"acc_norm\": 0.6416382252559727,\n \"acc_norm_stderr\": 0.014012883334859855\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6645090619398526,\n\
\ \"acc_stderr\": 0.0047119683790690274,\n \"acc_norm\": 0.8530173272256523,\n\
\ \"acc_norm_stderr\": 0.0035336498517284888\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n\
\ \"acc_stderr\": 0.04266763404099583,\n \"acc_norm\": 0.5777777777777777,\n\
\ \"acc_norm_stderr\": 0.04266763404099583\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.631578947368421,\n \"acc_stderr\": 0.03925523381052932,\n\
\ \"acc_norm\": 0.631578947368421,\n \"acc_norm_stderr\": 0.03925523381052932\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n\
\ \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \
\ \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6792452830188679,\n \"acc_stderr\": 0.028727502957880267,\n\
\ \"acc_norm\": 0.6792452830188679,\n \"acc_norm_stderr\": 0.028727502957880267\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7083333333333334,\n\
\ \"acc_stderr\": 0.03800968060554859,\n \"acc_norm\": 0.7083333333333334,\n\
\ \"acc_norm_stderr\": 0.03800968060554859\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\"\
: 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6127167630057804,\n\
\ \"acc_stderr\": 0.03714325906302065,\n \"acc_norm\": 0.6127167630057804,\n\
\ \"acc_norm_stderr\": 0.03714325906302065\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n\
\ \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n\
\ \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.03261936918467383,\n\
\ \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.03261936918467383\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n\
\ \"acc_stderr\": 0.046446020912223177,\n \"acc_norm\": 0.42105263157894735,\n\
\ \"acc_norm_stderr\": 0.046446020912223177\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5103448275862069,\n \"acc_stderr\": 0.04165774775728763,\n\
\ \"acc_norm\": 0.5103448275862069,\n \"acc_norm_stderr\": 0.04165774775728763\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.43915343915343913,\n \"acc_stderr\": 0.025559920550531,\n \"acc_norm\"\
: 0.43915343915343913,\n \"acc_norm_stderr\": 0.025559920550531\n },\n\
\ \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3492063492063492,\n\
\ \"acc_stderr\": 0.04263906892795132,\n \"acc_norm\": 0.3492063492063492,\n\
\ \"acc_norm_stderr\": 0.04263906892795132\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7225806451612903,\n \"acc_stderr\": 0.025470196835900055,\n \"\
acc_norm\": 0.7225806451612903,\n \"acc_norm_stderr\": 0.025470196835900055\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"\
acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\"\
: 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7333333333333333,\n \"acc_stderr\": 0.03453131801885417,\n\
\ \"acc_norm\": 0.7333333333333333,\n \"acc_norm_stderr\": 0.03453131801885417\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7575757575757576,\n \"acc_stderr\": 0.030532892233932026,\n \"\
acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.030532892233932026\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8341968911917098,\n \"acc_stderr\": 0.026839845022314415,\n\
\ \"acc_norm\": 0.8341968911917098,\n \"acc_norm_stderr\": 0.026839845022314415\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5615384615384615,\n \"acc_stderr\": 0.02515826601686859,\n \
\ \"acc_norm\": 0.5615384615384615,\n \"acc_norm_stderr\": 0.02515826601686859\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2740740740740741,\n \"acc_stderr\": 0.027195934804085626,\n \
\ \"acc_norm\": 0.2740740740740741,\n \"acc_norm_stderr\": 0.027195934804085626\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6008403361344538,\n \"acc_stderr\": 0.03181110032413926,\n \
\ \"acc_norm\": 0.6008403361344538,\n \"acc_norm_stderr\": 0.03181110032413926\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"\
acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7908256880733945,\n \"acc_stderr\": 0.01743793717334323,\n \"\
acc_norm\": 0.7908256880733945,\n \"acc_norm_stderr\": 0.01743793717334323\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.44907407407407407,\n \"acc_stderr\": 0.03392238405321616,\n \"\
acc_norm\": 0.44907407407407407,\n \"acc_norm_stderr\": 0.03392238405321616\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7745098039215687,\n \"acc_stderr\": 0.02933116229425174,\n \"\
acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.02933116229425174\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7468354430379747,\n \"acc_stderr\": 0.02830465794303529,\n \
\ \"acc_norm\": 0.7468354430379747,\n \"acc_norm_stderr\": 0.02830465794303529\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6591928251121076,\n\
\ \"acc_stderr\": 0.03181149747055359,\n \"acc_norm\": 0.6591928251121076,\n\
\ \"acc_norm_stderr\": 0.03181149747055359\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6564885496183206,\n \"acc_stderr\": 0.041649760719448786,\n\
\ \"acc_norm\": 0.6564885496183206,\n \"acc_norm_stderr\": 0.041649760719448786\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"\
acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.034089978868575295,\n\
\ \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.034089978868575295\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n\
\ \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8007662835249042,\n\
\ \"acc_stderr\": 0.014283378044296418,\n \"acc_norm\": 0.8007662835249042,\n\
\ \"acc_norm_stderr\": 0.014283378044296418\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6734104046242775,\n \"acc_stderr\": 0.025248264774242826,\n\
\ \"acc_norm\": 0.6734104046242775,\n \"acc_norm_stderr\": 0.025248264774242826\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.34972067039106147,\n\
\ \"acc_stderr\": 0.015949308790233645,\n \"acc_norm\": 0.34972067039106147,\n\
\ \"acc_norm_stderr\": 0.015949308790233645\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6372549019607843,\n \"acc_stderr\": 0.02753007844711031,\n\
\ \"acc_norm\": 0.6372549019607843,\n \"acc_norm_stderr\": 0.02753007844711031\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6913183279742765,\n\
\ \"acc_stderr\": 0.026236965881153266,\n \"acc_norm\": 0.6913183279742765,\n\
\ \"acc_norm_stderr\": 0.026236965881153266\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6790123456790124,\n \"acc_stderr\": 0.025976566010862737,\n\
\ \"acc_norm\": 0.6790123456790124,\n \"acc_norm_stderr\": 0.025976566010862737\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.42959582790091266,\n\
\ \"acc_stderr\": 0.012643004623790203,\n \"acc_norm\": 0.42959582790091266,\n\
\ \"acc_norm_stderr\": 0.012643004623790203\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6213235294117647,\n \"acc_stderr\": 0.02946513363977613,\n\
\ \"acc_norm\": 0.6213235294117647,\n \"acc_norm_stderr\": 0.02946513363977613\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.619281045751634,\n \"acc_stderr\": 0.019643801557924803,\n \
\ \"acc_norm\": 0.619281045751634,\n \"acc_norm_stderr\": 0.019643801557924803\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n\
\ \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n\
\ \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6448979591836734,\n \"acc_stderr\": 0.030635655150387638,\n\
\ \"acc_norm\": 0.6448979591836734,\n \"acc_norm_stderr\": 0.030635655150387638\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n\
\ \"acc_stderr\": 0.02768691358801301,\n \"acc_norm\": 0.8109452736318408,\n\
\ \"acc_norm_stderr\": 0.02768691358801301\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n\
\ \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n\
\ \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640044,\n\
\ \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640044\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3843329253365973,\n\
\ \"mc1_stderr\": 0.017028707301245203,\n \"mc2\": 0.5471745669091238,\n\
\ \"mc2_stderr\": 0.015938274457791324\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7719021310181531,\n \"acc_stderr\": 0.011793015817663592\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2850644427596664,\n \
\ \"acc_stderr\": 0.012435042334904004\n }\n}\n```"
repo_url: https://huggingface.co/ZhangShenao/0.001_idpo_declr_iter_3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|arc:challenge|25_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|gsm8k|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hellaswag|10_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-08T11-06-37.091882.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-08T11-06-37.091882.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- '**/details_harness|winogrande|5_2024-04-08T11-06-37.091882.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-08T11-06-37.091882.parquet'
- config_name: results
data_files:
- split: 2024_04_08T11_06_37.091882
path:
- results_2024-04-08T11-06-37.091882.parquet
- split: latest
path:
- results_2024-04-08T11-06-37.091882.parquet
---
# Dataset Card for Evaluation run of ZhangShenao/0.001_idpo_declr_iter_3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ZhangShenao/0.001_idpo_declr_iter_3](https://huggingface.co/ZhangShenao/0.001_idpo_declr_iter_3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ZhangShenao__0.001_idpo_declr_iter_3",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-04-08T11:06:37.091882](https://huggingface.co/datasets/open-llm-leaderboard/details_ZhangShenao__0.001_idpo_declr_iter_3/blob/main/results_2024-04-08T11-06-37.091882.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6018723758193364,
"acc_stderr": 0.03317800691328414,
"acc_norm": 0.6080179723856561,
"acc_norm_stderr": 0.03386848605749199,
"mc1": 0.3843329253365973,
"mc1_stderr": 0.017028707301245203,
"mc2": 0.5471745669091238,
"mc2_stderr": 0.015938274457791324
},
"harness|arc:challenge|25": {
"acc": 0.6143344709897611,
"acc_stderr": 0.014224250973257182,
"acc_norm": 0.6416382252559727,
"acc_norm_stderr": 0.014012883334859855
},
"harness|hellaswag|10": {
"acc": 0.6645090619398526,
"acc_stderr": 0.0047119683790690274,
"acc_norm": 0.8530173272256523,
"acc_norm_stderr": 0.0035336498517284888
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5777777777777777,
"acc_stderr": 0.04266763404099583,
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.04266763404099583
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.631578947368421,
"acc_stderr": 0.03925523381052932,
"acc_norm": 0.631578947368421,
"acc_norm_stderr": 0.03925523381052932
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.028727502957880267,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.028727502957880267
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7083333333333334,
"acc_stderr": 0.03800968060554859,
"acc_norm": 0.7083333333333334,
"acc_norm_stderr": 0.03800968060554859
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6127167630057804,
"acc_stderr": 0.03714325906302065,
"acc_norm": 0.6127167630057804,
"acc_norm_stderr": 0.03714325906302065
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.74,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5319148936170213,
"acc_stderr": 0.03261936918467383,
"acc_norm": 0.5319148936170213,
"acc_norm_stderr": 0.03261936918467383
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.42105263157894735,
"acc_stderr": 0.046446020912223177,
"acc_norm": 0.42105263157894735,
"acc_norm_stderr": 0.046446020912223177
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5103448275862069,
"acc_stderr": 0.04165774775728763,
"acc_norm": 0.5103448275862069,
"acc_norm_stderr": 0.04165774775728763
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.43915343915343913,
"acc_stderr": 0.025559920550531,
"acc_norm": 0.43915343915343913,
"acc_norm_stderr": 0.025559920550531
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3492063492063492,
"acc_stderr": 0.04263906892795132,
"acc_norm": 0.3492063492063492,
"acc_norm_stderr": 0.04263906892795132
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7225806451612903,
"acc_stderr": 0.025470196835900055,
"acc_norm": 0.7225806451612903,
"acc_norm_stderr": 0.025470196835900055
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7333333333333333,
"acc_stderr": 0.03453131801885417,
"acc_norm": 0.7333333333333333,
"acc_norm_stderr": 0.03453131801885417
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.030532892233932026,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.030532892233932026
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8341968911917098,
"acc_stderr": 0.026839845022314415,
"acc_norm": 0.8341968911917098,
"acc_norm_stderr": 0.026839845022314415
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5615384615384615,
"acc_stderr": 0.02515826601686859,
"acc_norm": 0.5615384615384615,
"acc_norm_stderr": 0.02515826601686859
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2740740740740741,
"acc_stderr": 0.027195934804085626,
"acc_norm": 0.2740740740740741,
"acc_norm_stderr": 0.027195934804085626
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6008403361344538,
"acc_stderr": 0.03181110032413926,
"acc_norm": 0.6008403361344538,
"acc_norm_stderr": 0.03181110032413926
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7908256880733945,
"acc_stderr": 0.01743793717334323,
"acc_norm": 0.7908256880733945,
"acc_norm_stderr": 0.01743793717334323
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.44907407407407407,
"acc_stderr": 0.03392238405321616,
"acc_norm": 0.44907407407407407,
"acc_norm_stderr": 0.03392238405321616
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7745098039215687,
"acc_stderr": 0.02933116229425174,
"acc_norm": 0.7745098039215687,
"acc_norm_stderr": 0.02933116229425174
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7468354430379747,
"acc_stderr": 0.02830465794303529,
"acc_norm": 0.7468354430379747,
"acc_norm_stderr": 0.02830465794303529
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6591928251121076,
"acc_stderr": 0.03181149747055359,
"acc_norm": 0.6591928251121076,
"acc_norm_stderr": 0.03181149747055359
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6564885496183206,
"acc_stderr": 0.041649760719448786,
"acc_norm": 0.6564885496183206,
"acc_norm_stderr": 0.041649760719448786
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.042365112580946336,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.042365112580946336
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.034089978868575295,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.034089978868575295
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8007662835249042,
"acc_stderr": 0.014283378044296418,
"acc_norm": 0.8007662835249042,
"acc_norm_stderr": 0.014283378044296418
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6734104046242775,
"acc_stderr": 0.025248264774242826,
"acc_norm": 0.6734104046242775,
"acc_norm_stderr": 0.025248264774242826
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.34972067039106147,
"acc_stderr": 0.015949308790233645,
"acc_norm": 0.34972067039106147,
"acc_norm_stderr": 0.015949308790233645
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6372549019607843,
"acc_stderr": 0.02753007844711031,
"acc_norm": 0.6372549019607843,
"acc_norm_stderr": 0.02753007844711031
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6913183279742765,
"acc_stderr": 0.026236965881153266,
"acc_norm": 0.6913183279742765,
"acc_norm_stderr": 0.026236965881153266
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6790123456790124,
"acc_stderr": 0.025976566010862737,
"acc_norm": 0.6790123456790124,
"acc_norm_stderr": 0.025976566010862737
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.42959582790091266,
"acc_stderr": 0.012643004623790203,
"acc_norm": 0.42959582790091266,
"acc_norm_stderr": 0.012643004623790203
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6213235294117647,
"acc_stderr": 0.02946513363977613,
"acc_norm": 0.6213235294117647,
"acc_norm_stderr": 0.02946513363977613
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.619281045751634,
"acc_stderr": 0.019643801557924803,
"acc_norm": 0.619281045751634,
"acc_norm_stderr": 0.019643801557924803
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6448979591836734,
"acc_stderr": 0.030635655150387638,
"acc_norm": 0.6448979591836734,
"acc_norm_stderr": 0.030635655150387638
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8109452736318408,
"acc_stderr": 0.02768691358801301,
"acc_norm": 0.8109452736318408,
"acc_norm_stderr": 0.02768691358801301
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640044,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640044
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3843329253365973,
"mc1_stderr": 0.017028707301245203,
"mc2": 0.5471745669091238,
"mc2_stderr": 0.015938274457791324
},
"harness|winogrande|5": {
"acc": 0.7719021310181531,
"acc_stderr": 0.011793015817663592
},
"harness|gsm8k|5": {
"acc": 0.2850644427596664,
"acc_stderr": 0.012435042334904004
}
}
```
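Once parsed, the aggregated results above can be consumed programmatically; a minimal sketch (the values are transcribed from the `"all"` block above):

```python
# Minimal sketch: reading headline metrics out of a parsed results dict.
# The values are transcribed from the "all" block above; in practice the
# dict would come from json.load() on the results file linked above.
results = {
    "all": {
        "acc": 0.6018723758193364,
        "acc_norm": 0.6080179723856561,
        "mc2": 0.5471745669091238,
    }
}

headline = results["all"]
print(round(headline["acc"], 4), round(headline["acc_norm"], 4))  # 0.6019 0.608
```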
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
zolak/twitter_dataset_50_1713224152 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 104110
num_examples: 276
download_size: 62765
dataset_size: 104110
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NickyNicky/medqa-usmle | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: F
dtype: string
- name: G
dtype: string
- name: H
dtype: string
- name: I
dtype: string
- name: J
dtype: string
- name: K
dtype: string
- name: L
dtype: string
- name: M
dtype: string
- name: meta_info
dtype: string
splits:
- name: train
num_bytes: 13491610
num_examples: 14369
download_size: 7345287
dataset_size: 13491610
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
anasselhoud/Bundesliga-2010-2023 | ---
license: openrail
size_categories:
- 1K<n<10K
---
# Bundesliga Results from 2010 to 2023
This dataset contains the results of all matches in the German Bundesliga from 2010 to 2023. The raw data was collected from the OpenLigaDB API.
**The dataset has been prepared and adjusted by me** to make it more suitable for machine learning training purposes.
## Dataset Information
The dataset includes the following columns:
| Column Name | Description |
|-------------------------|--------------------------------------------------------------------------|
| ***matchID*** | The unique identifier for each match. |
| ***matchDateTime*** | The date and time when the match was scheduled to start. |
| ***timeZoneID*** | The timezone of the match. |
| ***leagueName*** | The name of the league where the match took place. |
| ***leagueSeason*** | The season of the league where the match took place. |
| ***leagueShortcut*** | The abbreviated name of the league where the match took place. |
| ***matchDateTimeUTC*** | The date and time when the match was scheduled to start in UTC timezone. |
| ***lastUpdateDateTime*** | The date and time when the match data was last updated. |
| ***matchIsFinished*** | A boolean value indicating whether the match is finished or not. |
| ***numberOfViewers*** | The number of viewers who watched the match. |
| ***locationCity*** | The city where the match took place. |
| ***locationStadium*** | The name of the stadium where the match took place. |
| ***team1_Name*** | The name of the first team in the match. |
| ***team1_shortName*** | The abbreviated name of the first team in the match. |
| ***team1_teamIconUrl*** | The URL of the icon for the first team in the match. |
| ***team1_GroupName*** | The group name of the first team in the match. |
| ***team2_Name*** | The name of the second team in the match. |
| ***team2_shortName*** | The abbreviated name of the second team in the match. |
| ***team2_teamIconUrl*** | The URL of the icon for the second team in the match. |
| ***team2_GroupName*** | The group name of the second team in the match. |
| ***finalresult_pointsTeam1*** | The final score of the first team in the match. |
| ***finalresult_pointsTeam2*** | The final score of the second team in the match. |
| ***halftime_pointsTeam1*** | The score of the first team in the match at halftime. |
| ***halftime_pointsTeam2*** | The score of the second team in the match at halftime. |
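As an illustrative sketch of how the two final-score columns above might be turned into a supervised target (the sample rows below are invented, not taken from the dataset):

```python
# Illustrative sketch: deriving a full-time outcome label ('H' home win,
# 'D' draw, 'A' away win) from the finalresult columns described above.
# The sample rows are invented for demonstration.
sample_matches = [
    {"finalresult_pointsTeam1": 3, "finalresult_pointsTeam2": 1},
    {"finalresult_pointsTeam1": 2, "finalresult_pointsTeam2": 2},
    {"finalresult_pointsTeam1": 0, "finalresult_pointsTeam2": 1},
]

def outcome(row):
    home, away = row["finalresult_pointsTeam1"], row["finalresult_pointsTeam2"]
    if home > away:
        return "H"
    if home < away:
        return "A"
    return "D"

labels = [outcome(m) for m in sample_matches]
print(labels)  # ['H', 'D', 'A']
```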
The dataset is sorted by ***matchDateTime*** in ascending order, which means that the first row in the dataset is the earliest match, and the last row is the latest match. |
DataNeed/company-reports | ---
language:
- en
pretty_name: "Company Reports Dataset"
tags:
- esg reports
- sustainability
- corporate governance
- environmental
license: "cc-by-sa-4.0"
task_categories:
- text-classification
configs:
- config_name: default
data_files:
- split: "2024_02_03"
path:
- "data/company_reports_2024_02_03.json"
- split: "2024_01_23"
path:
- "data/company_reports_2024_01_23.json"
- split: "2024_01_22"
path:
- "data/company_reports_2024_01_22.json"
- split: "2024_01_21"
path:
- "data/company_reports_2024_01_21.json"
- split: "2024_01_19"
path:
- "data/company_reports_2024_01_19.json"
---
# Company Reports Dataset
## Description
This dataset contains ESG (Environmental, Social, and Governance) sustainability reports from various companies. It includes data like company details, report categories, textual analysis of the reports, and more.
## Dataset Structure
- `id`: Unique identifier for each report entry.
- `document_category`: Classification of the document (e.g., ESG sustainability report).
- `year`: Publication year of the report.
- `company_name`: Name of the respective company.
- `company_description`: A concise description of the company.
- `company_website`: The official website URL of the company.
- `economic_activity`: Sector of economic activity.
- `file_name`: Filename of the report.
- `url`: Direct URL to access the report.
- `downloaded_at`: Date and time when the report was downloaded.
- `text_analyzed`: The analyzed text extracted from the report.
- `tot_text_cleaned`: The cleaned version of the report text.
- `tot_text_raw`: The original, unedited text of the report.
- `documents_description`: A short description of the documents.
## Data Processing Versions
Different versions of the dataset are available, each processed with specific methodologies:
### Version: 2024_01_19
- **Extraction Methodology**: Utilizing 'unstructured' package with the following parameters:
- Partition method with strategy = fast
- **Translation Methodology**:
- Individual element_id translation using the multilingual model 'facebook/m2m100_418M'
- **Feature Extraction Methodology**:
- Concatenation of cleaned and translated element_ids, with GPT-3.5-turbo model applied to the first 2500 characters for JSON extraction.
### Versions: 2024_01_21, 2024_01_22
- **Extraction Methodology**: Employing 'unstructured' package with the following parameters:
- Partition_pdf method with strategy = auto
- **Translation Methodology**:
- Aggregating into chunks, translating each chunk using Helsinki-NLP/opus-mt-{source_lang}-{target_lang} models. Language detection on each chunk is performed using langdetect.detect(text_cleaned[:100]).
- **Feature Extraction Methodology**:
- Concatenation of cleaned and translated chunks, with GPT-3.5-turbo model applied to the first 2500 characters for JSON extraction.
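The chunk-and-translate step described for these versions might be sketched as follows (a hypothetical simplification: `detect_lang` stands in for `langdetect.detect`, and no translation model is called, so the sketch stays self-contained):

```python
# Hypothetical sketch of the chunking + per-chunk language detection
# described above. The real pipeline calls langdetect.detect on the
# first 100 characters of each chunk and then a Helsinki-NLP opus-mt
# model per chunk; detect_lang below is an invented stand-in.
def make_chunks(text, chunk_size=500):
    """Aggregate text into chunks of at most chunk_size characters."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def detect_lang(sample):
    # Stand-in for langdetect.detect(sample); always reports English here.
    return "en"

text_cleaned = "Sustainability report paragraph. " * 40  # ~1320 characters
chunks = make_chunks(text_cleaned)
langs = [detect_lang(chunk[:100]) for chunk in chunks]
print(len(chunks), set(langs))  # 3 {'en'}
```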
### Version: 2024_01_23
- [Details to be updated]
## Collection Methodology
The dataset was collected from publicly available ESG reports of various companies to represent a wide range of industries.
## Intended Use
This dataset is suitable for tasks like text analysis, ESG metric analysis, corporate sustainability practice research, and more.
## Useful link
https://huggingface.co/docs/datasets/share
## Licensing
The dataset is available under the [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/) license. Please ensure to adhere to the terms of this license when using or distributing this dataset.
|
wenzhuoliu/math_concept_ambiguation | ---
dataset_info:
features:
- name: concept
dtype: string
- name: def1
sequence: string
- name: def2
sequence: string
- name: def3
sequence: string
splits:
- name: train
num_bytes: 70719
num_examples: 201
download_size: 43331
dataset_size: 70719
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "math_concept_ambiguation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Aravindas/nfems1 | ---
license: cc
---
|
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_83 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1301525384.0
num_examples: 255602
download_size: 1325762989
dataset_size: 1301525384.0
---
# Dataset Card for "chunk_83"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DavidLanz/alpaca-gpt4-tw-input-output-48k | ---
task_categories:
- text-generation
- conversational
- question-answering
language:
- en
size_categories:
- 10K<n<100K
license: cc-by-4.0
tags:
- gpt3
- alpaca
- instruction-finetuning
---
# Dataset Card for "alpaca-gpt4-tw-input-output-48k"
This dataset contains Traditional Chinese instruction-following data generated by GPT-4 using Alpaca prompts, for fine-tuning LLMs.
The dataset was originally shared in this repository: https://github.com/ntunlplab/traditional-chinese-alpaca. This is just a wrapper for compatibility with Hugging Face's `datasets` library.
## Dataset structure
It contains 52K instruction-following examples generated by GPT-4 using the same prompts as in Alpaca.
The dataset has the same format as Alpaca data, except the output is generated by GPT-4:
- `instruction`: `str`, describes the task the model should perform. Each of the 52K instructions is unique.
- `input`: `str`, optional context or input for the task.
- `output`: `str`, the answer to the instruction as generated by `GPT-4`.
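For fine-tuning, each record is typically rendered into a single prompt string; a minimal sketch using the common Alpaca template (the template itself is an assumption here; this card does not prescribe one):

```python
# Minimal sketch: rendering one record into an Alpaca-style training
# prompt. The section headers below follow the common Alpaca template,
# which is an assumption; this card does not prescribe a template.
def build_prompt(record):
    if record.get("input"):
        return (
            f"### Instruction:\n{record['instruction']}\n\n"
            f"### Input:\n{record['input']}\n\n"
            f"### Response:\n{record['output']}"
        )
    return (
        f"### Instruction:\n{record['instruction']}\n\n"
        f"### Response:\n{record['output']}"
    )

example = {"instruction": "Translate to French.", "input": "Hello", "output": "Bonjour"}
prompt = build_prompt(example)
print(prompt.count("###"))  # 3
```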
## Difference with the original Alpaca dataset
The original Alpaca dataset used text-davinci-003 to complete the prompts. This dataset uses those same prompts, but generates the completions with GPT-4. Thus, in general, the responses are of higher quality and length. Here is an example:
#### Example from Alpaca-GPT3:
```python
{
'instruction': '辨識那一個平台與其他不同。',
'input': '輸入:Twitter,Instagram,Telegram。',
'output': 'Twitter,Instagram 和 Telegram 都是流行的社群媒體平台,但它們之間也有一些不同之處:\n\n- Twitter: 是一個Twitter社群媒體平台,使用者可以發佈 280 字元以內的訊息(即推文)並與其他使用者互動。它主要關注文字交流,儘管使用者也可以分享照片和視訊。\n- Instagram: 是一個圖片和視訊分享的社群媒體平台,使用者可以在 Instagram 上發佈照片和視訊並與其他使用者互動。它主要關注圖片和視訊的分享,儘管使用者也可以新增文字說明。\n- Telegram: 是一個即時通訊和社群媒體平台,使用者可以通過文字、語音、圖片和視訊與其他使用者即時交流。與 Twitter 和 Instagram 不同,Telegram 更側重於即時通訊,它還提供了端到端加密的私人聊天功能。'
}
```
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode). |
krushilpatel/covid-tweet-text-classification | ---
license: unknown
---
|
gouthamsk/esp_idf_code | ---
dataset_info:
features:
- name: text
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 19664495
num_examples: 13663
download_size: 5132190
dataset_size: 19664495
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
David19930/audio_dataset_wsp | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcribe
dtype: string
splits:
- name: train
num_bytes: 1638963.0
num_examples: 87
download_size: 1634215
dataset_size: 1638963.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
distilled-one-sec-cv12-each-chunk-uniq/chunk_168 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1336809020.0
num_examples: 260485
download_size: 1370333962
dataset_size: 1336809020.0
---
# Dataset Card for "chunk_168"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_018 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 64622288.45629991
num_examples: 120179
download_size: 9924326
dataset_size: 64622288.45629991
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_018"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
imgmongelli/testdataframe | ---
license: mit
---
|
ArwaAbdul/Fingerprint_split_90_10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
splits:
- name: train
num_bytes: 504155396.6682027
num_examples: 3000
- name: test
num_bytes: 77898517.33179724
num_examples: 472
download_size: 337755809
dataset_size: 582053914.0
---
# Dataset Card for "Fingerprint_split_90_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
indicbench/truthfulqa_kn | ---
dataset_info:
- config_name: default
features:
- name: _data_files
list:
- name: filename
dtype: string
- name: _fingerprint
dtype: string
- name: _format_columns
dtype: 'null'
- name: _format_kwargs
dtype: string
- name: _format_type
dtype: 'null'
- name: _output_all_columns
dtype: bool
- name: _split
dtype: 'null'
splits:
- name: train
num_bytes: 119
num_examples: 2
download_size: 3715
dataset_size: 119
- config_name: generation
features:
- name: type
dtype: string
- name: category
dtype: string
- name: question
dtype: string
- name: best_answer
dtype: string
- name: correct_answers
sequence: string
- name: incorrect_answers
sequence: string
- name: source
dtype: string
splits:
- name: validation
num_bytes: 1188333
num_examples: 817
download_size: 359485
dataset_size: 1188333
- config_name: multiple_choice
features:
- name: question
dtype: string
- name: mc1_targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int64
- name: mc2_targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int64
splits:
- name: validation
num_bytes: 1663864
num_examples: 817
download_size: 466837
dataset_size: 1663864
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: generation
data_files:
- split: validation
path: generation/validation-*
- config_name: multiple_choice
data_files:
- split: validation
path: multiple_choice/validation-*
---
|
olm/olm-CC-MAIN-2022-33-sampling-ratio-0.20 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: OLM August 2022 Common Crawl
size_categories:
- 10M<n<100M
source_datasets: []
tags:
- pretraining
- language modelling
- common crawl
- web
task_categories: []
task_ids: []
---
# Dataset Card for OLM August 2022 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 20% of the August 2022 Common Crawl snapshot.
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`. |
ismailiismail/FrEn_handpicks | ---
dataset_info:
features:
- name: French
dtype: string
- name: English
dtype: string
splits:
- name: train
num_bytes: 34126
num_examples: 394
download_size: 16438
dataset_size: 34126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "FrEn_handpicks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pythainlp/thai-culturax-clean-dataset | ---
license: odc-by
annotations_creators:
- no-annotation
language_creators:
- found
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
language:
- th
size_categories:
- 10M<n<100M
---
# Thai CulturaX Clean dataset
- The data is sourced from the Thai subset of [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset, which itself is sourced from mC4 and four OSCAR corpora.
- It has about 8,748,575,684 words (without whitespace) and 16,768,585 lines (97 GB).
- Content promoting gambling, adult content, and narcotics was filtered out.
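The card does not describe how that filtering was implemented; a purely hypothetical keyword-based sketch (the blocklist and logic below are invented for illustration):

```python
# Purely hypothetical sketch of a keyword-based document filter of the
# kind described above. The blocked terms are invented examples; the
# actual filtering method used for this dataset is not published.
BLOCKED_TERMS = {"casino", "gambling"}  # invented illustration, not the real list

def keep_document(text, blocked=BLOCKED_TERMS):
    """Keep a document only if it contains none of the blocked terms."""
    words = set(text.lower().split())
    return words.isdisjoint(blocked)

docs = ["weather forecast for bangkok", "best online casino gambling tips"]
kept = [d for d in docs if keep_document(d)]
print(kept)  # ['weather forecast for bangkok']
```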
## Considerations for Using the Data
This dataset is a cleaned version of the CulturaX dataset, which was extracted from CommonCrawl; it might still contain personal and sensitive information. This must be considered prior to using this dataset for any purpose, such as training deep learning models, etc.
kewu93/three_styles_prompted_250_512x512_50perclass_random | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: style_class
dtype: string
splits:
- name: train
num_bytes: 4334193.0
num_examples: 150
- name: val
num_bytes: 4317601.0
num_examples: 150
download_size: 8183790
dataset_size: 8651794.0
---
# Dataset Card for "three_styles_prompted_250_512x512_50perclass_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Taldrin/FurventureGamesV1 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5692962
num_examples: 989
download_size: 905469
dataset_size: 5692962
---
|
bigscience-data/roots_indic-mr_wikisource | ---
language: mr
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_indic-mr_wikisource
# wikisource_filtered
- Dataset uid: `wikisource_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 2.6306 % of total
- 12.7884 % of fr
- 19.8886 % of indic-bn
- 20.9966 % of indic-ta
- 2.3478 % of ar
- 4.7068 % of indic-hi
- 18.0998 % of indic-te
- 1.7155 % of es
- 19.4800 % of indic-kn
- 9.1737 % of indic-ml
- 17.1771 % of indic-mr
- 17.1870 % of indic-gu
- 70.3687 % of indic-as
- 1.0165 % of pt
- 7.8642 % of indic-pa
- 1.3501 % of vi
- 4.9411 % of indic-or
- 0.5307 % of ca
- 2.3593 % of id
- 1.5928 % of eu
### BigScience processing steps
#### Filters applied to: fr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-bn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-ta
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: ar
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-te
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: es
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: indic-kn
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- remove_wiki_mojibake
- filter_small_docs_bytes_300
#### Filters applied to: indic-ml
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-mr
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-gu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-as
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-pa
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: vi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: indic-or
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_1024
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
- filter_small_docs_bytes_300
#### Filters applied to: eu
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- dedup_template_soft
- filter_remove_empty_docs
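The per-language filter lists above describe sequential cleaning pipelines. A minimal sketch of how such a pipeline composes (function names mirror the filter labels above; the implementations are illustrative, not the actual BigScience code):

```python
# Illustrative re-implementations of three of the filter steps listed above.
def dedup_document(docs):
    """Keep only the first occurrence of each exact-duplicate document."""
    seen, out = set(), []
    for d in docs:
        if d not in seen:
            seen.add(d)
            out.append(d)
    return out

def filter_remove_empty_docs(docs):
    """Drop documents that are empty or whitespace-only."""
    return [d for d in docs if d.strip()]

def filter_small_docs_bytes(min_bytes):
    """Drop documents smaller than min_bytes when UTF-8 encoded."""
    return lambda docs: [d for d in docs if len(d.encode("utf-8")) >= min_bytes]

# Pipeline as applied to e.g. indic-mr: dedup, drop empties, then drop
# documents under 300 bytes.
pipeline = [dedup_document, filter_remove_empty_docs, filter_small_docs_bytes(300)]

def run_pipeline(docs, steps):
    for step in steps:
        docs = step(docs)
    return docs
```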
|
lucadiliello/triviaqa | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: key
dtype: string
- name: labels
list:
- name: end
sequence: int64
- name: start
sequence: int64
splits:
- name: train
num_bytes: 270309068
num_examples: 61688
- name: validation
num_bytes: 34067846
num_examples: 7785
download_size: 178840166
dataset_size: 304376914
---
# Dataset Card for "triviaqa"
Splits taken from the MRQA 2019 Shared Task, formatted and filtered for Question Answering. For the original dataset, see [here](https://huggingface.co/datasets/mrqa). |
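The `labels` field in the schema above pairs `start` and `end` offset sequences. Assuming these are character offsets into `context` with exclusive ends (an assumption; verify against real examples), answer spans can be recovered like this:

```python
def extract_spans(context: str, labels: list) -> list:
    """Recover answer strings from start/end character offsets.

    Offsets are assumed to be character-based with exclusive ends;
    check this against actual dataset rows before relying on it.
    """
    spans = []
    for label in labels:
        for start, end in zip(label["start"], label["end"]):
            spans.append(context[start:end])
    return spans

context = "Paris is the capital of France."
labels = [{"start": [0], "end": [5]}]  # hypothetical example row
```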
arjundd/meddlr-data | ---
license: apache-2.0
---
|
distilled-one-sec-cv12-each-chunk-uniq/chunk_87 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1356777632.0
num_examples: 264376
download_size: 1389412373
dataset_size: 1356777632.0
---
# Dataset Card for "chunk_87"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Phonecharger/1770 | ---
license: openrail
---
|
bituruna11/minhavoz | ---
license: openrail
---
|
snappr/passed_images | ---
dataset_info:
features:
- name: image
dtype: image
- name: review_status
dtype: string
- name: restaurant_item_name
dtype: string
- name: caption_simple
dtype: string
- name: caption_blip
dtype: string
- name: output_media_id
dtype: string
splits:
- name: train
num_bytes: 8676866357.0
num_examples: 9759
- name: test
num_bytes: 2128008298.0
num_examples: 2442
download_size: 10721838828
dataset_size: 10804874655.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
nlp-vtcc/Tiger-MathInstruct | ---
configs:
- config_name: default
data_files:
- split: vi
path: data/vi-*
- split: en
path: data/en-*
dataset_info:
features:
- name: source
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
splits:
- name: vi
num_bytes: 227116640
num_examples: 262040
- name: en
num_bytes: 188743056
num_examples: 262040
download_size: 207887300
dataset_size: 415859696
---
# Dataset Card for "Tiger-MathInstruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
numb3r33/know_gremlin | ---
license: apache-2.0
---
|
open-llm-leaderboard/details_wang7776__vicuna-7b-v1.3-attention-sparsity-10 | ---
pretty_name: Evaluation run of wang7776/vicuna-7b-v1.3-attention-sparsity-10
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [wang7776/vicuna-7b-v1.3-attention-sparsity-10](https://huggingface.co/wang7776/vicuna-7b-v1.3-attention-sparsity-10)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one\
\ of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_wang7776__vicuna-7b-v1.3-attention-sparsity-10\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-26T22:28:11.732265](https://huggingface.co/datasets/open-llm-leaderboard/details_wang7776__vicuna-7b-v1.3-attention-sparsity-10/blob/main/results_2024-01-26T22-28-11.732265.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.47889056395254415,\n\
\ \"acc_stderr\": 0.03436079323218269,\n \"acc_norm\": 0.48500356482741425,\n\
\ \"acc_norm_stderr\": 0.03513586695106674,\n \"mc1\": 0.31456548347613217,\n\
\ \"mc1_stderr\": 0.01625524199317919,\n \"mc2\": 0.4686560841151894,\n\
\ \"mc2_stderr\": 0.015106430830741629\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.48293515358361777,\n \"acc_stderr\": 0.014602878388536595,\n\
\ \"acc_norm\": 0.5221843003412969,\n \"acc_norm_stderr\": 0.014597001927076133\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5795658235411273,\n\
\ \"acc_stderr\": 0.004926198483948702,\n \"acc_norm\": 0.7704640509858594,\n\
\ \"acc_norm_stderr\": 0.004196749648385372\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421296,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421296\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.45925925925925926,\n\
\ \"acc_stderr\": 0.04304979692464242,\n \"acc_norm\": 0.45925925925925926,\n\
\ \"acc_norm_stderr\": 0.04304979692464242\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.45394736842105265,\n \"acc_stderr\": 0.04051646342874142,\n\
\ \"acc_norm\": 0.45394736842105265,\n \"acc_norm_stderr\": 0.04051646342874142\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.49,\n\
\ \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.5471698113207547,\n \"acc_stderr\": 0.03063562795796182,\n\
\ \"acc_norm\": 0.5471698113207547,\n \"acc_norm_stderr\": 0.03063562795796182\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4791666666666667,\n\
\ \"acc_stderr\": 0.041775789507399935,\n \"acc_norm\": 0.4791666666666667,\n\
\ \"acc_norm_stderr\": 0.041775789507399935\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110175,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110175\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\"\
: 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4277456647398844,\n\
\ \"acc_stderr\": 0.03772446857518026,\n \"acc_norm\": 0.4277456647398844,\n\
\ \"acc_norm_stderr\": 0.03772446857518026\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.04389869956808778,\n\
\ \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.04389869956808778\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.59,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n\
\ \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.3659574468085106,\n \"acc_stderr\": 0.031489558297455304,\n\
\ \"acc_norm\": 0.3659574468085106,\n \"acc_norm_stderr\": 0.031489558297455304\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.21052631578947367,\n\
\ \"acc_stderr\": 0.038351539543994194,\n \"acc_norm\": 0.21052631578947367,\n\
\ \"acc_norm_stderr\": 0.038351539543994194\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.4413793103448276,\n \"acc_stderr\": 0.04137931034482758,\n\
\ \"acc_norm\": 0.4413793103448276,\n \"acc_norm_stderr\": 0.04137931034482758\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.31746031746031744,\n \"acc_stderr\": 0.02397386199899208,\n \"\
acc_norm\": 0.31746031746031744,\n \"acc_norm_stderr\": 0.02397386199899208\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3253968253968254,\n\
\ \"acc_stderr\": 0.04190596438871136,\n \"acc_norm\": 0.3253968253968254,\n\
\ \"acc_norm_stderr\": 0.04190596438871136\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720683,\n \
\ \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720683\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5193548387096775,\n\
\ \"acc_stderr\": 0.02842268740431211,\n \"acc_norm\": 0.5193548387096775,\n\
\ \"acc_norm_stderr\": 0.02842268740431211\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.35960591133004927,\n \"acc_stderr\": 0.033764582465095665,\n\
\ \"acc_norm\": 0.35960591133004927,\n \"acc_norm_stderr\": 0.033764582465095665\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\"\
: 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.5696969696969697,\n \"acc_stderr\": 0.03866225962879077,\n\
\ \"acc_norm\": 0.5696969696969697,\n \"acc_norm_stderr\": 0.03866225962879077\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.6212121212121212,\n \"acc_stderr\": 0.03456088731993747,\n \"\
acc_norm\": 0.6212121212121212,\n \"acc_norm_stderr\": 0.03456088731993747\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.6839378238341969,\n \"acc_stderr\": 0.033553973696861736,\n\
\ \"acc_norm\": 0.6839378238341969,\n \"acc_norm_stderr\": 0.033553973696861736\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.47692307692307695,\n \"acc_stderr\": 0.025323990861736118,\n\
\ \"acc_norm\": 0.47692307692307695,\n \"acc_norm_stderr\": 0.025323990861736118\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2518518518518518,\n \"acc_stderr\": 0.026466117538959912,\n \
\ \"acc_norm\": 0.2518518518518518,\n \"acc_norm_stderr\": 0.026466117538959912\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.44537815126050423,\n \"acc_stderr\": 0.032284106267163895,\n\
\ \"acc_norm\": 0.44537815126050423,\n \"acc_norm_stderr\": 0.032284106267163895\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.03802039760107903,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.03802039760107903\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.636697247706422,\n \"acc_stderr\": 0.020620603919625804,\n \"\
acc_norm\": 0.636697247706422,\n \"acc_norm_stderr\": 0.020620603919625804\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4583333333333333,\n \"acc_stderr\": 0.03398110890294636,\n \"\
acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.03398110890294636\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6225490196078431,\n \"acc_stderr\": 0.03402272044340703,\n \"\
acc_norm\": 0.6225490196078431,\n \"acc_norm_stderr\": 0.03402272044340703\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.6286919831223629,\n \"acc_stderr\": 0.03145068600744859,\n \
\ \"acc_norm\": 0.6286919831223629,\n \"acc_norm_stderr\": 0.03145068600744859\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.57847533632287,\n\
\ \"acc_stderr\": 0.03314190222110658,\n \"acc_norm\": 0.57847533632287,\n\
\ \"acc_norm_stderr\": 0.03314190222110658\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.5801526717557252,\n \"acc_stderr\": 0.04328577215262972,\n\
\ \"acc_norm\": 0.5801526717557252,\n \"acc_norm_stderr\": 0.04328577215262972\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6528925619834711,\n \"acc_stderr\": 0.04345724570292534,\n \"\
acc_norm\": 0.6528925619834711,\n \"acc_norm_stderr\": 0.04345724570292534\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6574074074074074,\n\
\ \"acc_stderr\": 0.045879047413018105,\n \"acc_norm\": 0.6574074074074074,\n\
\ \"acc_norm_stderr\": 0.045879047413018105\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.5644171779141104,\n \"acc_stderr\": 0.03895632464138937,\n\
\ \"acc_norm\": 0.5644171779141104,\n \"acc_norm_stderr\": 0.03895632464138937\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.30357142857142855,\n\
\ \"acc_stderr\": 0.04364226155841044,\n \"acc_norm\": 0.30357142857142855,\n\
\ \"acc_norm_stderr\": 0.04364226155841044\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6213592233009708,\n \"acc_stderr\": 0.048026946982589726,\n\
\ \"acc_norm\": 0.6213592233009708,\n \"acc_norm_stderr\": 0.048026946982589726\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.688034188034188,\n\
\ \"acc_stderr\": 0.030351527323344944,\n \"acc_norm\": 0.688034188034188,\n\
\ \"acc_norm_stderr\": 0.030351527323344944\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \
\ \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6538952745849298,\n\
\ \"acc_stderr\": 0.01701196526641207,\n \"acc_norm\": 0.6538952745849298,\n\
\ \"acc_norm_stderr\": 0.01701196526641207\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5115606936416185,\n \"acc_stderr\": 0.026911898686377913,\n\
\ \"acc_norm\": 0.5115606936416185,\n \"acc_norm_stderr\": 0.026911898686377913\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n\
\ \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n\
\ \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5620915032679739,\n \"acc_stderr\": 0.028408302020332687,\n\
\ \"acc_norm\": 0.5620915032679739,\n \"acc_norm_stderr\": 0.028408302020332687\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5530546623794212,\n\
\ \"acc_stderr\": 0.028237769422085335,\n \"acc_norm\": 0.5530546623794212,\n\
\ \"acc_norm_stderr\": 0.028237769422085335\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.558641975308642,\n \"acc_stderr\": 0.027628737155668767,\n\
\ \"acc_norm\": 0.558641975308642,\n \"acc_norm_stderr\": 0.027628737155668767\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.3404255319148936,\n \"acc_stderr\": 0.028267657482650147,\n \
\ \"acc_norm\": 0.3404255319148936,\n \"acc_norm_stderr\": 0.028267657482650147\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.36310299869621904,\n\
\ \"acc_stderr\": 0.012282264406018761,\n \"acc_norm\": 0.36310299869621904,\n\
\ \"acc_norm_stderr\": 0.012282264406018761\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.4522058823529412,\n \"acc_stderr\": 0.03023375855159644,\n\
\ \"acc_norm\": 0.4522058823529412,\n \"acc_norm_stderr\": 0.03023375855159644\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.43300653594771243,\n \"acc_stderr\": 0.020045442473324227,\n \
\ \"acc_norm\": 0.43300653594771243,\n \"acc_norm_stderr\": 0.020045442473324227\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.4909090909090909,\n\
\ \"acc_stderr\": 0.04788339768702861,\n \"acc_norm\": 0.4909090909090909,\n\
\ \"acc_norm_stderr\": 0.04788339768702861\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.5469387755102041,\n \"acc_stderr\": 0.03186785930004129,\n\
\ \"acc_norm\": 0.5469387755102041,\n \"acc_norm_stderr\": 0.03186785930004129\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6567164179104478,\n\
\ \"acc_stderr\": 0.03357379665433431,\n \"acc_norm\": 0.6567164179104478,\n\
\ \"acc_norm_stderr\": 0.03357379665433431\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3614457831325301,\n\
\ \"acc_stderr\": 0.03740059382029321,\n \"acc_norm\": 0.3614457831325301,\n\
\ \"acc_norm_stderr\": 0.03740059382029321\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.6549707602339181,\n \"acc_stderr\": 0.03645981377388806,\n\
\ \"acc_norm\": 0.6549707602339181,\n \"acc_norm_stderr\": 0.03645981377388806\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.31456548347613217,\n\
\ \"mc1_stderr\": 0.01625524199317919,\n \"mc2\": 0.4686560841151894,\n\
\ \"mc2_stderr\": 0.015106430830741629\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6953433307024467,\n \"acc_stderr\": 0.012935646499325305\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.13191811978771797,\n \
\ \"acc_stderr\": 0.009321265253857515\n }\n}\n```"
repo_url: https://huggingface.co/wang7776/vicuna-7b-v1.3-attention-sparsity-10
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|arc:challenge|25_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|gsm8k|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hellaswag|10_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T22-28-11.732265.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-26T22-28-11.732265.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- '**/details_harness|winogrande|5_2024-01-26T22-28-11.732265.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-26T22-28-11.732265.parquet'
- config_name: results
data_files:
- split: 2024_01_26T22_28_11.732265
path:
- results_2024-01-26T22-28-11.732265.parquet
- split: latest
path:
- results_2024-01-26T22-28-11.732265.parquet
---
# Dataset Card for Evaluation run of wang7776/vicuna-7b-v1.3-attention-sparsity-10
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [wang7776/vicuna-7b-v1.3-attention-sparsity-10](https://huggingface.co/wang7776/vicuna-7b-v1.3-attention-sparsity-10) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_wang7776__vicuna-7b-v1.3-attention-sparsity-10",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-26T22:28:11.732265](https://huggingface.co/datasets/open-llm-leaderboard/details_wang7776__vicuna-7b-v1.3-attention-sparsity-10/blob/main/results_2024-01-26T22-28-11.732265.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each of them in the results file and in the "latest" split of the corresponding configuration):
```python
{
"all": {
"acc": 0.47889056395254415,
"acc_stderr": 0.03436079323218269,
"acc_norm": 0.48500356482741425,
"acc_norm_stderr": 0.03513586695106674,
"mc1": 0.31456548347613217,
"mc1_stderr": 0.01625524199317919,
"mc2": 0.4686560841151894,
"mc2_stderr": 0.015106430830741629
},
"harness|arc:challenge|25": {
"acc": 0.48293515358361777,
"acc_stderr": 0.014602878388536595,
"acc_norm": 0.5221843003412969,
"acc_norm_stderr": 0.014597001927076133
},
"harness|hellaswag|10": {
"acc": 0.5795658235411273,
"acc_stderr": 0.004926198483948702,
"acc_norm": 0.7704640509858594,
"acc_norm_stderr": 0.004196749648385372
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421296,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421296
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.45925925925925926,
"acc_stderr": 0.04304979692464242,
"acc_norm": 0.45925925925925926,
"acc_norm_stderr": 0.04304979692464242
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.45394736842105265,
"acc_stderr": 0.04051646342874142,
"acc_norm": 0.45394736842105265,
"acc_norm_stderr": 0.04051646342874142
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5471698113207547,
"acc_stderr": 0.03063562795796182,
"acc_norm": 0.5471698113207547,
"acc_norm_stderr": 0.03063562795796182
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4791666666666667,
"acc_stderr": 0.041775789507399935,
"acc_norm": 0.4791666666666667,
"acc_norm_stderr": 0.041775789507399935
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110175,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110175
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4277456647398844,
"acc_stderr": 0.03772446857518026,
"acc_norm": 0.4277456647398844,
"acc_norm_stderr": 0.03772446857518026
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.04389869956808778,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.04389869956808778
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3659574468085106,
"acc_stderr": 0.031489558297455304,
"acc_norm": 0.3659574468085106,
"acc_norm_stderr": 0.031489558297455304
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.038351539543994194,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.038351539543994194
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4413793103448276,
"acc_stderr": 0.04137931034482758,
"acc_norm": 0.4413793103448276,
"acc_norm_stderr": 0.04137931034482758
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.31746031746031744,
"acc_stderr": 0.02397386199899208,
"acc_norm": 0.31746031746031744,
"acc_norm_stderr": 0.02397386199899208
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3253968253968254,
"acc_stderr": 0.04190596438871136,
"acc_norm": 0.3253968253968254,
"acc_norm_stderr": 0.04190596438871136
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720683,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720683
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5193548387096775,
"acc_stderr": 0.02842268740431211,
"acc_norm": 0.5193548387096775,
"acc_norm_stderr": 0.02842268740431211
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.35960591133004927,
"acc_stderr": 0.033764582465095665,
"acc_norm": 0.35960591133004927,
"acc_norm_stderr": 0.033764582465095665
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5696969696969697,
"acc_stderr": 0.03866225962879077,
"acc_norm": 0.5696969696969697,
"acc_norm_stderr": 0.03866225962879077
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6212121212121212,
"acc_stderr": 0.03456088731993747,
"acc_norm": 0.6212121212121212,
"acc_norm_stderr": 0.03456088731993747
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.6839378238341969,
"acc_stderr": 0.033553973696861736,
"acc_norm": 0.6839378238341969,
"acc_norm_stderr": 0.033553973696861736
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.47692307692307695,
"acc_stderr": 0.025323990861736118,
"acc_norm": 0.47692307692307695,
"acc_norm_stderr": 0.025323990861736118
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2518518518518518,
"acc_stderr": 0.026466117538959912,
"acc_norm": 0.2518518518518518,
"acc_norm_stderr": 0.026466117538959912
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.44537815126050423,
"acc_stderr": 0.032284106267163895,
"acc_norm": 0.44537815126050423,
"acc_norm_stderr": 0.032284106267163895
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.03802039760107903,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.03802039760107903
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.636697247706422,
"acc_stderr": 0.020620603919625804,
"acc_norm": 0.636697247706422,
"acc_norm_stderr": 0.020620603919625804
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6225490196078431,
"acc_stderr": 0.03402272044340703,
"acc_norm": 0.6225490196078431,
"acc_norm_stderr": 0.03402272044340703
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6286919831223629,
"acc_stderr": 0.03145068600744859,
"acc_norm": 0.6286919831223629,
"acc_norm_stderr": 0.03145068600744859
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.57847533632287,
"acc_stderr": 0.03314190222110658,
"acc_norm": 0.57847533632287,
"acc_norm_stderr": 0.03314190222110658
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5801526717557252,
"acc_stderr": 0.04328577215262972,
"acc_norm": 0.5801526717557252,
"acc_norm_stderr": 0.04328577215262972
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6528925619834711,
"acc_stderr": 0.04345724570292534,
"acc_norm": 0.6528925619834711,
"acc_norm_stderr": 0.04345724570292534
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6574074074074074,
"acc_stderr": 0.045879047413018105,
"acc_norm": 0.6574074074074074,
"acc_norm_stderr": 0.045879047413018105
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.5644171779141104,
"acc_stderr": 0.03895632464138937,
"acc_norm": 0.5644171779141104,
"acc_norm_stderr": 0.03895632464138937
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.30357142857142855,
"acc_stderr": 0.04364226155841044,
"acc_norm": 0.30357142857142855,
"acc_norm_stderr": 0.04364226155841044
},
"harness|hendrycksTest-management|5": {
"acc": 0.6213592233009708,
"acc_stderr": 0.048026946982589726,
"acc_norm": 0.6213592233009708,
"acc_norm_stderr": 0.048026946982589726
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.688034188034188,
"acc_stderr": 0.030351527323344944,
"acc_norm": 0.688034188034188,
"acc_norm_stderr": 0.030351527323344944
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6538952745849298,
"acc_stderr": 0.01701196526641207,
"acc_norm": 0.6538952745849298,
"acc_norm_stderr": 0.01701196526641207
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5115606936416185,
"acc_stderr": 0.026911898686377913,
"acc_norm": 0.5115606936416185,
"acc_norm_stderr": 0.026911898686377913
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5620915032679739,
"acc_stderr": 0.028408302020332687,
"acc_norm": 0.5620915032679739,
"acc_norm_stderr": 0.028408302020332687
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5530546623794212,
"acc_stderr": 0.028237769422085335,
"acc_norm": 0.5530546623794212,
"acc_norm_stderr": 0.028237769422085335
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.558641975308642,
"acc_stderr": 0.027628737155668767,
"acc_norm": 0.558641975308642,
"acc_norm_stderr": 0.027628737155668767
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.3404255319148936,
"acc_stderr": 0.028267657482650147,
"acc_norm": 0.3404255319148936,
"acc_norm_stderr": 0.028267657482650147
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.36310299869621904,
"acc_stderr": 0.012282264406018761,
"acc_norm": 0.36310299869621904,
"acc_norm_stderr": 0.012282264406018761
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4522058823529412,
"acc_stderr": 0.03023375855159644,
"acc_norm": 0.4522058823529412,
"acc_norm_stderr": 0.03023375855159644
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.43300653594771243,
"acc_stderr": 0.020045442473324227,
"acc_norm": 0.43300653594771243,
"acc_norm_stderr": 0.020045442473324227
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.4909090909090909,
"acc_stderr": 0.04788339768702861,
"acc_norm": 0.4909090909090909,
"acc_norm_stderr": 0.04788339768702861
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5469387755102041,
"acc_stderr": 0.03186785930004129,
"acc_norm": 0.5469387755102041,
"acc_norm_stderr": 0.03186785930004129
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6567164179104478,
"acc_stderr": 0.03357379665433431,
"acc_norm": 0.6567164179104478,
"acc_norm_stderr": 0.03357379665433431
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-virology|5": {
"acc": 0.3614457831325301,
"acc_stderr": 0.03740059382029321,
"acc_norm": 0.3614457831325301,
"acc_norm_stderr": 0.03740059382029321
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6549707602339181,
"acc_stderr": 0.03645981377388806,
"acc_norm": 0.6549707602339181,
"acc_norm_stderr": 0.03645981377388806
},
"harness|truthfulqa:mc|0": {
"mc1": 0.31456548347613217,
"mc1_stderr": 0.01625524199317919,
"mc2": 0.4686560841151894,
"mc2_stderr": 0.015106430830741629
},
"harness|winogrande|5": {
"acc": 0.6953433307024467,
"acc_stderr": 0.012935646499325305
},
"harness|gsm8k|5": {
"acc": 0.13191811978771797,
"acc_stderr": 0.009321265253857515
}
}
```
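The per-task scores above follow the lm-evaluation-harness output layout (one entry per `harness|task|n_shot` key). A minimal sketch of aggregating them — the small `results` dict here just mirrors a few entries from the JSON above; in practice you would `json.load` the full file:

```python
from statistics import mean

# Sketch: aggregate lm-evaluation-harness style results like the JSON above.
# The dict below copies a few sample entries; load the real file with json.load.
results = {
    "harness|hendrycksTest-moral_disputes|5": {"acc_norm": 0.5115606936416185},
    "harness|hendrycksTest-nutrition|5": {"acc_norm": 0.5620915032679739},
    "harness|hendrycksTest-philosophy|5": {"acc_norm": 0.5530546623794212},
}

# Keep only MMLU (hendrycksTest) tasks and average acc_norm across them.
mmlu = {k: v for k, v in results.items() if "hendrycksTest" in k}
avg_acc_norm = mean(v["acc_norm"] for v in mmlu.values())
print(round(avg_acc_norm, 4))
```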
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
adhisetiawan/pothole-segmentation | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 6389783.0
num_examples: 80
- name: validation
num_bytes: 337776.0
num_examples: 5
download_size: 6675556
dataset_size: 6727559.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
TheFinAI/en-finterm | ---
license: mit
dataset_info:
features:
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 763945
num_examples: 158
download_size: 438945
dataset_size: 763945
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
shivank21/dolly | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: inputs
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 14079200
num_examples: 15011
download_size: 7841759
dataset_size: 14079200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aqubed/kub_tickets_small | ---
dataset_info:
features:
- name: number
dtype: int64
- name: title
dtype: string
- name: state
dtype: string
- name: created_at
dtype: string
- name: updated_at
dtype: string
- name: closed_at
dtype: string
- name: assignees
sequence: string
- name: labels
sequence: string
- name: reporter
dtype: string
- name: comments
list:
- name: body
dtype: string
- name: created_at
dtype: string
- name: events
list:
- name: author
dtype: string
- name: created_at
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 5967498
num_examples: 1099
download_size: 1380020
dataset_size: 5967498
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "kub_tickets_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ayan1988/diffusion.2.textual_inversion | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 1740639.0
num_examples: 6
download_size: 0
dataset_size: 1740639.0
---
# Dataset Card for "diffusion.2.textual_inversion"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_50_1713228702 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 178306
num_examples: 413
download_size: 93780
dataset_size: 178306
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nbalepur/QG_vs_QA_v2 | ---
configs:
- config_name: default
data_files:
- split: full
path: data/full-*
- split: subset
path: data/subset-*
dataset_info:
features:
- name: entity
dtype: string
- name: category
dtype: string
- name: meta_data
dtype: string
- name: context
dtype: string
splits:
- name: full
num_bytes: 5445535
num_examples: 7231
- name: subset
num_bytes: 2137999
num_examples: 3443
download_size: 4709588
dataset_size: 7583534
---
# Dataset Card for "QG_vs_QA_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sauradip/FAKE | ---
license: apache-2.0
---
|
mar-yam1497/HotPotQA_Mistral_Instruct_dataset_Revised | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 886833724
num_examples: 540890
download_size: 415174686
dataset_size: 886833724
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-latex-24000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1064271
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kenhktsui/open-toolformer-retrieval-multi-neg-result-new-kw | ---
dataset_info:
features:
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
- name: meta
struct:
- name: first_search_rank
dtype: int64
- name: second_search
dtype: bool
- name: second_search_success
dtype: bool
- name: source
dtype: string
splits:
- name: train
num_bytes: 58584517
num_examples: 30043
download_size: 23248813
dataset_size: 58584517
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "open-toolformer-retrieval-multi-neg-result-new-kw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rhaymison/medicine-medical-eval-pt | ---
dataset_info:
features:
- name: input
dtype: string
splits:
- name: train
num_bytes: 349347
num_examples: 400
download_size: 213109
dataset_size: 349347
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- pt
tags:
- medical
--- |
giocoral/test | ---
license: mit
---
|
laampt/alpaca-train | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int64
- name: attention_mask
sequence: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2958930
num_examples: 1260
download_size: 616398
dataset_size: 2958930
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "alpaca-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
heliosprime/twitter_dataset_1712919395 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 4805
num_examples: 11
download_size: 8498
dataset_size: 4805
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1712919395"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MohammedNasri/Denoised_data_jason1 | ---
dataset_info:
features:
- name: data
struct:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: 'null'
- name: sampling_rate
dtype: int64
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1158326946
num_examples: 2000
download_size: 286288407
dataset_size: 1158326946
---
# Dataset Card for "Denoised_data_jason1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SilasModder/StudioNude | ---
license: openrail
---
|
Novin-AI/INST-LStyle | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 49387145
num_examples: 35117
download_size: 22545255
dataset_size: 49387145
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
male-2/training_v2-public | ---
dataset_info:
features:
- name: conversation
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 1091
num_examples: 1
download_size: 8505
dataset_size: 1091
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "training_v2-public"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |