| datasetId | card |
|---|---|
andrecoelho/diogoia | ---
license: unknown
---
|
qgallouedec/prj_gia_dataset_metaworld_button_press_wall_v2_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation-learning dataset for the button-press-wall-v2 environment, sampled from a policy trained on button-press-wall-v2.
This dataset was created as part of the Generally Intelligent Agents (GIA) project: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_button_press_wall_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_button_press_wall_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
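The four arrays share a common timestep dimension. Assuming the `dones` flags mark episode boundaries (this splitting logic is an illustration, not part of the official GIA loader), individual trajectories can be recovered like so:

```python
import numpy as np

def split_episodes(dataset):
    """Split flat timestep arrays into per-episode chunks using the `dones` flags."""
    # indices one past each terminal step
    ends = np.flatnonzero(dataset["dones"]) + 1
    starts = np.concatenate(([0], ends[:-1]))
    return [
        {key: np.asarray(dataset[key])[start:end] for key in dataset}
        for start, end in zip(starts, ends)
    ]

# tiny synthetic example with two episodes of lengths 2 and 3
demo = {
    "observations": np.arange(5),
    "actions": np.arange(5) * 10,
    "dones": np.array([0, 1, 0, 0, 1], dtype=bool),
    "rewards": np.ones(5),
}
episodes = split_episodes(demo)
print(len(episodes))  # 2
```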
|
tnn1t1s/lines | ---
license: apache-2.0
---
|
Hemanth-thunder/tamil-open-instruct-v1 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654342281
num_examples: 493813
download_size: 547982719
dataset_size: 1654342281
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- text-classification
- text-generation
- text2text-generation
language:
- ta
pretty_name: tamil instruction
size_categories:
- 100K<n<1M
---
# Open Instruct V1 - A dataset for having LLMs follow instructions.
Open Instruct V1 is an amalgamation of several datasets that have been cleaned and collated into a single format for training. |
CyberHarem/aa_12_girlsfrontline | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of aa_12/AA-12/AA-12 (Girls' Frontline)
This is the dataset of aa_12/AA-12/AA-12 (Girls' Frontline), containing 268 images and their tags.
The core tags of this character are `blue_eyes, bangs, ahoge, long_hair, breasts, hair_ornament, star_hair_ornament, hat, black_headwear, bags_under_eyes, medium_breasts, grey_hair`, which are pruned from the per-image tags in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 268 | 373.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aa_12_girlsfrontline/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 268 | 189.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aa_12_girlsfrontline/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 666 | 427.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aa_12_girlsfrontline/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 268 | 318.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aa_12_girlsfrontline/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 666 | 638.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aa_12_girlsfrontline/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). To use it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/aa_12_girlsfrontline',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
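The IMG+TXT packages pair each image with a same-named `.txt` tag file. After extracting one of those zips, the pairs can be collected without waifuc; the flat directory layout assumed below is an inference from the package type, not documented behavior:

```python
import os

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def collect_pairs(dataset_dir):
    """Return (image_path, tag_text) pairs from an extracted IMG+TXT package."""
    pairs = []
    for name in sorted(os.listdir(dataset_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in IMAGE_EXTS:
            continue
        txt_path = os.path.join(dataset_dir, stem + ".txt")
        if not os.path.exists(txt_path):
            continue  # image without a caption file; skip it
        with open(txt_path, encoding="utf-8") as f:
            pairs.append((os.path.join(dataset_dir, name), f.read().strip()))
    return pairs
```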
## List of Clusters
Results of tag clustering; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, goggles_on_head, navel, side-tie_bikini_bottom, white_bikini, cleavage, criss-cross_halter, solo, blush, lollipop, looking_at_viewer, cowboy_shot, holding_food, stomach, ass_visible_through_thighs, bead_bracelet, blue_sky, collarbone, gun, mouth_hold, open_mouth, outdoors, simple_background, thigh_gap, white_background |
| 1 | 10 |  |  |  |  |  | 1girl, holding_lollipop, simple_background, solo, star_(symbol), black_gloves, white_background, beret, looking_at_viewer, white_jacket, upper_body, closed_mouth, hood, blush, open_jacket |
| 2 | 10 |  |  |  |  |  | 1girl, black_shorts, black_thighhighs, holding_gun, lollipop, shotgun, solo, star_(symbol), looking_at_viewer, beret, black_shirt, long_sleeves, short_shorts, white_jacket, black_gloves, hood_down, open_jacket, blush, choker, collarbone, holding_food, simple_background, white_background, cleavage, full_body, knee_pads, standing |
| 3 | 10 |  |  |  |  |  | 1girl, cone_hair_bun, double_bun, hairclip, solo, star_earrings, white_shirt, collarbone, cleavage, purple_choker, looking_at_viewer, blush, nail_polish, holding_lollipop, mouth_hold, off-shoulder_shirt, sitting, x_hair_ornament, black_bikini, black_bra, short_sleeves, simple_background, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | goggles_on_head | navel | side-tie_bikini_bottom | white_bikini | cleavage | criss-cross_halter | solo | blush | lollipop | looking_at_viewer | cowboy_shot | holding_food | stomach | ass_visible_through_thighs | bead_bracelet | blue_sky | collarbone | gun | mouth_hold | open_mouth | outdoors | simple_background | thigh_gap | white_background | holding_lollipop | star_(symbol) | black_gloves | beret | white_jacket | upper_body | closed_mouth | hood | open_jacket | black_shorts | black_thighhighs | holding_gun | shotgun | black_shirt | long_sleeves | short_shorts | hood_down | choker | full_body | knee_pads | standing | cone_hair_bun | double_bun | hairclip | star_earrings | white_shirt | purple_choker | nail_polish | off-shoulder_shirt | sitting | x_hair_ornament | black_bikini | black_bra | short_sleeves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------------|:--------|:-------------------------|:---------------|:-----------|:---------------------|:-------|:--------|:-----------|:--------------------|:--------------|:---------------|:----------|:-----------------------------|:----------------|:-----------|:-------------|:------|:-------------|:-------------|:-----------|:--------------------|:------------|:-------------------|:-------------------|:----------------|:---------------|:--------|:---------------|:-------------|:---------------|:-------|:--------------|:---------------|:-------------------|:--------------|:----------|:--------------|:---------------|:---------------|:------------|:---------|:------------|:------------|:-----------|:----------------|:-------------|:-----------|:----------------|:--------------|:----------------|:--------------|:---------------------|:----------|:------------------|:---------------|:------------|:----------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | | | | | | X | X | | X | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | | | | | X | | X | X | X | X | | X | | | | | X | | | | | X | | X | | X | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | | | | | X | | X | X | | X | | | | | | | X | | X | | | X | | X | X | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
vedalken/mtg-pauper-blip-captions-human | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 37106171.0
num_examples: 450
download_size: 37094749
dataset_size: 37106171.0
---
# Dataset Card for "mtg-pauper-blip-captions-human"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
carnival13/hpqa_ret | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 2155444816
num_examples: 836741
- name: validation
num_bytes: 162097376
num_examples: 62926
- name: test
num_bytes: 189851200
num_examples: 73700
download_size: 269845048
dataset_size: 2507393392
---
# Dataset Card for "hpqa_ret"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_217 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1124695500.0
num_examples: 220875
download_size: 1149669855
dataset_size: 1124695500.0
---
# Dataset Card for "chunk_217"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kchopra04/saxs_modified | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 601544
num_examples: 601
download_size: 283534
dataset_size: 601544
---
# Dataset Card for "saxs_modified"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/toki_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of toki/飛鳥馬トキ/时 (Blue Archive)
This is the dataset of toki/飛鳥馬トキ/时 (Blue Archive), containing 500 images and their tags.
The core tags of this character are `blonde_hair, blue_eyes, blue_halo, halo, bow, breasts, blue_bow, braid, long_hair, medium_breasts, animal_ears, fake_animal_ears, rabbit_ears, very_long_hair, hairband, blue_hairband, multicolored_hair`, which are pruned from the per-image tags in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 1.12 GiB | [Download](https://huggingface.co/datasets/CyberHarem/toki_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 500 | 913.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toki_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1419 | 1.92 GiB | [Download](https://huggingface.co/datasets/CyberHarem/toki_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). To use it, run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/toki_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
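The per-item tag metadata printed above can also be aggregated, e.g. to inspect tag frequencies across the dataset. A sketch, assuming each item exposes `item.meta['tags']` as either a tag-to-score mapping or a plain list of tag strings:

```python
from collections import Counter

def tag_frequencies(items):
    """Count how often each tag appears across waifuc-style dataset items.

    `items` is any iterable of objects exposing `item.meta['tags']`
    (a tag -> score mapping or a list of tag strings).
    """
    counts = Counter()
    for item in items:
        tags = item.meta.get("tags", [])
        # accept both mapping and list forms of the tag metadata
        counts.update(tags.keys() if hasattr(tags, "keys") else tags)
    return counts
```

For example, `tag_frequencies(source).most_common(20)` would list the twenty most frequent tags in the extracted dataset.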
## List of Clusters
Results of tag clustering; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, bare_shoulders, blue_leotard, closed_mouth, detached_collar, highleg_leotard, looking_at_viewer, official_alternate_costume, playboy_bunny, rabbit_tail, solo, strapless_leotard, white_thighhighs, thighs, cleavage, fake_tail, official_alternate_hairstyle, simple_background, white_background, sitting, aqua_bowtie, covered_navel, hand_up, headset, white_wrist_cuffs |
| 1 | 5 |  |  |  |  |  | 1girl, bare_shoulders, blue_leotard, bowtie, cleavage, closed_mouth, detached_collar, headset, highleg_leotard, looking_at_viewer, official_alternate_costume, playboy_bunny, rabbit_tail, simple_background, solo, strapless_leotard, white_background, white_thighhighs, fake_tail, thighs, wrist_cuffs, blue_footwear, blush, earpiece, streaked_hair |
| 2 | 9 |  |  |  |  |  | 1girl, bare_shoulders, blue_leotard, cowboy_shot, detached_collar, highleg_leotard, looking_at_viewer, official_alternate_costume, playboy_bunny, rabbit_tail, solo, strapless_leotard, streaked_hair, wrist_cuffs, cleavage, closed_mouth, covered_navel, fake_tail, simple_background, white_background, aqua_bowtie, earpiece, headset, white_thighhighs, thighs, groin |
| 3 | 13 |  |  |  |  |  | 1girl, bare_shoulders, blue_leotard, detached_collar, highleg_leotard, looking_at_viewer, official_alternate_costume, playboy_bunny, sitting, solo, strapless_leotard, wrist_cuffs, white_thighhighs, cleavage, closed_mouth, white_background, simple_background, thighs, blush, blue_bowtie, aqua_bowtie, covered_navel, large_breasts |
| 4 | 5 |  |  |  |  |  | 1girl, aqua_bowtie, bare_shoulders, blue_leotard, detached_collar, headset, highleg_leotard, looking_at_viewer, official_alternate_costume, playboy_bunny, rabbit_tail, solo, strapless_leotard, cleavage, earpiece, fake_tail, holding_tray, official_alternate_hairstyle, wrist_cuffs, blue_hair, cowboy_shot, drinking_glass, microphone, streaked_hair, thighs, covered_navel, parted_lips |
| 5 | 7 |  |  |  |  |  | 1girl, bare_shoulders, black_gloves, blue_ribbon, bun_cover, elbow_gloves, fingerless_gloves, hair_ribbon, highleg_leotard, looking_at_viewer, maid_headdress, short_hair, solo, black_leotard, blue_leotard, simple_background, white_background, closed_mouth, single_hair_bun, sleeveless_turtleneck_leotard, thighhighs, two-tone_leotard, ass, earpiece, small_breasts, thigh_boots, thigh_strap, thighs |
| 6 | 76 |  |  |  |  |  | 1girl, maid_apron, maid_headdress, solo, blue_bowtie, white_apron, looking_at_viewer, short_hair, bun_cover, frilled_apron, black_gloves, fingerless_gloves, chest_harness, closed_mouth, black_dress, simple_background, single_hair_bun, white_background, elbow_gloves, pouch, earpiece, hair_ribbon, blue_ribbon, double_v, long_sleeves, sleeveless |
| 7 | 30 |  |  |  |  |  | 1girl, solo, looking_at_viewer, collared_shirt, white_shirt, pleated_skirt, school_uniform, blush, long_sleeves, blue_hair, closed_mouth, black_skirt, blue_bowtie, white_background, simple_background, alternate_costume, streaked_hair, nail_polish, blue_cardigan, blue_nails |
| 8 | 6 |  |  |  |  |  | 1girl, alternate_costume, bare_shoulders, looking_at_viewer, solo, blue_bikini, cleavage, halterneck, outdoors, thighs, blue_sky, blush, choker, cloud, collarbone, day, navel, beach, food, large_breasts, parted_lips, stomach |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | blue_leotard | closed_mouth | detached_collar | highleg_leotard | looking_at_viewer | official_alternate_costume | playboy_bunny | rabbit_tail | solo | strapless_leotard | white_thighhighs | thighs | cleavage | fake_tail | official_alternate_hairstyle | simple_background | white_background | sitting | aqua_bowtie | covered_navel | hand_up | headset | white_wrist_cuffs | bowtie | wrist_cuffs | blue_footwear | blush | earpiece | streaked_hair | cowboy_shot | groin | blue_bowtie | large_breasts | holding_tray | blue_hair | drinking_glass | microphone | parted_lips | black_gloves | blue_ribbon | bun_cover | elbow_gloves | fingerless_gloves | hair_ribbon | maid_headdress | short_hair | black_leotard | single_hair_bun | sleeveless_turtleneck_leotard | thighhighs | two-tone_leotard | ass | small_breasts | thigh_boots | thigh_strap | maid_apron | white_apron | frilled_apron | chest_harness | black_dress | pouch | double_v | long_sleeves | sleeveless | collared_shirt | white_shirt | pleated_skirt | school_uniform | black_skirt | alternate_costume | nail_polish | blue_cardigan | blue_nails | blue_bikini | halterneck | outdoors | blue_sky | choker | cloud | collarbone | day | navel | beach | food | stomach |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:---------------|:---------------|:------------------|:------------------|:--------------------|:-----------------------------|:----------------|:--------------|:-------|:--------------------|:-------------------|:---------|:-----------|:------------|:-------------------------------|:--------------------|:-------------------|:----------|:--------------|:----------------|:----------|:----------|:--------------------|:---------|:--------------|:----------------|:--------|:-----------|:----------------|:--------------|:--------|:--------------|:----------------|:---------------|:------------|:-----------------|:-------------|:--------------|:---------------|:--------------|:------------|:---------------|:--------------------|:--------------|:-----------------|:-------------|:----------------|:------------------|:--------------------------------|:-------------|:-------------------|:------|:----------------|:--------------|:--------------|:-------------|:--------------|:----------------|:----------------|:--------------|:--------|:-----------|:---------------|:-------------|:-----------------|:--------------|:----------------|:-----------------|:--------------|:--------------------|:--------------|:----------------|:-------------|:--------------|:-------------|:-----------|:-----------|:---------|:--------|:-------------|:------|:--------|:--------|:-------|:----------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | | | | | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | | X | X | | X | | | X | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | X | X | X | X | X | | | X | X | X | X | X | | | | | X | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | X | | X | X | X | X | X | X | X | X | | X | X | X | X | | | | X | X | | X | | | X | | | X | X | X | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 7 |  |  |  |  |  | X | X | X | X | | X | X | | | | X | | | X | | | | X | X | | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 76 |  |  |  |  |  | X | | | X | | | X | | | | X | | | | | | | X | X | | | | | | | | | | | X | | | | X | | | | | | | X | X | X | X | X | X | X | X | | X | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 7 | 30 |  |  |  |  |  | X | | | X | | | X | | | | X | | | | | | | X | X | | | | | | | | | | X | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | X | X | | | | | X | | | | X | | | X | X | | | | | | | | | | | | | | X | | | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
davanstrien/test_imdb_embedd1 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: category
dtype: string
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 35312529
num_examples: 15011
download_size: 40169078
dataset_size: 35312529
---
# Dataset Card for "test_imdb_embedd1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kimnt93/en-new-instruction | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: instruction_type
dtype: string
splits:
- name: train
num_bytes: 66350
num_examples: 604
download_size: 43115
dataset_size: 66350
---
https://github.com/XueFuzhao/InstructionWild/ + https://github.com/yizhongw/self-instruct |
AdapterOcean/data-standardized_cluster_18_alpaca | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8810239
num_examples: 4265
download_size: 3718272
dataset_size: 8810239
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data-standardized_cluster_18_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nailiamirzakhmedova/args_me_10k | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: argument
dtype: string
- name: stance
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 12851963
num_examples: 10000
download_size: 7839623
dataset_size: 12851963
---
# Dataset Card for "args_me_sampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dltdojo/park-tinystories-533k | ---
dataset_info:
features:
- name: story
dtype: string
- name: summary
dtype: string
- name: source
dtype: string
- name: prompt
dtype: string
- name: words
sequence: string
- name: features
sequence: string
splits:
- name: train
num_bytes: 715295544.1516415
num_examples: 533547
download_size: 312241866
dataset_size: 715295544.1516415
---
# Dataset Card for "park-tinystories-533k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gwlms/dewiki-20230701-nltk-corpus | ---
license: cc-by-sa-3.0
language:
- de
--- |
open-llm-leaderboard/details_BarraHome__rezephyr-dpo | ---
pretty_name: Evaluation run of BarraHome/rezephyr-dpo
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [BarraHome/rezephyr-dpo](https://huggingface.co/BarraHome/rezephyr-dpo) on the\
\ [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BarraHome__rezephyr-dpo\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-09T13:18:07.445187](https://huggingface.co/datasets/open-llm-leaderboard/details_BarraHome__rezephyr-dpo/blob/main/results_2024-02-09T13-18-07.445187.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6025963080419425,\n\
\ \"acc_stderr\": 0.0331436677667824,\n \"acc_norm\": 0.6085579671532932,\n\
\ \"acc_norm_stderr\": 0.03382908262424393,\n \"mc1\": 0.29008567931456547,\n\
\ \"mc1_stderr\": 0.01588623687420952,\n \"mc2\": 0.44316239933938906,\n\
\ \"mc2_stderr\": 0.014631197353059351\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5358361774744027,\n \"acc_stderr\": 0.01457381366473572,\n\
\ \"acc_norm\": 0.575938566552901,\n \"acc_norm_stderr\": 0.014441889627464398\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.616211909978092,\n\
\ \"acc_stderr\": 0.004853134271547766,\n \"acc_norm\": 0.8174666401115316,\n\
\ \"acc_norm_stderr\": 0.0038549403270910264\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n\
\ \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n\
\ \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6052631578947368,\n \"acc_stderr\": 0.039777499346220734,\n\
\ \"acc_norm\": 0.6052631578947368,\n \"acc_norm_stderr\": 0.039777499346220734\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n\
\ \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \
\ \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.028637235639800893,\n\
\ \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.028637235639800893\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n\
\ \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.7222222222222222,\n\
\ \"acc_norm_stderr\": 0.037455547914624555\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \
\ \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n \
\ },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n\
\ \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \
\ \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.036928207672648664,\n\
\ \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.036928207672648664\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n\
\ \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n\
\ \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.73,\n \"acc_stderr\": 0.04461960433384739,\n \
\ \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.04461960433384739\n \
\ },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5404255319148936,\n\
\ \"acc_stderr\": 0.03257901482099835,\n \"acc_norm\": 0.5404255319148936,\n\
\ \"acc_norm_stderr\": 0.03257901482099835\n },\n \"harness|hendrycksTest-econometrics|5\"\
: {\n \"acc\": 0.42105263157894735,\n \"acc_stderr\": 0.046446020912223177,\n\
\ \"acc_norm\": 0.42105263157894735,\n \"acc_norm_stderr\": 0.046446020912223177\n\
\ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\
: 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n \"\
acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.40476190476190477,\n \"acc_stderr\": 0.025279850397404897,\n \"\
acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.025279850397404897\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n\
\ \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.40476190476190477,\n\
\ \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \
\ \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7580645161290323,\n\
\ \"acc_stderr\": 0.02436259969303108,\n \"acc_norm\": 0.7580645161290323,\n\
\ \"acc_norm_stderr\": 0.02436259969303108\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n\
\ \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\"\
: 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n\
\ \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7727272727272727,\n \"acc_stderr\": 0.02985751567338642,\n \"\
acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.02985751567338642\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7979274611398963,\n \"acc_stderr\": 0.02897908979429673,\n\
\ \"acc_norm\": 0.7979274611398963,\n \"acc_norm_stderr\": 0.02897908979429673\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5743589743589743,\n \"acc_stderr\": 0.025069094387296525,\n\
\ \"acc_norm\": 0.5743589743589743,\n \"acc_norm_stderr\": 0.025069094387296525\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131147,\n \
\ \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131147\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.031566630992154156,\n\
\ \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.031566630992154156\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33774834437086093,\n \"acc_stderr\": 0.0386155754625517,\n \"\
acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.0386155754625517\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7889908256880734,\n \"acc_stderr\": 0.01749392240411265,\n \"\
acc_norm\": 0.7889908256880734,\n \"acc_norm_stderr\": 0.01749392240411265\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.47685185185185186,\n \"acc_stderr\": 0.03406315360711507,\n \"\
acc_norm\": 0.47685185185185186,\n \"acc_norm_stderr\": 0.03406315360711507\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7696078431372549,\n \"acc_stderr\": 0.029554292605695066,\n \"\
acc_norm\": 0.7696078431372549,\n \"acc_norm_stderr\": 0.029554292605695066\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7383966244725738,\n \"acc_stderr\": 0.028609516716994934,\n \
\ \"acc_norm\": 0.7383966244725738,\n \"acc_norm_stderr\": 0.028609516716994934\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6367713004484304,\n\
\ \"acc_stderr\": 0.032277904428505,\n \"acc_norm\": 0.6367713004484304,\n\
\ \"acc_norm_stderr\": 0.032277904428505\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7022900763358778,\n \"acc_stderr\": 0.040103589424622034,\n\
\ \"acc_norm\": 0.7022900763358778,\n \"acc_norm_stderr\": 0.040103589424622034\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098823,\n \"\
acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098823\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7129629629629629,\n\
\ \"acc_stderr\": 0.043733130409147614,\n \"acc_norm\": 0.7129629629629629,\n\
\ \"acc_norm_stderr\": 0.043733130409147614\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7055214723926381,\n \"acc_stderr\": 0.03581165790474082,\n\
\ \"acc_norm\": 0.7055214723926381,\n \"acc_norm_stderr\": 0.03581165790474082\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n\
\ \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8846153846153846,\n\
\ \"acc_stderr\": 0.020930193185179333,\n \"acc_norm\": 0.8846153846153846,\n\
\ \"acc_norm_stderr\": 0.020930193185179333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7943805874840357,\n\
\ \"acc_stderr\": 0.01445250045678583,\n \"acc_norm\": 0.7943805874840357,\n\
\ \"acc_norm_stderr\": 0.01445250045678583\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.684971098265896,\n \"acc_stderr\": 0.025009313790069713,\n\
\ \"acc_norm\": 0.684971098265896,\n \"acc_norm_stderr\": 0.025009313790069713\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3217877094972067,\n\
\ \"acc_stderr\": 0.015624236160792582,\n \"acc_norm\": 0.3217877094972067,\n\
\ \"acc_norm_stderr\": 0.015624236160792582\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6862745098039216,\n \"acc_stderr\": 0.02656892101545715,\n\
\ \"acc_norm\": 0.6862745098039216,\n \"acc_norm_stderr\": 0.02656892101545715\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n\
\ \"acc_stderr\": 0.02616058445014045,\n \"acc_norm\": 0.6945337620578779,\n\
\ \"acc_norm_stderr\": 0.02616058445014045\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6604938271604939,\n \"acc_stderr\": 0.026348564412011624,\n\
\ \"acc_norm\": 0.6604938271604939,\n \"acc_norm_stderr\": 0.026348564412011624\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.450354609929078,\n \"acc_stderr\": 0.029680105565029036,\n \
\ \"acc_norm\": 0.450354609929078,\n \"acc_norm_stderr\": 0.029680105565029036\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4217731421121252,\n\
\ \"acc_stderr\": 0.012612974369390975,\n \"acc_norm\": 0.4217731421121252,\n\
\ \"acc_norm_stderr\": 0.012612974369390975\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6066176470588235,\n \"acc_stderr\": 0.029674288281311155,\n\
\ \"acc_norm\": 0.6066176470588235,\n \"acc_norm_stderr\": 0.029674288281311155\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6094771241830066,\n \"acc_stderr\": 0.019737008998094597,\n \
\ \"acc_norm\": 0.6094771241830066,\n \"acc_norm_stderr\": 0.019737008998094597\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6272727272727273,\n\
\ \"acc_stderr\": 0.04631381319425465,\n \"acc_norm\": 0.6272727272727273,\n\
\ \"acc_norm_stderr\": 0.04631381319425465\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6693877551020408,\n \"acc_stderr\": 0.030116426296540606,\n\
\ \"acc_norm\": 0.6693877551020408,\n \"acc_norm_stderr\": 0.030116426296540606\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n\
\ \"acc_stderr\": 0.02768691358801303,\n \"acc_norm\": 0.8109452736318408,\n\
\ \"acc_norm_stderr\": 0.02768691358801303\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \
\ \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n\
\ \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n\
\ \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072767,\n\
\ \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072767\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29008567931456547,\n\
\ \"mc1_stderr\": 0.01588623687420952,\n \"mc2\": 0.44316239933938906,\n\
\ \"mc2_stderr\": 0.014631197353059351\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7703235990528808,\n \"acc_stderr\": 0.011821645601838236\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3244882486732373,\n \
\ \"acc_stderr\": 0.012896095359768107\n }\n}\n```"
repo_url: https://huggingface.co/BarraHome/rezephyr-dpo
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|arc:challenge|25_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|gsm8k|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hellaswag|10_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T13-18-07.445187.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-09T13-18-07.445187.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- '**/details_harness|winogrande|5_2024-02-09T13-18-07.445187.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-09T13-18-07.445187.parquet'
- config_name: results
data_files:
- split: 2024_02_09T13_18_07.445187
path:
- results_2024-02-09T13-18-07.445187.parquet
- split: latest
path:
- results_2024-02-09T13-18-07.445187.parquet
---
# Dataset Card for Evaluation run of BarraHome/rezephyr-dpo
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [BarraHome/rezephyr-dpo](https://huggingface.co/BarraHome/rezephyr-dpo) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_BarraHome__rezephyr-dpo",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-09T13:18:07.445187](https://huggingface.co/datasets/open-llm-leaderboard/details_BarraHome__rezephyr-dpo/blob/main/results_2024-02-09T13-18-07.445187.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6025963080419425,
"acc_stderr": 0.0331436677667824,
"acc_norm": 0.6085579671532932,
"acc_norm_stderr": 0.03382908262424393,
"mc1": 0.29008567931456547,
"mc1_stderr": 0.01588623687420952,
"mc2": 0.44316239933938906,
"mc2_stderr": 0.014631197353059351
},
"harness|arc:challenge|25": {
"acc": 0.5358361774744027,
"acc_stderr": 0.01457381366473572,
"acc_norm": 0.575938566552901,
"acc_norm_stderr": 0.014441889627464398
},
"harness|hellaswag|10": {
"acc": 0.616211909978092,
"acc_stderr": 0.004853134271547766,
"acc_norm": 0.8174666401115316,
"acc_norm_stderr": 0.0038549403270910264
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5777777777777777,
"acc_stderr": 0.04266763404099582,
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.04266763404099582
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6052631578947368,
"acc_stderr": 0.039777499346220734,
"acc_norm": 0.6052631578947368,
"acc_norm_stderr": 0.039777499346220734
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6830188679245283,
"acc_stderr": 0.028637235639800893,
"acc_norm": 0.6830188679245283,
"acc_norm_stderr": 0.028637235639800893
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.037455547914624555,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.037455547914624555
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6242774566473989,
"acc_stderr": 0.036928207672648664,
"acc_norm": 0.6242774566473989,
"acc_norm_stderr": 0.036928207672648664
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384739,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384739
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5404255319148936,
"acc_stderr": 0.03257901482099835,
"acc_norm": 0.5404255319148936,
"acc_norm_stderr": 0.03257901482099835
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.42105263157894735,
"acc_stderr": 0.046446020912223177,
"acc_norm": 0.42105263157894735,
"acc_norm_stderr": 0.046446020912223177
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.025279850397404897,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.025279850397404897
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.04390259265377562,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.04390259265377562
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7580645161290323,
"acc_stderr": 0.02436259969303108,
"acc_norm": 0.7580645161290323,
"acc_norm_stderr": 0.02436259969303108
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.64,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.03401506715249039,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.03401506715249039
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.02985751567338642,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.02985751567338642
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7979274611398963,
"acc_stderr": 0.02897908979429673,
"acc_norm": 0.7979274611398963,
"acc_norm_stderr": 0.02897908979429673
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5743589743589743,
"acc_stderr": 0.025069094387296525,
"acc_norm": 0.5743589743589743,
"acc_norm_stderr": 0.025069094387296525
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.028897748741131147,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.028897748741131147
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6176470588235294,
"acc_stderr": 0.031566630992154156,
"acc_norm": 0.6176470588235294,
"acc_norm_stderr": 0.031566630992154156
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.0386155754625517,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.0386155754625517
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7889908256880734,
"acc_stderr": 0.01749392240411265,
"acc_norm": 0.7889908256880734,
"acc_norm_stderr": 0.01749392240411265
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.47685185185185186,
"acc_stderr": 0.03406315360711507,
"acc_norm": 0.47685185185185186,
"acc_norm_stderr": 0.03406315360711507
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7696078431372549,
"acc_stderr": 0.029554292605695066,
"acc_norm": 0.7696078431372549,
"acc_norm_stderr": 0.029554292605695066
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7383966244725738,
"acc_stderr": 0.028609516716994934,
"acc_norm": 0.7383966244725738,
"acc_norm_stderr": 0.028609516716994934
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6367713004484304,
"acc_stderr": 0.032277904428505,
"acc_norm": 0.6367713004484304,
"acc_norm_stderr": 0.032277904428505
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7022900763358778,
"acc_stderr": 0.040103589424622034,
"acc_norm": 0.7022900763358778,
"acc_norm_stderr": 0.040103589424622034
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098823,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098823
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7129629629629629,
"acc_stderr": 0.043733130409147614,
"acc_norm": 0.7129629629629629,
"acc_norm_stderr": 0.043733130409147614
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7055214723926381,
"acc_stderr": 0.03581165790474082,
"acc_norm": 0.7055214723926381,
"acc_norm_stderr": 0.03581165790474082
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8846153846153846,
"acc_stderr": 0.020930193185179333,
"acc_norm": 0.8846153846153846,
"acc_norm_stderr": 0.020930193185179333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7943805874840357,
"acc_stderr": 0.01445250045678583,
"acc_norm": 0.7943805874840357,
"acc_norm_stderr": 0.01445250045678583
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.684971098265896,
"acc_stderr": 0.025009313790069713,
"acc_norm": 0.684971098265896,
"acc_norm_stderr": 0.025009313790069713
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3217877094972067,
"acc_stderr": 0.015624236160792582,
"acc_norm": 0.3217877094972067,
"acc_norm_stderr": 0.015624236160792582
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6862745098039216,
"acc_stderr": 0.02656892101545715,
"acc_norm": 0.6862745098039216,
"acc_norm_stderr": 0.02656892101545715
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6945337620578779,
"acc_stderr": 0.02616058445014045,
"acc_norm": 0.6945337620578779,
"acc_norm_stderr": 0.02616058445014045
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6604938271604939,
"acc_stderr": 0.026348564412011624,
"acc_norm": 0.6604938271604939,
"acc_norm_stderr": 0.026348564412011624
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.450354609929078,
"acc_stderr": 0.029680105565029036,
"acc_norm": 0.450354609929078,
"acc_norm_stderr": 0.029680105565029036
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4217731421121252,
"acc_stderr": 0.012612974369390975,
"acc_norm": 0.4217731421121252,
"acc_norm_stderr": 0.012612974369390975
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6066176470588235,
"acc_stderr": 0.029674288281311155,
"acc_norm": 0.6066176470588235,
"acc_norm_stderr": 0.029674288281311155
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6094771241830066,
"acc_stderr": 0.019737008998094597,
"acc_norm": 0.6094771241830066,
"acc_norm_stderr": 0.019737008998094597
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6272727272727273,
"acc_stderr": 0.04631381319425465,
"acc_norm": 0.6272727272727273,
"acc_norm_stderr": 0.04631381319425465
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6693877551020408,
"acc_stderr": 0.030116426296540606,
"acc_norm": 0.6693877551020408,
"acc_norm_stderr": 0.030116426296540606
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8109452736318408,
"acc_stderr": 0.02768691358801303,
"acc_norm": 0.8109452736318408,
"acc_norm_stderr": 0.02768691358801303
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.02917088550072767,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.02917088550072767
},
"harness|truthfulqa:mc|0": {
"mc1": 0.29008567931456547,
"mc1_stderr": 0.01588623687420952,
"mc2": 0.44316239933938906,
"mc2_stderr": 0.014631197353059351
},
"harness|winogrande|5": {
"acc": 0.7703235990528808,
"acc_stderr": 0.011821645601838236
},
"harness|gsm8k|5": {
"acc": 0.3244882486732373,
"acc_stderr": 0.012896095359768107
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CyberHarem/natori_kantaicollection | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of natori/名取/名取 (Kantai Collection)
This is the dataset of natori/名取/名取 (Kantai Collection), containing 268 images and their tags.
The core tags of this character are `short_hair, brown_hair, brown_eyes, hairband, white_hairband, breasts, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([Hugging Face organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 268 | 198.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natori_kantaicollection/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 268 | 142.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natori_kantaicollection/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 544 | 269.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natori_kantaicollection/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 268 | 185.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natori_kantaicollection/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 544 | 334.79 MiB | [Download](https://huggingface.co/datasets/CyberHarem/natori_kantaicollection/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/natori_kantaicollection',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, detached_sleeves, serafuku, solo, brown_sailor_collar, dated, looking_at_viewer, twitter_username, one-hour_drawing_challenge, simple_background, white_background, pleated_skirt, red_skirt, white_thighhighs, cowboy_shot, black_neckerchief, brown_neckerchief, shirt |
| 1 | 7 |  |  |  |  |  | 1girl, blush, solo, cleavage, collarbone, looking_at_viewer, yukata, bare_shoulders, off_shoulder, open_mouth, simple_background, twitter_username, white_background, obi, tears, upper_body |
| 2 | 8 |  |  |  |  |  | 1girl, yukata, obi, solo, bagged_fish, goldfish, smile, uchiwa, twitter_username, alternate_costume, blush, open_mouth, wide_sleeves |
| 3 | 6 |  |  |  |  |  | 1girl, cleavage, simple_background, solo, blush, cowboy_shot, white_background, bikini, collarbone, looking_at_viewer, navel, cropped_legs, open_mouth, twitter_username |
| 4 | 6 |  |  |  |  |  | 1girl, blush, gym_uniform, open_mouth, red_buruma, short_sleeves, solo, white_shirt, looking_at_viewer, gym_shirt, simple_background |
| 5 | 6 |  |  |  |  |  | 1girl, blush, hetero, open_mouth, penis, solo_focus, 1boy, sex, bar_censor, cum_in_pussy, nipples, vaginal, cowgirl_position, girl_on_top, sweat, tears |
| 6 | 11 |  |  |  |  |  | fake_animal_ears, playboy_bunny, rabbit_ears, cleavage, detached_collar, 1girl, solo, strapless_leotard, looking_at_viewer, black_bowtie, blush, simple_background, white_background, wrist_cuffs, black_leotard, sitting, alternate_costume, black_pantyhose, cowboy_shot |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | detached_sleeves | serafuku | solo | brown_sailor_collar | dated | looking_at_viewer | twitter_username | one-hour_drawing_challenge | simple_background | white_background | pleated_skirt | red_skirt | white_thighhighs | cowboy_shot | black_neckerchief | brown_neckerchief | shirt | blush | cleavage | collarbone | yukata | bare_shoulders | off_shoulder | open_mouth | obi | tears | upper_body | bagged_fish | goldfish | smile | uchiwa | alternate_costume | wide_sleeves | bikini | navel | cropped_legs | gym_uniform | red_buruma | short_sleeves | white_shirt | gym_shirt | hetero | penis | solo_focus | 1boy | sex | bar_censor | cum_in_pussy | nipples | vaginal | cowgirl_position | girl_on_top | sweat | fake_animal_ears | playboy_bunny | rabbit_ears | detached_collar | strapless_leotard | black_bowtie | wrist_cuffs | black_leotard | sitting | black_pantyhose |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------------------|:-----------|:-------|:----------------------|:--------|:--------------------|:-------------------|:-----------------------------|:--------------------|:-------------------|:----------------|:------------|:-------------------|:--------------|:--------------------|:--------------------|:--------|:--------|:-----------|:-------------|:---------|:-----------------|:---------------|:-------------|:------|:--------|:-------------|:--------------|:-----------|:--------|:---------|:--------------------|:---------------|:---------|:--------|:---------------|:--------------|:-------------|:----------------|:--------------|:------------|:---------|:--------|:-------------|:-------|:------|:-------------|:---------------|:----------|:----------|:-------------------|:--------------|:--------|:-------------------|:----------------|:--------------|:------------------|:--------------------|:---------------|:--------------|:----------------|:----------|:------------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 |  |  |  |  |  | X | | | X | | | X | X | | X | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | | X | | | | X | | | | | | | | | | | X | | | X | | | X | X | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | X | | | X | X | | X | X | | | | X | | | | X | X | X | | | | X | | | | | | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 6 |  |  |  |  |  | X | | | X | | | X | | | X | | | | | | | | | X | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | | | | | | | | | | | | | | | | | X | | | | | | X | | X | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 6 | 11 |  |  |  |  |  | X | | | X | | | X | | | X | X | | | | X | | | | X | X | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
Chunt0/aum-12-5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 18547609.0
num_examples: 32
download_size: 18514784
dataset_size: 18547609.0
---
# Dataset Card for "aum-12-5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
celerious/processed-hinglish-CMU | ---
dataset_info:
features:
- name: rating
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1348908
num_examples: 8060
- name: test
num_bytes: 168890
num_examples: 960
- name: validation
num_bytes: 162920
num_examples: 942
download_size: 841328
dataset_size: 1680718
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
xiazeyu/WildfireSimMaps | ---
dataset_info:
features:
- name: name
dtype: string
- name: canopy
sequence: int8
- name: density
sequence: float32
- name: slope
sequence: int8
- name: shape
sequence: int16
length: 2
splits:
- name: train
num_bytes: 27490487
num_examples: 6
download_size: 7175919
dataset_size: 27490487
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc
task_categories:
- feature-extraction
tags:
- climate
- geology
size_categories:
- n<1K
---
# WildfireSimMaps
## Description
This is a dataset containing real-world map data for wildfire simulations.
The data is in the form of 2D maps with the following features:
- `name`: The name of the map data.
- `shape`: The shape of the area, in pixels.
- `canopy`: The canopy cover in the area, as a percentage.
- `density`: The density of the area, as a percentage.
- `slope`: The slope of the area, in degrees.
## Quick Start
Install the package using pip:
```bash
pip install datasets
```
Then you can use the dataset as follows with **NumPy**:
```python
import numpy as np
from datasets import load_dataset
# Load the dataset
ds = load_dataset("xiazeyu/WildfireSimMaps", split="train")
ds = ds.with_format("numpy")
def preprocess_function(examples):
# Reshape arrays based on the 'shape' field
examples['density'] = [d.reshape(sh) for d, sh in zip(examples['density'], examples['shape'])]
examples['slope'] = [s.reshape(sh) for s, sh in zip(examples['slope'], examples['shape'])]
examples['canopy'] = [c.reshape(sh) for c, sh in zip(examples['canopy'], examples['shape'])]
return examples
ds = ds.map(preprocess_function, batched=True, batch_size=None) # Adjust batch_size as needed
print(ds[0])
```
To use the dataset with **PyTorch**, you can use the following code:
```python
import torch
from datasets import load_dataset
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Load the dataset
ds = load_dataset("xiazeyu/WildfireSimMaps", split="train")
ds = ds.with_format("torch", device=device)
def preprocess_function(examples):
# Reshape arrays based on the 'shape' field
examples['density'] = [d.reshape(sh.tolist()) for d, sh in zip(examples['density'], examples['shape'])]
examples['slope'] = [s.reshape(sh.tolist()) for s, sh in zip(examples['slope'], examples['shape'])]
examples['canopy'] = [c.reshape(sh.tolist()) for c, sh in zip(examples['canopy'], examples['shape'])]
return examples
ds = ds.map(preprocess_function, batched=True, batch_size=None) # Adjust batch_size as needed
print(ds[0])
```
## Next Steps
In order to make practical use of this dataset, you may perform the following tasks:
- scale or normalize the data to fit your model's requirements
- reshape the data to fit your model's input shape
- stack the data into a single tensor if needed
- perform data augmentation if needed
- split the data into training, validation, and test sets
In general, you can use the dataset as you would use any other dataset in your pipeline.
And the most important thing is to have fun and learn from the data!
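As a minimal sketch of the scaling and stacking steps above (using synthetic arrays in place of real map channels, since shapes vary per map and the value ranges assumed here are illustrative):

```python
import numpy as np

# Synthetic stand-ins for one map's reshaped channels (real maps vary in shape).
shape = (4, 5)
canopy = np.random.randint(0, 101, size=shape).astype(np.int8)   # percent cover
density = np.random.rand(*shape).astype(np.float32)              # already in [0, 1]
slope = np.random.randint(0, 46, size=shape).astype(np.int8)     # degrees

# Scale each channel to [0, 1] so they are comparable.
canopy_n = canopy.astype(np.float32) / 100.0
slope_n = slope.astype(np.float32) / 90.0

# Stack into a single (channels, H, W) tensor for a CNN-style model.
x = np.stack([canopy_n, density, slope_n], axis=0)
print(x.shape)  # (3, 4, 5)
```

The same pattern carries over to PyTorch by replacing `np.stack` with `torch.stack` on the reshaped tensors.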
## Visualization
Density

Canopy

Slope

## License
The dataset is licensed under the CC BY-NC 4.0 License.
## Contact
- Zeyu Xia - yxn7cj@virginia.edu
- Sibo Cheng - sibo.cheng@imperial.ac.uk
|
roszcz/maestro-sustain-v2 | ---
dataset_info:
features:
- name: notes
struct:
- name: duration
sequence: float64
- name: end
sequence: float64
- name: pitch
sequence: int64
- name: start
sequence: float64
- name: velocity
sequence: int64
- name: source
dtype: string
splits:
- name: test
num_bytes: 29702019
num_examples: 177
- name: validation
num_bytes: 25612865
num_examples: 137
- name: train
num_bytes: 226620478
num_examples: 962
download_size: 87293150
dataset_size: 281935362
---
# Dataset Card for "maestro-sustain-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
koutch/stackoverflow_question_types | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: title
dtype: string
- name: question_body
dtype: string
- name: question_type
dtype: string
- name: question_date
dtype: string
splits:
- name: train
num_bytes: 3433758
num_examples: 3449
- name: test
num_bytes: 12055
num_examples: 14
download_size: 0
dataset_size: 3445813
license: cc
task_categories:
- text-classification
language:
- en
tags:
- code
pretty_name: staqt
size_categories:
- 1K<n<10K
---
# Dataset Card for "stackoverflow_question_types"
## NOTE: the dataset is still under annotation
## Dataset Description
Recent research has looked into leveraging data from Stack Overflow (SO) to train large language models for programming-related tasks.
However, users ask a wide range of questions on Stack Overflow. The "stackoverflow question types" dataset contains manually annotated questions
posted on SO, each with an associated type. Following a previous [study](https://ieeexplore.ieee.org/document/6405249), each question was annotated with a type
capturing the main concern of the user who posted it. The questions were annotated with the following types:
* *Need to know*: Questions regarding the possibility or availability of (doing) something. These questions normally show the lack of knowledge or uncertainty about some aspects of the technology (e.g. the presence of a feature in an API or a language).
* *How to do it*: Providing a scenario and asking how to implement it (sometimes with a given technology or API).
* *Debug/corrective*: Dealing with problems in the code under development, such as runtime errors and unexpected behaviour.
* *Seeking different solutions*: The questioner has a working code yet seeks a different approach to doing the job.
* *Conceptual*: The question seeks to understand some aspects of programming (with or without using code examples)
* *Other*: a question related to another aspect of programming, or even non-related to programming.
### Remarks
For this dataset, we are mainly interested in questions related to *programming*.
For instance, for [this question](https://stackoverflow.com/questions/51142399/no-acceptable-c-compiler-found-in-path-installing-python-and-gcc),
the user is "trying to install Python-3.6.5 on a machine that does not have any package manager installed" and is facing issues.
Because it's not related to the concept of programming, we would classify it as "other" and not "debugging".
Moreover, we note the following conceptual distinctions between the different categories:
- Need to know: the user asks "is it possible to do x"
- How to do it: the user wants to do "x", knows it's possible, but has no clear idea or solution/doesn't know how to do it -> wants any solution for solving "x".
- Debug: the user wants to do "x", and has a clear idea/solution "y" but it is not working, and is seeking a correction to "y".
- Seeking-different-solution: the user wants to do "x", and has found already a working solution "y", but is seeking an alternative "z".
Sometimes it is hard to discern a user's true intention;
the line separating categories can be thin and open to interpretation.
Naturally, some questions may have multiple concerns (i.e. could correspond to multiple categories).
However, this dataset mainly contains questions to which a single clear category could be assigned.
Currently, all annotated questions are a subset of the [stackoverflow_python](https://huggingface.co/datasets/koutch/stackoverflow_python) dataset.
### Languages
The currently annotated questions concern posts with the *python* tag. The questions are written in *English*.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- question_id: the unique id of the post
- question_body: the (HTML) content of the question
- question_type: the assigned category/type/label
- "needtoknow"
- "howto",
- "debug",
- "seeking",
- "conceptual",
- "other"
### Data Splits
[More Information Needed]
## Dataset Creation
### Annotations
#### Annotation process
Previous research looked into mining natural language-code pairs from Stack Overflow.
Two notable works yielded the [StaQC](https://arxiv.org/abs/1803.09371) and [CoNaLa](https://arxiv.org/abs/1805.08949) datasets.
Part of this dataset reuses a subset of the manual annotations provided by the authors of those papers (available at [staqc](https://huggingface.co/datasets/koutch/staqc)
and [conala](https://huggingface.co/datasets/neulab/conala)). These questions were annotated as belonging to the "how to do it" category.
To ease the annotation procedure, we used the [argilla platform](https://docs.argilla.io/en/latest/index.html)
and multiple iterations of [few-shot training with a SetFit model](https://docs.argilla.io/en/latest/tutorials/notebooks/labelling-textclassification-setfit-zeroshot.html#%F0%9F%A6%BE-Train-a-few-shot-SetFit-model).
## Considerations for Using the Data
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed] |
mclovinxie/litigiven-showcase | ---
license: apache-2.0
---
|
Yakage/OtakuTon | ---
license: openrail
---
|
Asap7772/education_autolabel_noisy | ---
dataset_info:
features:
- name: level_int
dtype: int64
- name: is_flipped
dtype: int64
- name: x
dtype: string
- name: yw
dtype: string
- name: model_yw
dtype: string
- name: level_yw
dtype: string
- name: level_int_yw
dtype: int64
- name: diff_yw
dtype: int64
- name: yl
dtype: string
- name: model_yl
dtype: string
- name: level_yl
dtype: string
- name: level_int_yl
dtype: int64
- name: diff_yl
dtype: int64
- name: level
dtype: string
splits:
- name: train
num_bytes: 60299094.6
num_examples: 30204
- name: test
num_bytes: 6699899.4
num_examples: 3356
download_size: 25734613
dataset_size: 66998994.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
dmayhem93/self-critiquing-helpful-rate-train | ---
dataset_info:
features:
- name: id
dtype: string
- name: source_id
dtype: string
- name: split
dtype: string
- name: time
dtype: float64
- name: labeler
dtype: string
- name: is_topic_based_summarization
dtype: bool
- name: prompt
dtype: string
- name: helpful
dtype: bool
splits:
- name: train
num_bytes: 185274964
num_examples: 33168
download_size: 0
dataset_size: 185274964
---
# Dataset Card for "self-critiquing-helpful-rate-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
khalilmas9/Fashion_brands | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 1578858.0
num_examples: 10
download_size: 1563176
dataset_size: 1578858.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nikchar/paper_test_assym | ---
dataset_info:
features:
- name: label
dtype: string
- name: claim
dtype: string
- name: evidence_wiki_url
dtype: string
- name: text
dtype: string
- name: retrieved_evidence_title
sequence: string
- name: retrieved_evidence_text
sequence: string
splits:
- name: train
num_bytes: 73088087
num_examples: 11073
download_size: 34395774
dataset_size: 73088087
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "paper_test_assym_bert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceTB/cosmopedia-100k | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: text_token_length
dtype: int64
- name: text
dtype: string
- name: seed_data
dtype: string
- name: format
dtype: string
- name: audience
dtype: string
splits:
- name: train
num_bytes: 534014692.0830894
num_examples: 100000
download_size: 306627644
dataset_size: 534014692.0830894
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
language:
- en
tags:
- synthetic
---
# Dataset description
This is a 100k subset of [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) dataset. A synthetic dataset of textbooks, blogposts, stories, posts and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1.
Here's how you can load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceTB/cosmopedia-100k", split="train")
```
|
communityai/Open-Orca___1million-gpt-4-100k | ---
dataset_info:
features:
- name: source
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 185566561.22851032
num_examples: 100000
download_size: 97840738
dataset_size: 185566561.22851032
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mteb/fiqa | ---
language:
- en
multilinguality:
- monolingual
task_categories:
- text-retrieval
source_datasets:
- fiqa
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- text-retrieval
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 365642
num_examples: 14166
- name: dev
num_bytes: 31919
num_examples: 1238
- name: test
num_bytes: 43996
num_examples: 1706
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 45303212
num_examples: 57638
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 491278
num_examples: 6648
configs:
- config_name: default
data_files:
- split: train
path: qrels/train.jsonl
- split: dev
path: qrels/dev.jsonl
- split: test
path: qrels/test.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
--- |
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_train-markdown-10000 | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 13336000
num_examples: 1000
download_size: 1107742
dataset_size: 13336000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Abhimanu/dd | ---
license: unknown
---
|
Rijgersberg/ultrachat_10k_nl | ---
configs:
- config_name: default
data_files:
- split: test_sft
path: data/test_sft-*
- split: train_sft
path: data/train_sft-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages_nl
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: test_sft
num_bytes: 6296981
num_examples: 500
- name: train_sft
num_bytes: 120475850
num_examples: 9500
download_size: 65516955
dataset_size: 126772831
license: cc-by-nc-4.0
language:
- nl
- en
tags:
- GEITje
task_categories:
- conversational
- text-generation
size_categories:
- 10K<n<100K
pretty_name: Ultrachat 10k NL
---
# Dataset Card for "ultrachat_10k_nl"
A translated version of 10k randomly selected examples from [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k).
Automatically translated by GPT-3.5.
## More info
Read more about GEITje-chat, the datasets and the translation code in the [📄 README](https://github.com/Rijgersberg/GEITje/blob/main/README-en.md) on GitHub.
|
vdaita/commitpackft-patches | ---
dataset_info:
features:
- name: input
dtype: string
- name: input_inst
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4815901
num_examples: 810
download_size: 1264607
dataset_size: 4815901
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Yidhar/danbooru_aesthetic_test | ---
license: mit
---
|
Codec-SUPERB/fluent_speech_commands_synth | ---
configs:
- config_name: default
data_files:
- split: original
path: data/original-*
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k_12bps
path: data/encodec_24k_12bps-*
- split: encodec_24k_1_5bps
path: data/encodec_24k_1_5bps-*
- split: encodec_24k_24bps
path: data/encodec_24k_24bps-*
- split: encodec_24k_3bps
path: data/encodec_24k_3bps-*
- split: encodec_24k_6bps
path: data/encodec_24k_6bps-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
splits:
- name: original
num_bytes: 2220326464.0
num_examples: 30043
- name: academicodec_hifi_16k_320d
num_bytes: 2212154504.0
num_examples: 30043
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 2212154504.0
num_examples: 30043
- name: academicodec_hifi_24k_320d
num_bytes: 3322180744.0
num_examples: 30043
- name: audiodec_24k_320d
num_bytes: 3338935944.0
num_examples: 30043
- name: dac_16k
num_bytes: 2221347926.0
num_examples: 30043
- name: dac_24k
num_bytes: 3329678726.0
num_examples: 30043
- name: dac_44k
num_bytes: 6114326168.0
num_examples: 30043
- name: encodec_24k_12bps
num_bytes: 3329678726.0
num_examples: 30043
- name: encodec_24k_1_5bps
num_bytes: 3329678726.0
num_examples: 30043
- name: encodec_24k_24bps
num_bytes: 3329678726.0
num_examples: 30043
- name: encodec_24k_3bps
num_bytes: 3329678726.0
num_examples: 30043
- name: encodec_24k_6bps
num_bytes: 3329678726.0
num_examples: 30043
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 2219150286.0
num_examples: 30043
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 2219150286.0
num_examples: 30043
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 2221347926.0
num_examples: 30043
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 2221347926.0
num_examples: 30043
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 2221347926.0
num_examples: 30043
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 2221347926.0
num_examples: 30043
- name: speech_tokenizer_16k
num_bytes: 2230445064.0
num_examples: 30043
download_size: 21108462066
dataset_size: 57173635950.0
---
# Dataset Card for "fluent_speech_commands_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yousaforever/yousa_data_0 | ---
license: gpl-3.0
---
本数据集共138min,大概包含yousa的50首歌(大部分在2016-2022年),已经过切片处理并筛选,时长在4-15s,共796条wav音频数据。中文占绝大部分,有少量日文及极少英文。
This dataset totals 138 minutes of audio and includes approximately 50 songs by yousa (most released between 2016 and 2022). It has been sliced and filtered into clips of 4 to 15 seconds, yielding 796 WAV audio files. The majority of the content is in Chinese, with a small amount of Japanese and very little English. |
wurongbo/wurongbo | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: test
num_bytes: 83036.0
num_examples: 3
- name: train
num_bytes: 83036.0
num_examples: 3
download_size: 169770
dataset_size: 166072.0
---
# Dataset Card for "wurongbo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mask-distilled-onesec-cv12-each-chunk-uniq/chunk_187 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 783516224.0
num_examples: 153872
download_size: 797716983
dataset_size: 783516224.0
---
# Dataset Card for "chunk_187"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yuan-sf63/word_label_0.8_96_Nf | ---
dataset_info:
features:
- name: text
dtype: string
- name: '0'
dtype: int64
- name: '1'
dtype: int64
- name: '2'
dtype: int64
- name: '3'
dtype: int64
- name: '4'
dtype: int64
- name: '5'
dtype: int64
- name: '6'
dtype: int64
- name: '7'
dtype: int64
- name: '8'
dtype: int64
- name: '9'
dtype: int64
- name: '10'
dtype: int64
- name: '11'
dtype: int64
- name: '12'
dtype: int64
- name: '13'
dtype: int64
- name: '14'
dtype: int64
- name: '15'
dtype: int64
- name: '16'
dtype: int64
- name: '17'
dtype: int64
- name: '18'
dtype: int64
- name: '19'
dtype: int64
- name: '20'
dtype: int64
- name: '21'
dtype: int64
- name: '22'
dtype: int64
- name: '23'
dtype: int64
- name: '24'
dtype: int64
- name: '25'
dtype: int64
- name: '26'
dtype: int64
- name: '27'
dtype: int64
- name: '28'
dtype: int64
- name: '29'
dtype: int64
- name: '30'
dtype: int64
- name: '31'
dtype: int64
- name: '32'
dtype: int64
- name: '33'
dtype: int64
- name: '34'
dtype: int64
- name: '35'
dtype: int64
- name: '36'
dtype: int64
- name: '37'
dtype: int64
- name: '38'
dtype: int64
- name: '39'
dtype: int64
- name: '40'
dtype: int64
- name: '41'
dtype: int64
- name: '42'
dtype: int64
- name: '43'
dtype: int64
- name: '44'
dtype: int64
- name: '45'
dtype: int64
- name: '46'
dtype: int64
- name: '47'
dtype: int64
- name: '48'
dtype: int64
- name: '49'
dtype: int64
- name: '50'
dtype: int64
- name: '51'
dtype: int64
- name: '52'
dtype: int64
- name: '53'
dtype: int64
- name: '54'
dtype: int64
- name: '55'
dtype: int64
- name: '56'
dtype: int64
- name: '57'
dtype: int64
- name: '58'
dtype: int64
- name: '59'
dtype: int64
- name: '60'
dtype: int64
- name: '61'
dtype: int64
- name: '62'
dtype: int64
- name: '63'
dtype: int64
- name: '64'
dtype: int64
- name: '65'
dtype: int64
- name: '66'
dtype: int64
- name: '67'
dtype: int64
- name: '68'
dtype: int64
- name: '69'
dtype: int64
- name: '70'
dtype: int64
- name: '71'
dtype: int64
- name: '72'
dtype: int64
- name: '73'
dtype: int64
- name: '74'
dtype: int64
- name: '75'
dtype: int64
- name: '76'
dtype: int64
- name: '77'
dtype: int64
- name: '78'
dtype: int64
- name: '79'
dtype: int64
- name: '80'
dtype: int64
- name: '81'
dtype: int64
- name: '82'
dtype: int64
- name: '83'
dtype: int64
- name: '84'
dtype: int64
- name: '85'
dtype: int64
- name: '86'
dtype: int64
- name: '87'
dtype: int64
- name: '88'
dtype: int64
- name: '89'
dtype: int64
- name: '90'
dtype: int64
- name: '91'
dtype: int64
- name: '92'
dtype: int64
- name: '93'
dtype: int64
- name: '94'
dtype: int64
- name: '95'
dtype: int64
splits:
- name: train
num_bytes: 64568588.254601575
num_examples: 71730
- name: validation
num_bytes: 7175187.745398426
num_examples: 7971
download_size: 10767675
dataset_size: 71743776.0
---
# Dataset Card for "word_label_0.8_96_Nf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Robin-Amann/gap_dataset | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: audio_length
dtype: float64
- name: transcript
dtype: string
- name: label
dtype:
class_label:
names:
'0': silence
'1': hesitation
splits:
- name: train
num_bytes: 8850423007.447142
num_examples: 176275
- name: test
num_bytes: 2206036686.860858
num_examples: 44069
download_size: 10732595786
dataset_size: 11056459694.307999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
DialogueCharacter/english_preference_stanfordnlp_SHP_unfiltered | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 315493419
num_examples: 112568
download_size: 75641649
dataset_size: 315493419
---
# Dataset Card for "english_preference_stanfordnlp_SHP_unfiltered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
quan246/news_test | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: translation
struct:
- name: en
dtype: string
- name: vi
dtype: string
splits:
- name: test
num_bytes: 628165
num_examples: 2352
download_size: 360263
dataset_size: 628165
---
# Dataset Card for "news_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Concyclics/RenMinDaily | ---
license: apache-2.0
task_categories:
- text-generation
- summarization
- question-answering
language:
- zh
tags:
- medical
size_categories:
- 10K<n<100K
---
# Dataset Card for RenMinDaily
This is a collection of People's Daily (RenMinDaily) reports from 2021/01/01 to 2023/12/05, with the article title used as the instruction.
|
Maxlinn/TruthfulQA_zh | ---
license: mit
task_categories:
- question-answering
language:
- zh
tags:
- truthfulqa
---
The TruthfulQA dataset CSV, with the question and answer fields translated into Chinese by prompting GPT-4. |
toilaluan/ig_rewarding_db_v4 | ---
dataset_info:
features:
- name: image
dtype: image
- name: topic
dtype: string
- name: prompt
dtype: string
- name: request_id
dtype: int64
- name: model_type
dtype: string
splits:
- name: train
num_bytes: 330547445.0
num_examples: 4500
download_size: 340509190
dataset_size: 330547445.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ig_rewarding_db_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_stsb_irrealis_be_done | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 6403
num_examples: 29
- name: test
num_bytes: 2389
num_examples: 14
- name: train
num_bytes: 8282
num_examples: 44
download_size: 20672
dataset_size: 17074
---
# Dataset Card for "MULTI_VALUE_stsb_irrealis_be_done"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amphora/mlesg-fit | ---
configs:
- config_name: type
data_files:
- split: test
path: ml-esg-type.csv
- config_name: duration
data_files:
- split: test
path: ml-esg-duration.csv
license: mit
---
|
PORTULAN/extraglue | ---
pretty_name: ExtraGLUE
language:
- pt
source_datasets:
- glue
- superglue
license: mit
viewer: false
task_categories:
- text-classification
- sentence-similarity
- question-answering
task_ids:
- language-modeling
- multi-class-classification
- natural-language-inference
- sentiment-classification
- semantic-similarity-scoring
- semantic-similarity-classification
---
</br>
</br>
<img align="left" width="40" height="40" src="https://github.githubassets.com/images/icons/emoji/unicode/1f917.png">
<p style="text-align: center;"> This is the dataset card for extraGLUE.
You may be interested in some of the other <a href="https://huggingface.co/PORTULAN">datasets for Portuguese</a> and in the models trained with them,
namely <a href="https://huggingface.co/PORTULAN">Albertina (encoders) and Gervásio (decoders) families</a>.
</p>
</br>
</br>
ExtraGLUE
===
</br>
ExtraGLUE is a Portuguese dataset obtained by the automatic translation of some of the tasks in the GLUE and SuperGLUE benchmarks.
Two variants of Portuguese are considered, namely European Portuguese and American Portuguese.
The dataset is distributed for free under an open license.
The 14 tasks in extraGLUE cover different aspects of language understanding:
*Single sentence*
- **SST-2** is a task for predicting the sentiment polarity of movie reviews.
*Semantic similarity*
- **MRPC** is a task for determining whether a pair of sentences are mutual paraphrases.
- **STS-B** is a task for predicting a similarity score (from 1 to 5) for each sentence pair.
*Inference*
- **MNLI** is a task to determine if a given premise sentence entails, contradicts, or is neutral to a hypothesis sentence; this task includes **matched** (in-domain) and **mismatched** (cross-domain) validation and test sets.
- **QNLI** is a question-answering task converted to determine whether the context sentence contains the answer to the question.
- **RTE** is a task for determining whether a premise sentence entails a hypothesis sentence.
- **WNLI** is a pronoun resolution task formulated as sentence pair entailment classification where, in the second sentence, the pronoun is replaced by a possible referent.
- **CB** comprises short texts with embedded clauses; one such clause is extracted as a hypothesis and should be classified as neutral, entailment or contradiction.
- **AX_b** is designed to test models across a wide spectrum of linguistic, commonsense, and world knowledge; each instance contains a sentence pair labeled with entailment or not entailment.
- **AX_g** is designed to measure gender bias, where each premise sentence includes a male or female pronoun and a hypothesis includes a possible referent for the pronoun.
*Question answering*
- **BoolQ** is a question-answering task where yes/no questions are given for short text passages.
- **MultiRC** is a task where, given a context paragraph, a question, and an answer, the goal is to determine whether the answer is true; for the same context and question, more than one answer may be correct.
*Reasoning*
- **CoPA** is a causal reasoning task: given a premise, two alternatives, and a cause/effect prompt, the system must choose the more plausible alternative.
# Acknowledgments
The research reported here was partially supported by:
PORTULAN CLARIN—Research Infrastructure for the Science and Technology of Language, funded by Lisboa 2020, Alentejo 2020 and FCT—Fundação para a Ciência e Tecnologia under the
grant PINFRA/22117/2016;
research project GPT-PT - Transformer-based Decoder for the Portuguese Language, funded by FCT—Fundação para a Ciência e Tecnologia under the
grant CPCA-IAC/AV/478395/2022;
innovation project ACCELERAT.AI - Multilingual Intelligent Contact Centers, funded by IAPMEI, I.P. - Agência para a Competitividade e Inovação
under the grant C625734525-00462629, of Plano de Recuperação e Resiliência, call RE-C05-i01.01 – Agendas/Alianças Mobilizadoras para a Reindustrialização. |
NickyNicky/aya_dataset_multilingual_inputs_targets_ext10 | ---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: targets_es
dtype: string
- name: targets_en
dtype: string
- name: targets_fr
dtype: string
- name: targets_de
dtype: string
- name: inputs_es
dtype: string
- name: inputs_en
dtype: string
- name: inputs_fr
dtype: string
- name: inputs_de
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1248900
num_examples: 460
download_size: 830810
dataset_size: 1248900
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MaryamAlAli/Mixat_train | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 5445559256.262759
num_examples: 3726
download_size: 4837500403
dataset_size: 5445559256.262759
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Mixat_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-eval-conll2003-conll2003-962530-2172769890 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- conll2003
eval_info:
task: entity_extraction
model: classtest/berttest2
metrics: []
dataset_name: conll2003
dataset_config: conll2003
dataset_split: test
col_mapping:
tokens: tokens
tags: ner_tags
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: classtest/berttest2
* Dataset: conll2003
* Config: conll2003
* Split: test
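The `col_mapping` in the metadata above tells the evaluator which dataset columns feed each task input (`tokens` from `tokens`, `tags` from `ner_tags`). A minimal sketch of the renaming this implies; the helper below is hypothetical, not an AutoTrain API.

```python
def apply_col_mapping(example, col_mapping):
    """Project a dataset row onto the column names the task expects.

    col_mapping maps task-side names to dataset-side names, mirroring the
    `col_mapping` block in the card metadata above.
    """
    return {task_col: example[data_col] for task_col, data_col in col_mapping.items()}

# Mapping from the card above (task column -> dataset column).
col_mapping = {"tokens": "tokens", "tags": "ner_tags"}

# A hypothetical CoNLL-2003-style row.
row = {"id": "0", "tokens": ["EU", "rejects", "German", "call"], "ner_tags": [3, 0, 7, 0]}

print(apply_col_mapping(row, col_mapping))
```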
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@classtest](https://huggingface.co/classtest) for evaluating this model. |
liuyanchen1015/MULTI_VALUE_wnli_not_preverbal_negator | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 1208
num_examples: 6
- name: test
num_bytes: 1268
num_examples: 4
- name: train
num_bytes: 5519
num_examples: 27
download_size: 12761
dataset_size: 7995
---
# Dataset Card for "MULTI_VALUE_wnli_not_preverbal_negator"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Neuronovo__neuronovo-7B-v0.2 | ---
pretty_name: Evaluation run of Neuronovo/neuronovo-7B-v0.2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Neuronovo__neuronovo-7B-v0.2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-06T03:10:43.608227](https://huggingface.co/datasets/open-llm-leaderboard/details_Neuronovo__neuronovo-7B-v0.2/blob/main/results_2024-01-06T03-10-43.608227.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6556420257543674,\n\
\ \"acc_stderr\": 0.03196441496112865,\n \"acc_norm\": 0.6567576467204072,\n\
\ \"acc_norm_stderr\": 0.03260699328743241,\n \"mc1\": 0.572827417380661,\n\
\ \"mc1_stderr\": 0.017316834410963915,\n \"mc2\": 0.7102141321993041,\n\
\ \"mc2_stderr\": 0.015005749746417735\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.7098976109215017,\n \"acc_stderr\": 0.013261573677520767,\n\
\ \"acc_norm\": 0.7303754266211604,\n \"acc_norm_stderr\": 0.012968040686869148\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7177853017327226,\n\
\ \"acc_stderr\": 0.0044915745394418834,\n \"acc_norm\": 0.8831905994821748,\n\
\ \"acc_norm_stderr\": 0.003205366051421362\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6447368421052632,\n \"acc_stderr\": 0.03894734487013317,\n\
\ \"acc_norm\": 0.6447368421052632,\n \"acc_norm_stderr\": 0.03894734487013317\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n\
\ \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \
\ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.720754716981132,\n \"acc_stderr\": 0.027611163402399715,\n\
\ \"acc_norm\": 0.720754716981132,\n \"acc_norm_stderr\": 0.027611163402399715\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n\
\ \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n\
\ \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \
\ \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \
\ \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6994219653179191,\n\
\ \"acc_stderr\": 0.03496101481191179,\n \"acc_norm\": 0.6994219653179191,\n\
\ \"acc_norm_stderr\": 0.03496101481191179\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n\
\ \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.041633319989322626,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.041633319989322626\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5872340425531914,\n \"acc_stderr\": 0.03218471141400352,\n\
\ \"acc_norm\": 0.5872340425531914,\n \"acc_norm_stderr\": 0.03218471141400352\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5586206896551724,\n \"acc_stderr\": 0.04137931034482757,\n\
\ \"acc_norm\": 0.5586206896551724,\n \"acc_norm_stderr\": 0.04137931034482757\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42857142857142855,\n \"acc_stderr\": 0.025487187147859375,\n \"\
acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.025487187147859375\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7838709677419354,\n \"acc_stderr\": 0.02341529343356853,\n \"\
acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.02341529343356853\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"\
acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\"\
: 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.032568666616811015,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.032568666616811015\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386414,\n \"\
acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386414\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n\
\ \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657262,\n\
\ \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657262\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465066,\n \
\ \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465066\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.030066761582977934,\n\
\ \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.030066761582977934\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\
acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"\
acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5462962962962963,\n \"acc_stderr\": 0.033953227263757976,\n \"\
acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.033953227263757976\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8529411764705882,\n \"acc_stderr\": 0.024857478080250458,\n \"\
acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.024857478080250458\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.8016877637130801,\n \"acc_stderr\": 0.025955020841621112,\n \
\ \"acc_norm\": 0.8016877637130801,\n \"acc_norm_stderr\": 0.025955020841621112\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n\
\ \"acc_stderr\": 0.030769352008229143,\n \"acc_norm\": 0.6995515695067265,\n\
\ \"acc_norm_stderr\": 0.030769352008229143\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.8244274809160306,\n \"acc_stderr\": 0.03336820338476074,\n\
\ \"acc_norm\": 0.8244274809160306,\n \"acc_norm_stderr\": 0.03336820338476074\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990946,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990946\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n\
\ \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n\
\ \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n\
\ \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n\
\ \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n\
\ \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n\
\ \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n\
\ \"acc_stderr\": 0.020588491316092382,\n \"acc_norm\": 0.8888888888888888,\n\
\ \"acc_norm_stderr\": 0.020588491316092382\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \
\ \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8263090676883781,\n\
\ \"acc_stderr\": 0.01354741565866226,\n \"acc_norm\": 0.8263090676883781,\n\
\ \"acc_norm_stderr\": 0.01354741565866226\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.023948512905468365,\n\
\ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.023948512905468365\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4569832402234637,\n\
\ \"acc_stderr\": 0.01666049858050917,\n \"acc_norm\": 0.4569832402234637,\n\
\ \"acc_norm_stderr\": 0.01666049858050917\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.02495418432487991,\n\
\ \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.02495418432487991\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7234726688102894,\n\
\ \"acc_stderr\": 0.02540383297817961,\n \"acc_norm\": 0.7234726688102894,\n\
\ \"acc_norm_stderr\": 0.02540383297817961\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7283950617283951,\n \"acc_stderr\": 0.02474862449053737,\n\
\ \"acc_norm\": 0.7283950617283951,\n \"acc_norm_stderr\": 0.02474862449053737\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4787234042553192,\n \"acc_stderr\": 0.029800481645628693,\n \
\ \"acc_norm\": 0.4787234042553192,\n \"acc_norm_stderr\": 0.029800481645628693\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4680573663624511,\n\
\ \"acc_stderr\": 0.012744149704869649,\n \"acc_norm\": 0.4680573663624511,\n\
\ \"acc_norm_stderr\": 0.012744149704869649\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.7022058823529411,\n \"acc_stderr\": 0.02777829870154544,\n\
\ \"acc_norm\": 0.7022058823529411,\n \"acc_norm_stderr\": 0.02777829870154544\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6633986928104575,\n \"acc_stderr\": 0.019117213911495148,\n \
\ \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.019117213911495148\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.028795185574291293,\n\
\ \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.028795185574291293\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454115,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454115\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n\
\ \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.572827417380661,\n\
\ \"mc1_stderr\": 0.017316834410963915,\n \"mc2\": 0.7102141321993041,\n\
\ \"mc2_stderr\": 0.015005749746417735\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8066298342541437,\n \"acc_stderr\": 0.011099796645920533\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.624715693707354,\n \
\ \"acc_stderr\": 0.013337170545742925\n }\n}\n```"
repo_url: https://huggingface.co/Neuronovo/neuronovo-7B-v0.2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|arc:challenge|25_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|gsm8k|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hellaswag|10_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T03-10-43.608227.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-06T03-10-43.608227.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- '**/details_harness|winogrande|5_2024-01-06T03-10-43.608227.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-06T03-10-43.608227.parquet'
- config_name: results
data_files:
- split: 2024_01_06T03_10_43.608227
path:
- results_2024-01-06T03-10-43.608227.parquet
- split: latest
path:
- results_2024-01-06T03-10-43.608227.parquet
---
# Dataset Card for Evaluation run of Neuronovo/neuronovo-7B-v0.2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Neuronovo/neuronovo-7B-v0.2](https://huggingface.co/Neuronovo/neuronovo-7B-v0.2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Neuronovo__neuronovo-7B-v0.2",
	"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-01-06T03:10:43.608227](https://huggingface.co/datasets/open-llm-leaderboard/details_Neuronovo__neuronovo-7B-v0.2/blob/main/results_2024-01-06T03-10-43.608227.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each of them in the "results" configuration and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6556420257543674,
"acc_stderr": 0.03196441496112865,
"acc_norm": 0.6567576467204072,
"acc_norm_stderr": 0.03260699328743241,
"mc1": 0.572827417380661,
"mc1_stderr": 0.017316834410963915,
"mc2": 0.7102141321993041,
"mc2_stderr": 0.015005749746417735
},
"harness|arc:challenge|25": {
"acc": 0.7098976109215017,
"acc_stderr": 0.013261573677520767,
"acc_norm": 0.7303754266211604,
"acc_norm_stderr": 0.012968040686869148
},
"harness|hellaswag|10": {
"acc": 0.7177853017327226,
"acc_stderr": 0.0044915745394418834,
"acc_norm": 0.8831905994821748,
"acc_norm_stderr": 0.003205366051421362
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6447368421052632,
"acc_stderr": 0.03894734487013317,
"acc_norm": 0.6447368421052632,
"acc_norm_stderr": 0.03894734487013317
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.720754716981132,
"acc_stderr": 0.027611163402399715,
"acc_norm": 0.720754716981132,
"acc_norm_stderr": 0.027611163402399715
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6994219653179191,
"acc_stderr": 0.03496101481191179,
"acc_norm": 0.6994219653179191,
"acc_norm_stderr": 0.03496101481191179
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.041633319989322626,
"acc_norm": 0.78,
"acc_norm_stderr": 0.041633319989322626
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5872340425531914,
"acc_stderr": 0.03218471141400352,
"acc_norm": 0.5872340425531914,
"acc_norm_stderr": 0.03218471141400352
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.025487187147859375,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.025487187147859375
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7838709677419354,
"acc_stderr": 0.02341529343356853,
"acc_norm": 0.7838709677419354,
"acc_norm_stderr": 0.02341529343356853
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.032568666616811015,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.032568666616811015
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386414,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386414
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657262,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657262
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465066,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465066
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6890756302521008,
"acc_stderr": 0.030066761582977934,
"acc_norm": 0.6890756302521008,
"acc_norm_stderr": 0.030066761582977934
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.033953227263757976,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.033953227263757976
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.024857478080250458,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.024857478080250458
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.025955020841621112,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.025955020841621112
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.030769352008229143,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.030769352008229143
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8244274809160306,
"acc_stderr": 0.03336820338476074,
"acc_norm": 0.8244274809160306,
"acc_norm_stderr": 0.03336820338476074
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.03640118271990946,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.03640118271990946
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7852760736196319,
"acc_stderr": 0.032262193772867744,
"acc_norm": 0.7852760736196319,
"acc_norm_stderr": 0.032262193772867744
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092382,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092382
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8263090676883781,
"acc_stderr": 0.01354741565866226,
"acc_norm": 0.8263090676883781,
"acc_norm_stderr": 0.01354741565866226
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.023948512905468365,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.023948512905468365
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4569832402234637,
"acc_stderr": 0.01666049858050917,
"acc_norm": 0.4569832402234637,
"acc_norm_stderr": 0.01666049858050917
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.02495418432487991,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.02495418432487991
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.02540383297817961,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.02540383297817961
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7283950617283951,
"acc_stderr": 0.02474862449053737,
"acc_norm": 0.7283950617283951,
"acc_norm_stderr": 0.02474862449053737
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4787234042553192,
"acc_stderr": 0.029800481645628693,
"acc_norm": 0.4787234042553192,
"acc_norm_stderr": 0.029800481645628693
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4680573663624511,
"acc_stderr": 0.012744149704869649,
"acc_norm": 0.4680573663624511,
"acc_norm_stderr": 0.012744149704869649
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7022058823529411,
"acc_stderr": 0.02777829870154544,
"acc_norm": 0.7022058823529411,
"acc_norm_stderr": 0.02777829870154544
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.019117213911495148,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.019117213911495148
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.028795185574291293,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.028795185574291293
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454115,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454115
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.572827417380661,
"mc1_stderr": 0.017316834410963915,
"mc2": 0.7102141321993041,
"mc2_stderr": 0.015005749746417735
},
"harness|winogrande|5": {
"acc": 0.8066298342541437,
"acc_stderr": 0.011099796645920533
},
"harness|gsm8k|5": {
"acc": 0.624715693707354,
"acc_stderr": 0.013337170545742925
}
}
```
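Once downloaded, the results JSON above can be read programmatically to pull out individual task scores. The snippet below is a minimal sketch using a trimmed-down copy of the structure (the metric names match the results block above; in practice the dict would come from `json.load()` on the downloaded results file):

```python
import json

# Trimmed-down copy of the results JSON shown above; in practice this
# would be loaded with json.load() from the results_*.json file.
results = {
    "all": {"acc": 0.6556420257543674, "acc_norm": 0.6567576467204072},
    "harness|winogrande|5": {"acc": 0.8066298342541437},
    "harness|gsm8k|5": {"acc": 0.624715693707354},
}

# Collect per-task accuracies, skipping the aggregate "all" entry.
task_acc = {task: vals["acc"] for task, vals in results.items() if task != "all"}
print(task_acc["harness|gsm8k|5"])  # 0.624715693707354
```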
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
trolllemon/dogs-test | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: test
num_bytes: 1439101.0
num_examples: 60
download_size: 1440742
dataset_size: 1439101.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
sethapun/arithmetic_2md_1to5 | ---
dataset_info:
features:
- name: expression
dtype: string
- name: answer
dtype: float64
- name: label
dtype:
class_label:
names:
'0': 'false'
'1': 'true'
splits:
- name: train
num_bytes: 54000
num_examples: 2000
- name: validation
num_bytes: 10800
num_examples: 400
download_size: 9908
dataset_size: 64800
---
# Dataset Card for "arithmetic_2md_1to5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
darrow-ai/USClassActionOutcomes_ExpertsAnnotations | ---
license: gpl-3.0
---
## Dataset Description
- **Homepage:** https://www.darrow.ai/
- **Repository:** https://github.com/darrow-labs/ClassActionPrediction
- **Paper:** https://arxiv.org/abs/2211.00582
- **Leaderboard:** N/A
- **Point of Contact:** [Gila Hayat](mailto:gila@darrow.ai)
### Dataset Summary
USClassActions is an English dataset of 200 complaints from the US Federal Court, each paired with its binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset to promote robustness and fairness studies in the critical area of legal NLP. The data was annotated using Darrow.ai's proprietary tool.
### Data Instances
```python
from datasets import load_dataset
dataset = load_dataset('darrow-ai/USClassActionOutcomes_ExpertsAnnotations')
```
### Data Fields
`id`: (**int**) a unique identifier of the document \
`origin_label`: (**str**) the outcome of the case \
`target_text`: (**str**) the facts of the case \
`annotator_prediction`: (**str**) the annotator's prediction of the case outcome based on the target_text \
`annotator_confidence`: (**str**) the annotator's level of confidence in the outcome prediction
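As a minimal sketch of how these fields relate (the records below are invented toy rows, not real data from the dataset), the expert predictions can be compared against the true outcomes to measure annotator accuracy:

```python
# Toy records mimicking the documented schema (all values are invented for illustration).
records = [
    {"id": 1, "origin_label": "Win",  "target_text": "facts A",
     "annotator_prediction": "Win", "annotator_confidence": "high"},
    {"id": 2, "origin_label": "Lose", "target_text": "facts B",
     "annotator_prediction": "Win", "annotator_confidence": "low"},
    {"id": 3, "origin_label": "Win",  "target_text": "facts C",
     "annotator_prediction": "Win", "annotator_confidence": "medium"},
]

# Fraction of cases where the expert's prediction matched the real outcome.
matches = sum(r["annotator_prediction"] == r["origin_label"] for r in records)
accuracy = matches / len(records)
print(round(accuracy, 2))
```

The same loop applied to the loaded dataset gives a baseline for how well human experts anticipate case outcomes from the facts alone.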
### Curation Rationale
The dataset was curated by Darrow.ai (2022).
### Citation Information
*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*
*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*
*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*
```
@InProceedings{darrow-niklaus-2022-uscp,
author = {Semo, Gil
and Bernsohn, Dor
and Hagag, Ben
and Hayat, Gila
and Niklaus, Joel},
title = {ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US},
booktitle = {Proceedings of the 2022 Natural Legal Language Processing Workshop},
year = {2022},
location = {Abu Dhabi},
}
```
|
fathyshalab/massive_general-de-DE | ---
dataset_info:
features:
- name: id
dtype: string
- name: locale
dtype: string
- name: partition
dtype: string
- name: scenario
dtype:
class_label:
names:
'0': social
'1': transport
'2': calendar
'3': play
'4': news
'5': datetime
'6': recommendation
'7': email
'8': iot
'9': general
'10': audio
'11': lists
'12': qa
'13': cooking
'14': takeaway
'15': music
'16': alarm
'17': weather
- name: intent
dtype:
class_label:
names:
'0': datetime_query
'1': iot_hue_lightchange
'2': transport_ticket
'3': takeaway_query
'4': qa_stock
'5': general_greet
'6': recommendation_events
'7': music_dislikeness
'8': iot_wemo_off
'9': cooking_recipe
'10': qa_currency
'11': transport_traffic
'12': general_quirky
'13': weather_query
'14': audio_volume_up
'15': email_addcontact
'16': takeaway_order
'17': email_querycontact
'18': iot_hue_lightup
'19': recommendation_locations
'20': play_audiobook
'21': lists_createoradd
'22': news_query
'23': alarm_query
'24': iot_wemo_on
'25': general_joke
'26': qa_definition
'27': social_query
'28': music_settings
'29': audio_volume_other
'30': calendar_remove
'31': iot_hue_lightdim
'32': calendar_query
'33': email_sendemail
'34': iot_cleaning
'35': audio_volume_down
'36': play_radio
'37': cooking_query
'38': datetime_convert
'39': qa_maths
'40': iot_hue_lightoff
'41': iot_hue_lighton
'42': transport_query
'43': music_likeness
'44': email_query
'45': play_music
'46': audio_volume_mute
'47': social_post
'48': alarm_set
'49': qa_factoid
'50': calendar_set
'51': play_game
'52': alarm_remove
'53': lists_remove
'54': transport_taxi
'55': recommendation_movies
'56': iot_coffee
'57': music_query
'58': play_podcasts
'59': lists_query
- name: text
dtype: string
- name: annot_utt
dtype: string
- name: worker_id
dtype: string
- name: slot_method
sequence:
- name: slot
dtype: string
- name: method
dtype: string
- name: judgments
sequence:
- name: worker_id
dtype: string
- name: intent_score
dtype: int8
- name: slots_score
dtype: int8
- name: grammar_score
dtype: int8
- name: spelling_score
dtype: int8
- name: language_identification
dtype: string
- name: label_name
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 171110
num_examples: 652
- name: validation
num_bytes: 31311
num_examples: 122
- name: test
num_bytes: 49862
num_examples: 189
download_size: 90317
dataset_size: 252283
---
# Dataset Card for "massive_general-de-DE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_lgaalves__llama-2-13b-hf-platypus | ---
pretty_name: Evaluation run of lgaalves/llama-2-13b-hf-platypus
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [lgaalves/llama-2-13b-hf-platypus](https://huggingface.co/lgaalves/llama-2-13b-hf-platypus)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 3 run(s). Each run can be\
  \ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split always points to the latest results.\n\
  \nAn additional configuration \"results\" stores all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_lgaalves__llama-2-13b-hf-platypus\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-10-28T02:33:59.939371](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__llama-2-13b-hf-platypus/blob/main/results_2023-10-28T02-33-59.939371.json)(note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0017827181208053692,\n\
\ \"em_stderr\": 0.00043200973460388544,\n \"f1\": 0.05985213926174496,\n\
\ \"f1_stderr\": 0.0013641672120704657,\n \"acc\": 0.4325617395685546,\n\
\ \"acc_stderr\": 0.009923090021448928\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.0017827181208053692,\n \"em_stderr\": 0.00043200973460388544,\n\
\ \"f1\": 0.05985213926174496,\n \"f1_stderr\": 0.0013641672120704657\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09401061410159212,\n \
\ \"acc_stderr\": 0.00803881981887246\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.771112865035517,\n \"acc_stderr\": 0.011807360224025398\n\
\ }\n}\n```"
repo_url: https://huggingface.co/lgaalves/llama-2-13b-hf-platypus
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|arc:challenge|25_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_28T00_17_42.072889
path:
- '**/details_harness|drop|3_2023-10-28T00-17-42.072889.parquet'
- split: 2023_10_28T02_33_59.939371
path:
- '**/details_harness|drop|3_2023-10-28T02-33-59.939371.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-28T02-33-59.939371.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_28T00_17_42.072889
path:
- '**/details_harness|gsm8k|5_2023-10-28T00-17-42.072889.parquet'
- split: 2023_10_28T02_33_59.939371
path:
- '**/details_harness|gsm8k|5_2023-10-28T02-33-59.939371.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-28T02-33-59.939371.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hellaswag|10_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-18T14-15-46.670153.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-18T14-15-46.670153.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-18T14-15-46.670153.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_28T00_17_42.072889
path:
- '**/details_harness|winogrande|5_2023-10-28T00-17-42.072889.parquet'
- split: 2023_10_28T02_33_59.939371
path:
- '**/details_harness|winogrande|5_2023-10-28T02-33-59.939371.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-28T02-33-59.939371.parquet'
- config_name: results
data_files:
- split: 2023_09_18T14_15_46.670153
path:
- results_2023-09-18T14-15-46.670153.parquet
- split: 2023_10_28T00_17_42.072889
path:
- results_2023-10-28T00-17-42.072889.parquet
- split: 2023_10_28T02_33_59.939371
path:
- results_2023-10-28T02-33-59.939371.parquet
- split: latest
path:
- results_2023-10-28T02-33-59.939371.parquet
---
# Dataset Card for Evaluation run of lgaalves/llama-2-13b-hf-platypus
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/lgaalves/llama-2-13b-hf-platypus
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [lgaalves/llama-2-13b-hf-platypus](https://huggingface.co/lgaalves/llama-2-13b-hf-platypus) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_lgaalves__llama-2-13b-hf-platypus",
"harness_winogrande_5",
split="train")
```
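The "latest" split above is an alias for the most recent timestamped run. A minimal sketch of one way that resolution can work, assuming the underscore-separated timestamp names sort chronologically when compared as plain strings (the names below are the run timestamps from this card):

```python
# Hedged sketch: resolving the "latest" split alias from timestamped
# split names. Assumption: the ISO-like names sort lexicographically
# in the same order as chronologically.
splits = [
    "2023_09_18T14_15_46.670153",
    "2023_10_28T00_17_42.072889",
    "2023_10_28T02_33_59.939371",
]
latest = max(splits)  # string order == chronological order for this format
print(latest)  # → 2023_10_28T02_33_59.939371
```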
## Latest results
These are the [latest results from run 2023-10-28T02:33:59.939371](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__llama-2-13b-hf-platypus/blob/main/results_2023-10-28T02-33-59.939371.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388544,
"f1": 0.05985213926174496,
"f1_stderr": 0.0013641672120704657,
"acc": 0.4325617395685546,
"acc_stderr": 0.009923090021448928
},
"harness|drop|3": {
"em": 0.0017827181208053692,
"em_stderr": 0.00043200973460388544,
"f1": 0.05985213926174496,
"f1_stderr": 0.0013641672120704657
},
"harness|gsm8k|5": {
"acc": 0.09401061410159212,
"acc_stderr": 0.00803881981887246
},
"harness|winogrande|5": {
"acc": 0.771112865035517,
"acc_stderr": 0.011807360224025398
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
esalesky/ted-im2im-de-en | ---
configs:
- config_name: en
default: true
data_files:
- split: train
path: "train-en.parquet"
- split: val
path: "val-en.parquet"
- split: test
path: "test-en.parquet"
features:
text:
dtype: string
id: null
_type: Value
filename:
dtype: string
id: null
_type: Value
image:
decode: true
id: null
_type: Image
- config_name: de
data_files:
- split: train
path: "train-de.parquet"
- split: val
path: "val-de.parquet"
- split: test
path: "test-de.parquet"
features:
text:
dtype: string
id: null
_type: Value
filename:
dtype: string
id: null
_type: Value
image:
decode: true
id: null
_type: Image
- config_name: deen
data_files:
- split: train
path: "train-deen.parquet"
- split: val
path: "val-deen.parquet"
- split: test
path: "test-deen.parquet"
features:
text:
dtype: string
id: null
_type: Value
filename:
dtype: string
id: null
_type: Value
image:
decode: true
id: null
_type: Image
- config_name: ende
data_files:
- split: train
path: "train-ende.parquet"
- split: val
path: "val-ende.parquet"
- split: test
path: "test-ende.parquet"
features:
text:
dtype: string
id: null
_type: Value
filename:
dtype: string
id: null
_type: Value
image:
decode: true
id: null
_type: Image
---
|
kaleemWaheed/twitter_dataset_1713069162 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 11896
num_examples: 26
download_size: 8993
dataset_size: 11896
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Telugu-LLM-Labs/assamese_alpaca_yahma_cleaned_filtered | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: assamese_instruction
dtype: string
- name: assamese_input
dtype: string
- name: assamese_output
dtype: string
splits:
- name: train
num_bytes: 106035141
num_examples: 28910
download_size: 45600326
dataset_size: 106035141
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ylacombe/google-argentinian-spanish | ---
dataset_info:
- config_name: female
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 1928460472.968
num_examples: 3921
download_size: 1625565296
dataset_size: 1928460472.968
- config_name: male
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 844151626.352
num_examples: 1818
download_size: 707569029
dataset_size: 844151626.352
configs:
- config_name: female
data_files:
- split: train
path: female/train-*
- config_name: male
data_files:
- split: train
path: male/train-*
---
# Dataset Card for "google-argentinian-spanish"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
flizzywine/A-Share_Stock_Market2020-2022 | ---
license: apache-2.0
---
|
Ediudo/modelo | ---
license: openrail++
---
|
yzhuang/autotree_automl_heloc_sgosdt_l256_d3_sd0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 328560000
num_examples: 10000
- name: validation
num_bytes: 328560000
num_examples: 10000
download_size: 133253810
dataset_size: 657120000
---
# Dataset Card for "autotree_automl_heloc_sgosdt_l256_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xiaozeroone/pubmed_derived | ---
configs:
- config_name: default
data_files:
- split: pubmed
path: data/pubmed-*
- split: nonbiomedical
path: data/nonbiomedical-*
- split: counterfactual
path: data/counterfactual-*
- split: casual
path: data/casual-*
- split: rap
path: data/rap-*
dataset_info:
features:
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
- name: text
dtype: string
splits:
- name: pubmed
num_bytes: 1166668
num_examples: 1000
- name: nonbiomedical
num_bytes: 1141909
num_examples: 1000
- name: counterfactual
num_bytes: 1179347
num_examples: 991
- name: casual
num_bytes: 1205949
num_examples: 1000
- name: rap
num_bytes: 1252260
num_examples: 1000
download_size: 3357032
dataset_size: 5946133
language:
- en
---
# A corpus of rewritten pubmed abstracts
This corpus contains a 1k example subset from the [pubmed](https://huggingface.co/datasets/pubmed) corpus and various rewritten versions. Each rewritten version changes one aspect of the original text and keeps the other aspects unchanged as much as possible.
- **Paper:** [Dissecting learning and forgetting in language model finetuning](https://openreview.net/forum?id=tmsqb6WpLz)
Another corpus of rewritten general text is provided here: [c4_derived](https://huggingface.co/datasets/xiaozeroone/c4_derived)
### Data Splits
- pubmed: a 1k example subset from the original pubmed corpus
- nonbiomedical: main topic of the text changed to a nonbiomedical topic
- counterfactual: factual knowledge in the text replaced by incorrect facts
- casual: style of the text changed to a casual style
- rap: style of the text changed to a rap style
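As a rough illustration of the record shape implied by the feature schema in the YAML header (a `PubmedData` struct plus a plain `text` field) — all field values below are invented placeholders, not real corpus data:

```python
# Hypothetical example record, shaped after the declared features;
# every value here is an invented placeholder, not drawn from the dataset.
example = {
    "PubmedData": {
        "ArticleIdList": [{"ArticleId": ["12345678"]}],
        "PublicationStatus": "ppublish",
        "History": {"PubMedPubDate": [{"Year": 2001, "Month": 1, "Day": 15}]},
        "ReferenceList": [{"Citation": ["A cited work"], "CitationId": [1]}],
    },
    "text": "Placeholder abstract text.",
}
print(sorted(example.keys()))  # → ['PubmedData', 'text']
```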
## Dataset Creation
Text is generated by ChatGPT with corresponding prompts. Refer to the paper for the instructions used to generate the text in each derived subset.
Please check the terms and conditions of pubmed data [here](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html).
### Citation Information
```
@inproceedings{
zhang2024dissecting,
title={Dissecting learning and forgetting in language model finetuning},
author={Xiao Zhang and Ji Wu},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=tmsqb6WpLz}
}
``` |
distilled-from-one-sec-cv12/chunk_261 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 946294612
num_examples: 184391
download_size: 966140031
dataset_size: 946294612
---
# Dataset Card for "chunk_261"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
sdadasfgdfgfdg/Torajo_dataset | ---
license: openrail
---
|
chr_en | ---
annotations_creators:
- expert-generated
- found
- no-annotation
language_creators:
- found
language:
- chr
- en
license:
- other
multilinguality:
- monolingual
- multilingual
- translation
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
- translation
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: chren
config_names:
- monolingual
- monolingual_raw
- parallel
- parallel_raw
dataset_info:
- config_name: monolingual
features:
- name: sentence
dtype: string
splits:
- name: chr
num_bytes: 882824
num_examples: 5210
- name: en5000
num_bytes: 615275
num_examples: 5000
- name: en10000
num_bytes: 1211605
num_examples: 10000
- name: en20000
num_bytes: 2432298
num_examples: 20000
- name: en50000
num_bytes: 6065580
num_examples: 49999
- name: en100000
num_bytes: 12130164
num_examples: 100000
download_size: 16967664
dataset_size: 23337746
- config_name: monolingual_raw
features:
- name: text_sentence
dtype: string
- name: text_title
dtype: string
- name: speaker
dtype: string
- name: date
dtype: int32
- name: type
dtype: string
- name: dialect
dtype: string
splits:
- name: full
num_bytes: 1210056
num_examples: 5210
download_size: 410646
dataset_size: 1210056
- config_name: parallel
features:
- name: sentence_pair
dtype:
translation:
languages:
- en
- chr
splits:
- name: train
num_bytes: 3089562
num_examples: 11639
- name: dev
num_bytes: 260401
num_examples: 1000
- name: out_dev
num_bytes: 78126
num_examples: 256
- name: test
num_bytes: 264595
num_examples: 1000
- name: out_test
num_bytes: 80959
num_examples: 256
download_size: 2143266
dataset_size: 3773643
- config_name: parallel_raw
features:
- name: line_number
dtype: string
- name: sentence_pair
dtype:
translation:
languages:
- en
- chr
- name: text_title
dtype: string
- name: speaker
dtype: string
- name: date
dtype: int32
- name: type
dtype: string
- name: dialect
dtype: string
splits:
- name: full
num_bytes: 5010734
num_examples: 14151
download_size: 2018726
dataset_size: 5010734
configs:
- config_name: monolingual
data_files:
- split: chr
path: monolingual/chr-*
- split: en5000
path: monolingual/en5000-*
- split: en10000
path: monolingual/en10000-*
- split: en20000
path: monolingual/en20000-*
- split: en50000
path: monolingual/en50000-*
- split: en100000
path: monolingual/en100000-*
- config_name: monolingual_raw
data_files:
- split: full
path: monolingual_raw/full-*
- config_name: parallel
data_files:
- split: train
path: parallel/train-*
- split: dev
path: parallel/dev-*
- split: out_dev
path: parallel/out_dev-*
- split: test
path: parallel/test-*
- split: out_test
path: parallel/out_test-*
default: true
- config_name: parallel_raw
data_files:
- split: full
path: parallel_raw/full-*
---
# Dataset Card for ChrEn
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github repository for ChrEn](https://github.com/ZhangShiyue/ChrEn)
- **Paper:** [ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization](https://arxiv.org/abs/2010.04791)
- **Point of Contact:** [benfrey@email.unc.edu](benfrey@email.unc.edu)
### Dataset Summary
ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English.
ChrEn is extremely low-resource, containing 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation.
ChrEn also contains 5k monolingual Cherokee sentences to enable semi-supervised learning.
### Supported Tasks and Leaderboards
The dataset is intended to be used for `machine-translation` between English (`en`) and Cherokee (`chr`).
### Languages
The dataset contains English (`en`) and Cherokee (`chr`) text. The data encompasses both existing dialects of Cherokee: the Overhill dialect, mostly spoken in Oklahoma (OK), and the Middle dialect, mostly used in North Carolina (NC).
## Dataset Structure
### Data Instances
[More Information Needed]
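As a hedged sketch, a single example in the `parallel` configuration should look roughly like the following, based on the `translation` feature schema in the YAML header (languages `en` and `chr`); the pair below is illustrative only and is not drawn from the corpus:

```python
# Illustrative sentence pair shaped after the "parallel" config's
# translation feature; not an actual corpus entry.
example = {"sentence_pair": {"en": "Hello", "chr": "ᎣᏏᏲ"}}
english = example["sentence_pair"]["en"]
cherokee = example["sentence_pair"]["chr"]
print(english, cherokee)
```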
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Many of the source texts were translations of English materials, which means that the Cherokee structures may not be 100% natural in terms of what a speaker might spontaneously produce. Each text was translated by people who speak Cherokee as their first language, which means there is a high probability of grammaticality. These data were originally available in PDF format. We applied Optical Character Recognition (OCR) via the Tesseract OCR engine to extract the Cherokee and English text.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The sentences were manually aligned by Dr. Benjamin Frey, a proficient second-language speaker of Cherokee, who also fixed the errors introduced by OCR. This process was time-consuming and took several months.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was gathered and annotated by Shiyue Zhang, Benjamin Frey, and Mohit Bansal at UNC Chapel Hill.
### Licensing Information
The copyright of the data belongs to the original book/article authors or translators (hence, the data is used for research purposes only; please contact Dr. Benjamin Frey for other copyright questions).
### Citation Information
```
@inproceedings{zhang2020chren,
title={ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization},
author={Zhang, Shiyue and Frey, Benjamin and Bansal, Mohit},
booktitle={EMNLP2020},
year={2020}
}
```
### Contributions
Thanks to [@yjernite](https://github.com/yjernite), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
ms903/visinger | ---
license: mit
---
|
WestonBond/YelpTokenized | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': 1 star
'1': 2 star
'2': 3 stars
'3': 4 stars
'4': 5 stars
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 2488411554
num_examples: 650000
- name: test
num_bytes: 191471188
num_examples: 50000
download_size: 565360957
dataset_size: 2679882742
---
# Dataset Card for "YelpTokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_uukuguy__zephyr-7b-alpha-dare-0.85 | ---
pretty_name: Evaluation run of uukuguy/zephyr-7b-alpha-dare-0.85
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [uukuguy/zephyr-7b-alpha-dare-0.85](https://huggingface.co/uukuguy/zephyr-7b-alpha-dare-0.85)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__zephyr-7b-alpha-dare-0.85\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-12-04T16:03:30.985884](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__zephyr-7b-alpha-dare-0.85/blob/main/results_2023-12-04T16-03-30.985884.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6405125012890543,\n\
\ \"acc_stderr\": 0.0322440782989453,\n \"acc_norm\": 0.6457442431541438,\n\
\ \"acc_norm_stderr\": 0.032888705588954556,\n \"mc1\": 0.29498164014687883,\n\
\ \"mc1_stderr\": 0.015964400965589657,\n \"mc2\": 0.4441404853042373,\n\
\ \"mc2_stderr\": 0.014450558004670922\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5767918088737202,\n \"acc_stderr\": 0.01443803622084803,\n\
\ \"acc_norm\": 0.6117747440273038,\n \"acc_norm_stderr\": 0.01424161420741405\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6387173869747063,\n\
\ \"acc_stderr\": 0.004793904922401889,\n \"acc_norm\": 0.8366859191396137,\n\
\ \"acc_norm_stderr\": 0.0036889652317335197\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411021,\n \
\ \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411021\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n\
\ \"acc_stderr\": 0.04153948404742398,\n \"acc_norm\": 0.6370370370370371,\n\
\ \"acc_norm_stderr\": 0.04153948404742398\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6644736842105263,\n \"acc_stderr\": 0.03842498559395268,\n\
\ \"acc_norm\": 0.6644736842105263,\n \"acc_norm_stderr\": 0.03842498559395268\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n\
\ \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \
\ \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6981132075471698,\n \"acc_stderr\": 0.02825420034443866,\n\
\ \"acc_norm\": 0.6981132075471698,\n \"acc_norm_stderr\": 0.02825420034443866\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n\
\ \"acc_stderr\": 0.037738099906869334,\n \"acc_norm\": 0.7152777777777778,\n\
\ \"acc_norm_stderr\": 0.037738099906869334\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n\
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n\
\ \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n\
\ \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n\
\ \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n\
\ \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5829787234042553,\n \"acc_stderr\": 0.03223276266711712,\n\
\ \"acc_norm\": 0.5829787234042553,\n \"acc_norm_stderr\": 0.03223276266711712\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n\
\ \"acc_stderr\": 0.04685473041907789,\n \"acc_norm\": 0.45614035087719296,\n\
\ \"acc_norm_stderr\": 0.04685473041907789\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n\
\ \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3941798941798942,\n \"acc_stderr\": 0.02516798233389414,\n \"\
acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.02516798233389414\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n\
\ \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n\
\ \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7741935483870968,\n\
\ \"acc_stderr\": 0.023785577884181015,\n \"acc_norm\": 0.7741935483870968,\n\
\ \"acc_norm_stderr\": 0.023785577884181015\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n\
\ \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\"\
: 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n\
\ \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586808,\n \"\
acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586808\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919443,\n\
\ \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919443\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6564102564102564,\n \"acc_stderr\": 0.024078696580635477,\n\
\ \"acc_norm\": 0.6564102564102564,\n \"acc_norm_stderr\": 0.024078696580635477\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251976,\n \
\ \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251976\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6638655462184874,\n \"acc_stderr\": 0.030684737115135356,\n\
\ \"acc_norm\": 0.6638655462184874,\n \"acc_norm_stderr\": 0.030684737115135356\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.33774834437086093,\n \"acc_stderr\": 0.038615575462551684,\n \"\
acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.038615575462551684\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.818348623853211,\n \"acc_stderr\": 0.016530617409266875,\n \"\
acc_norm\": 0.818348623853211,\n \"acc_norm_stderr\": 0.016530617409266875\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5787037037037037,\n \"acc_stderr\": 0.033674621388960775,\n \"\
acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.033674621388960775\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.803921568627451,\n \"acc_stderr\": 0.027865942286639318,\n \"\
acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639318\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7763713080168776,\n \"acc_stderr\": 0.027123298205229966,\n \
\ \"acc_norm\": 0.7763713080168776,\n \"acc_norm_stderr\": 0.027123298205229966\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n\
\ \"acc_stderr\": 0.03102441174057222,\n \"acc_norm\": 0.6905829596412556,\n\
\ \"acc_norm_stderr\": 0.03102441174057222\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.036412970813137296,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.036412970813137296\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"\
acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7791411042944786,\n \"acc_stderr\": 0.03259177392742178,\n\
\ \"acc_norm\": 0.7791411042944786,\n \"acc_norm_stderr\": 0.03259177392742178\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n\
\ \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8803418803418803,\n\
\ \"acc_stderr\": 0.021262719400406943,\n \"acc_norm\": 0.8803418803418803,\n\
\ \"acc_norm_stderr\": 0.021262719400406943\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \
\ \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8173690932311622,\n\
\ \"acc_stderr\": 0.013816335389973136,\n \"acc_norm\": 0.8173690932311622,\n\
\ \"acc_norm_stderr\": 0.013816335389973136\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7196531791907514,\n \"acc_stderr\": 0.024182427496577615,\n\
\ \"acc_norm\": 0.7196531791907514,\n \"acc_norm_stderr\": 0.024182427496577615\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3139664804469274,\n\
\ \"acc_stderr\": 0.015521923933523642,\n \"acc_norm\": 0.3139664804469274,\n\
\ \"acc_norm_stderr\": 0.015521923933523642\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n\
\ \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n\
\ \"acc_stderr\": 0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n\
\ \"acc_norm_stderr\": 0.02600330111788514\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7283950617283951,\n \"acc_stderr\": 0.02474862449053737,\n\
\ \"acc_norm\": 0.7283950617283951,\n \"acc_norm_stderr\": 0.02474862449053737\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \
\ \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4426336375488918,\n\
\ \"acc_stderr\": 0.012685906538206242,\n \"acc_norm\": 0.4426336375488918,\n\
\ \"acc_norm_stderr\": 0.012685906538206242\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.02833295951403121,\n\
\ \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.02833295951403121\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6683006535947712,\n \"acc_stderr\": 0.019047485239360378,\n \
\ \"acc_norm\": 0.6683006535947712,\n \"acc_norm_stderr\": 0.019047485239360378\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n\
\ \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n\
\ \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274648,\n\
\ \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274648\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \
\ \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n\
\ \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n\
\ \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727668,\n\
\ \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727668\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29498164014687883,\n\
\ \"mc1_stderr\": 0.015964400965589657,\n \"mc2\": 0.4441404853042373,\n\
\ \"mc2_stderr\": 0.014450558004670922\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7845303867403315,\n \"acc_stderr\": 0.011555295286059282\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.42077331311599697,\n \
\ \"acc_stderr\": 0.013598489497182837\n }\n}\n```"
repo_url: https://huggingface.co/uukuguy/zephyr-7b-alpha-dare-0.85
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|arc:challenge|25_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|gsm8k|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hellaswag|10_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T16-03-30.985884.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-04T16-03-30.985884.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- '**/details_harness|winogrande|5_2023-12-04T16-03-30.985884.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-04T16-03-30.985884.parquet'
- config_name: results
data_files:
- split: 2023_12_04T16_03_30.985884
path:
- results_2023-12-04T16-03-30.985884.parquet
- split: latest
path:
- results_2023-12-04T16-03-30.985884.parquet
---
# Dataset Card for Evaluation run of uukuguy/zephyr-7b-alpha-dare-0.85
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/zephyr-7b-alpha-dare-0.85
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/zephyr-7b-alpha-dare-0.85](https://huggingface.co/uukuguy/zephyr-7b-alpha-dare-0.85) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the most recent results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__zephyr-7b-alpha-dare-0.85",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2023-12-04T16:03:30.985884](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__zephyr-7b-alpha-dare-0.85/blob/main/results_2023-12-04T16-03-30.985884.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each of them in the "latest" split of the corresponding eval):
```json
{
"all": {
"acc": 0.6405125012890543,
"acc_stderr": 0.0322440782989453,
"acc_norm": 0.6457442431541438,
"acc_norm_stderr": 0.032888705588954556,
"mc1": 0.29498164014687883,
"mc1_stderr": 0.015964400965589657,
"mc2": 0.4441404853042373,
"mc2_stderr": 0.014450558004670922
},
"harness|arc:challenge|25": {
"acc": 0.5767918088737202,
"acc_stderr": 0.01443803622084803,
"acc_norm": 0.6117747440273038,
"acc_norm_stderr": 0.01424161420741405
},
"harness|hellaswag|10": {
"acc": 0.6387173869747063,
"acc_stderr": 0.004793904922401889,
"acc_norm": 0.8366859191396137,
"acc_norm_stderr": 0.0036889652317335197
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411021,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411021
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.04153948404742398,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.04153948404742398
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6644736842105263,
"acc_stderr": 0.03842498559395268,
"acc_norm": 0.6644736842105263,
"acc_norm_stderr": 0.03842498559395268
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6981132075471698,
"acc_stderr": 0.02825420034443866,
"acc_norm": 0.6981132075471698,
"acc_norm_stderr": 0.02825420034443866
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5829787234042553,
"acc_stderr": 0.03223276266711712,
"acc_norm": 0.5829787234042553,
"acc_norm_stderr": 0.03223276266711712
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.04685473041907789,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.04685473041907789
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.0411391498118926,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.0411391498118926
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3941798941798942,
"acc_stderr": 0.02516798233389414,
"acc_norm": 0.3941798941798942,
"acc_norm_stderr": 0.02516798233389414
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.04415438226743744,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.04415438226743744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7741935483870968,
"acc_stderr": 0.023785577884181015,
"acc_norm": 0.7741935483870968,
"acc_norm_stderr": 0.023785577884181015
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5270935960591133,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.5270935960591133,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586808,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586808
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919443,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6564102564102564,
"acc_stderr": 0.024078696580635477,
"acc_norm": 0.6564102564102564,
"acc_norm_stderr": 0.024078696580635477
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.029252905927251976,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.029252905927251976
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6638655462184874,
"acc_stderr": 0.030684737115135356,
"acc_norm": 0.6638655462184874,
"acc_norm_stderr": 0.030684737115135356
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.038615575462551684,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.038615575462551684
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.818348623853211,
"acc_stderr": 0.016530617409266875,
"acc_norm": 0.818348623853211,
"acc_norm_stderr": 0.016530617409266875
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5787037037037037,
"acc_stderr": 0.033674621388960775,
"acc_norm": 0.5787037037037037,
"acc_norm_stderr": 0.033674621388960775
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.027865942286639318,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.027865942286639318
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7763713080168776,
"acc_stderr": 0.027123298205229966,
"acc_norm": 0.7763713080168776,
"acc_norm_stderr": 0.027123298205229966
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057222,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057222
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.036412970813137296,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.036412970813137296
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7791411042944786,
"acc_stderr": 0.03259177392742178,
"acc_norm": 0.7791411042944786,
"acc_norm_stderr": 0.03259177392742178
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8803418803418803,
"acc_stderr": 0.021262719400406943,
"acc_norm": 0.8803418803418803,
"acc_norm_stderr": 0.021262719400406943
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8173690932311622,
"acc_stderr": 0.013816335389973136,
"acc_norm": 0.8173690932311622,
"acc_norm_stderr": 0.013816335389973136
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7196531791907514,
"acc_stderr": 0.024182427496577615,
"acc_norm": 0.7196531791907514,
"acc_norm_stderr": 0.024182427496577615
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3139664804469274,
"acc_stderr": 0.015521923933523642,
"acc_norm": 0.3139664804469274,
"acc_norm_stderr": 0.015521923933523642
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7581699346405228,
"acc_stderr": 0.024518195641879334,
"acc_norm": 0.7581699346405228,
"acc_norm_stderr": 0.024518195641879334
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.02600330111788514,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.02600330111788514
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7283950617283951,
"acc_stderr": 0.02474862449053737,
"acc_norm": 0.7283950617283951,
"acc_norm_stderr": 0.02474862449053737
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48226950354609927,
"acc_stderr": 0.02980873964223777,
"acc_norm": 0.48226950354609927,
"acc_norm_stderr": 0.02980873964223777
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4426336375488918,
"acc_stderr": 0.012685906538206242,
"acc_norm": 0.4426336375488918,
"acc_norm_stderr": 0.012685906538206242
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6683006535947712,
"acc_stderr": 0.019047485239360378,
"acc_norm": 0.6683006535947712,
"acc_norm_stderr": 0.019047485239360378
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.028666857790274648,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.028666857790274648
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454125,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454125
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727668,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727668
},
"harness|truthfulqa:mc|0": {
"mc1": 0.29498164014687883,
"mc1_stderr": 0.015964400965589657,
"mc2": 0.4441404853042373,
"mc2_stderr": 0.014450558004670922
},
"harness|winogrande|5": {
"acc": 0.7845303867403315,
"acc_stderr": 0.011555295286059282
},
"harness|gsm8k|5": {
"acc": 0.42077331311599697,
"acc_stderr": 0.013598489497182837
}
}
```
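For reference, the per-task entries above can be aggregated locally. The sketch below recomputes an MMLU (hendrycksTest) average over a truncated copy of three of the entries shown; it is an illustration of the JSON structure, not the leaderboard's exact aggregation code:

```python
# Three per-task accuracies copied from the results block above (truncated).
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.35},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6370370370370371},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.6644736842105263},
}

# Unweighted mean over the hendrycksTest subtasks present.
mmlu = {name: entry["acc"] for name, entry in results.items()
        if "hendrycksTest" in name}
mmlu_acc = sum(mmlu.values()) / len(mmlu)
print(round(mmlu_acc, 4))  # → 0.5505
```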
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
TeraflopAI/Arizona_Caselaw_Access_Project | ---
license: cc0-1.0
task_categories:
- text-generation
language:
- en
tags:
- legal
- law
- caselaw
pretty_name: Caselaw Access Project
size_categories:
- 1M<n<10M
---
<img src="https://huggingface.co/datasets/TeraflopAI/Caselaw_Access_project/resolve/main/cap.png" width="800">
# The Caselaw Access Project
In collaboration with Ravel Law, the Harvard Law Library digitized over 40 million pages of U.S. court decisions, comprising 6.7 million cases from the last 360 years, into a dataset that is widely accessible. Access a bulk download of the data through the Caselaw Access Project API (CAPAPI): https://case.law/caselaw/
Find more information about accessing state and federal written court decisions of common law through the bulk data service documentation here: https://case.law/docs/
Learn more about the Caselaw Access Project and all of the phenomenal work done by Jack Cushman, Greg Leppert, and Matteo Cargnelutti here: https://case.law/about/
Watch a live stream of the data release here: https://lil.law.harvard.edu/about/cap-celebration/stream
# Post-processing
Teraflop AI is excited to help support the Caselaw Access Project and the Harvard Library Innovation Lab in the release of over 6.6 million state and federal court decisions published throughout U.S. history. It is important to democratize fair access to data for the public, the legal community, and researchers. This is a processed and cleaned version of the original CAP data.
Digitization of these texts introduced OCR errors. We post-processed each text for model training to fix encoding, normalization, repetition, redundancy, parsing, and formatting issues.
Teraflop AI’s data engine allows for the massively parallel processing of web-scale datasets into cleaned text form. Our one-click deployment allowed us to easily split the computation between 1000s of nodes on our managed infrastructure.
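As a rough illustration of the kind of cleanup this involves (a toy sketch, not Teraflop AI's actual pipeline), a minimal normalization pass over OCR'd text might look like:

```python
import re
import unicodedata

def clean_ocr_text(text: str) -> str:
    """A toy cleaning pass, not the actual production pipeline."""
    # Normalize Unicode (ligatures, compatibility forms, non-breaking spaces).
    text = unicodedata.normalize("NFKC", text)
    # Collapse runs of spaces/tabs left over from layout extraction.
    text = re.sub(r"[ \t]+", " ", text)
    # Drop consecutively repeated lines (a common scanned-header artifact).
    cleaned, prev = [], None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped and stripped == prev:
            continue
        cleaned.append(stripped)
        prev = stripped
    return "\n".join(cleaned).strip()

print(clean_ocr_text("The court\u00a0held  that...\nPage 1\nPage 1"))
```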
### Nomic Atlas
Thank you to Nomic AI for providing us with Atlas research credits to store and visualize each of the jurisdictions in this dataset.
Access the Arizona jurisdiction map here: https://huggingface.co/spaces/TeraflopAI/Arizona_CAP
Nomic AI released nomic-embed-text-v1.5, an open-source, 8192 context text embedding model. The embeddings for the Atlas maps are generated by this model. You can find more information about the model release here: https://x.com/nomic_ai/status/1757782157374734665?s=20
The nomic-embed-text-v1.5 model is widely accessible on Hugging Face. The model card provides training, usage, and benchmark information about the model: https://huggingface.co/nomic-ai/nomic-embed-text-v1.5
# Licensing Information
The Caselaw Access Project dataset is licensed under the [CC0 License](https://creativecommons.org/public-domain/cc0/).
# Citation Information
```
The President and Fellows of Harvard University. "Caselaw Access Project." 2024, https://case.law/
```
```
@misc{ccap,
title={Cleaned Caselaw Access Project},
author={Enrico Shippole and Aran Komatsuzaki},
howpublished={\url{https://huggingface.co/datasets/TeraflopAI/Caselaw_Access_Project}},
year={2024}
}
``` |
leeseongmin451/black-outlined-essential-icons-1 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 874942.0
num_examples: 380
download_size: 670334
dataset_size: 874942.0
---
# Dataset Card for "black-outlined-essential-icons-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_stsb_completive_finish | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: dev
num_bytes: 16552
num_examples: 78
- name: test
num_bytes: 9164
num_examples: 43
- name: train
num_bytes: 42910
num_examples: 174
download_size: 55229
dataset_size: 68626
---
# Dataset Card for "MULTI_VALUE_stsb_completive_finish"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bipulparua/llm-lotr-test1 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 2196528.0
num_examples: 268
- name: test
num_bytes: 245880.0
num_examples: 30
download_size: 1128455
dataset_size: 2442408.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "llm-lotr-test1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zzunyang/LawQA_LawSee | ---
task_categories:
- conversational
language:
- ko
tags:
- legal
--- |
Codec-SUPERB/opensinger_extract_unit | ---
configs:
- config_name: default
data_files:
- split: academicodec_hifi_16k_320d
path: data/academicodec_hifi_16k_320d-*
- split: academicodec_hifi_16k_320d_large_uni
path: data/academicodec_hifi_16k_320d_large_uni-*
- split: academicodec_hifi_24k_320d
path: data/academicodec_hifi_24k_320d-*
- split: audiodec_24k_320d
path: data/audiodec_24k_320d-*
- split: dac_16k
path: data/dac_16k-*
- split: dac_24k
path: data/dac_24k-*
- split: dac_44k
path: data/dac_44k-*
- split: encodec_24k
path: data/encodec_24k-*
- split: funcodec_en_libritts_16k_gr1nq32ds320
path: data/funcodec_en_libritts_16k_gr1nq32ds320-*
- split: funcodec_en_libritts_16k_gr8nq32ds320
path: data/funcodec_en_libritts_16k_gr8nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds320
path: data/funcodec_en_libritts_16k_nq32ds320-*
- split: funcodec_en_libritts_16k_nq32ds640
path: data/funcodec_en_libritts_16k_nq32ds640-*
- split: funcodec_zh_en_16k_nq32ds320
path: data/funcodec_zh_en_16k_nq32ds320-*
- split: funcodec_zh_en_16k_nq32ds640
path: data/funcodec_zh_en_16k_nq32ds640-*
- split: speech_tokenizer_16k
path: data/speech_tokenizer_16k-*
dataset_info:
features:
- name: id
dtype: string
- name: unit
sequence:
sequence: int64
splits:
- name: academicodec_hifi_16k_320d
num_bytes: 300416100
num_examples: 43075
- name: academicodec_hifi_16k_320d_large_uni
num_bytes: 300416100
num_examples: 43075
- name: academicodec_hifi_24k_320d
num_bytes: 449971396
num_examples: 43075
- name: audiodec_24k_320d
num_bytes: 961193172
num_examples: 43075
- name: dac_16k
num_bytes: 1897940708
num_examples: 43075
- name: dac_24k
num_bytes: 5413713908
num_examples: 43075
- name: dac_44k
num_bytes: 1613103224
num_examples: 43075
- name: encodec_24k
num_bytes: 226324972
num_examples: 43075
- name: funcodec_en_libritts_16k_gr1nq32ds320
num_bytes: 2405254132
num_examples: 43075
- name: funcodec_en_libritts_16k_gr8nq32ds320
num_bytes: 2405254132
num_examples: 43075
- name: funcodec_en_libritts_16k_nq32ds320
num_bytes: 2405215988
num_examples: 43075
- name: funcodec_en_libritts_16k_nq32ds640
num_bytes: 1208818932
num_examples: 43075
- name: funcodec_zh_en_16k_nq32ds320
num_bytes: 2405215988
num_examples: 43075
- name: funcodec_zh_en_16k_nq32ds640
num_bytes: 2405215988
num_examples: 43075
- name: speech_tokenizer_16k
num_bytes: 602279828
num_examples: 43075
download_size: 3902403817
dataset_size: 25000334568
---
# Dataset Card for "opensinger_extract_unit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kamyarazimi/ConcreteCrackDataset | ---
license: other
--- |
Vitrola40/rhpcvocal | ---
license: openrail
---
|
CyberHarem/indra_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of indra/インドラ/因陀罗 (Arknights)
This is the dataset of indra/インドラ/因陀罗 (Arknights), containing 108 images and their tags.
The core tags of this character are `animal_ears, long_hair, tiger_ears, yellow_eyes, grey_hair, tiger_girl, white_hair, scar_on_face, tail, tiger_tail, multicolored_hair, breasts, hair_between_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 108 | 192.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/indra_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 108 | 159.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/indra_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 254 | 310.69 MiB | [Download](https://huggingface.co/datasets/CyberHarem/indra_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/indra_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | 1girl, black_gloves, black_pants, open_jacket, solo, long_sleeves, looking_at_viewer, chain, green_shirt, scar_on_nose, smile, navel, blue_jacket, holding, red_belt, black_choker, red_footwear |
| 1 | 10 |  |  |  |  |  | 1girl, collared_shirt, looking_at_viewer, smile, official_alternate_costume, solo, black_jacket, black_pants, single_braid, striped_necktie, purple_necktie, purple_shirt, simple_background, white_background, black_vest, open_jacket, belt, gloves, scar_on_nose |
| 2 | 6 |  |  |  |  |  | 1girl, blush, hetero, nipples, solo_focus, collarbone, completely_nude, navel, pussy, abs, dark-skinned_male, medium_breasts, penis, scar_on_nose, sex, sweat, interracial, lying, mosaic_censoring, multiple_boys, open_mouth, sitting, spread_legs, very_long_hair |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_gloves | black_pants | open_jacket | solo | long_sleeves | looking_at_viewer | chain | green_shirt | scar_on_nose | smile | navel | blue_jacket | holding | red_belt | black_choker | red_footwear | collared_shirt | official_alternate_costume | black_jacket | single_braid | striped_necktie | purple_necktie | purple_shirt | simple_background | white_background | black_vest | belt | gloves | blush | hetero | nipples | solo_focus | collarbone | completely_nude | pussy | abs | dark-skinned_male | medium_breasts | penis | sex | sweat | interracial | lying | mosaic_censoring | multiple_boys | open_mouth | sitting | spread_legs | very_long_hair |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------|:--------------|:-------|:---------------|:--------------------|:--------|:--------------|:---------------|:--------|:--------|:--------------|:----------|:-----------|:---------------|:---------------|:-----------------|:-----------------------------|:---------------|:---------------|:------------------|:-----------------|:---------------|:--------------------|:-------------------|:-------------|:-------|:---------|:--------|:---------|:----------|:-------------|:-------------|:------------------|:--------|:------|:--------------------|:-----------------|:--------|:------|:--------|:--------------|:--------|:-------------------|:----------------|:-------------|:----------|:--------------|:-----------------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 10 |  |  |  |  |  | X | | X | X | X | | X | | | X | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | | | | | | | | X | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
SEACrowd/idk_mrc | ---
tags:
- question-answering
language:
- ind
---
# idk_mrc
I(n)dontKnow-MRC (IDK-MRC) is an Indonesian Machine Reading Comprehension dataset that covers
answerable and unanswerable questions. Building on the existing answerable questions in TyDiQA,
the new unanswerable questions in IDK-MRC are generated using a question generation model and human-written questions.
Each paragraph in the dataset has a set of answerable and unanswerable questions with the corresponding answers.
Besides the IDK-MRC (idk_mrc) dataset, several baseline datasets are also provided:
1. Trans SQuAD (trans_squad): machine translated SQuAD 2.0 (Muis and Purwarianti, 2020)
2. TyDiQA (tydiqa): Indonesian answerable questions set from the TyDiQA-GoldP (Clark et al., 2020)
3. Model Gen (model_gen): TyDiQA + the unanswerable questions output from the question generation model
4. Human Filt (human_filt): the Model Gen dataset after filtering by human annotators
## Dataset Usage
Run `pip install nusacrowd` before loading the dataset through HuggingFace's `load_dataset`.
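Before the citation, a short sketch of what a record looks like may help. The field names below follow the SQuAD-2.0 style that MRC datasets commonly use and are assumptions, not the verified schema; an unanswerable question is simply one whose gold-answer list is empty:

```python
# Toy record in an assumed SQuAD-2.0-style layout (verify against the real schema).
record = {
    "context": "Jakarta adalah ibu kota Indonesia.",
    "qas": [
        {"question": "Apa ibu kota Indonesia?",
         "answers": [{"text": "Jakarta", "answer_start": 0}]},
        {"question": "Kapan Jakarta didirikan?", "answers": []},  # unanswerable
    ],
}

# Unanswerable questions are those with no gold answers.
unanswerable = [qa["question"] for qa in record["qas"] if not qa["answers"]]
print(unanswerable)
```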
## Citation
```
@misc{putri2022idk,
doi = {10.48550/ARXIV.2210.13778},
url = {https://arxiv.org/abs/2210.13778},
author = {Putri, Rifki Afina and Oh, Alice},
title = {IDK-MRC: Unanswerable Questions for Indonesian Machine Reading Comprehension},
publisher = {arXiv},
year = {2022}
}
```
## License
CC-BY-SA 4.0
## Homepage
[https://github.com/rifkiaputri/IDK-MRC](https://github.com/rifkiaputri/IDK-MRC)
### NusaCatalogue
For easy indexing and metadata: [https://indonlp.github.io/nusa-catalogue](https://indonlp.github.io/nusa-catalogue) |
ryanramos/vqa-with-coco-img-3 | ---
dataset_info:
features:
- name: license
dtype: int64
- name: file_name
dtype: string
- name: coco_url
dtype: string
- name: height
dtype: int64
- name: width
dtype: int64
- name: date_captured
dtype: string
- name: flickr_url
dtype: string
- name: captions
list:
- name: caption
dtype: string
- name: id
dtype: int64
- name: questions
list:
- name: answer_type
dtype: string
- name: answers
list:
- name: answer
dtype: string
- name: answer_confidence
dtype: string
- name: answer_id
dtype: int64
- name: image_id
dtype: int64
- name: multiple_choice_answer
dtype: string
- name: question
dtype: string
- name: question_id
dtype: int64
- name: question_type
dtype: string
- name: image_id
dtype: int64
- name: image
dtype: image
splits:
- name: train
num_bytes: 889819603.5
num_examples: 16500
download_size: 860459417
dataset_size: 889819603.5
---
# Dataset Card for "vqa-with-coco-img-3"
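The schema above nests per-image captions and VQA-style questions; as a sketch (on an illustrative record, not real data), the confident annotator answers for each question can be collected like this:

```python
# Illustrative record mirroring the card's nested schema.
example = {
    "image_id": 42,
    "captions": [{"caption": "A cat on a couch.", "id": 1}],
    "questions": [{
        "question": "What animal is this?",
        "question_type": "what animal",
        "answer_type": "other",
        "multiple_choice_answer": "cat",
        "answers": [
            {"answer": "cat", "answer_confidence": "yes", "answer_id": 1},
            {"answer": "kitten", "answer_confidence": "maybe", "answer_id": 2},
        ],
    }],
}

# Keep only answers the annotators were confident about.
confident = [a["answer"]
             for q in example["questions"]
             for a in q["answers"]
             if a["answer_confidence"] == "yes"]
print(confident)
```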
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
c4lliope/us-congress | ---
dataset_info:
features:
- name: key
dtype: string
- name: title
dtype: string
- name: summaries
struct:
- name: pagination
struct:
- name: count
dtype: int64
- name: request
struct:
- name: billNumber
dtype: string
- name: billType
dtype: string
- name: billUrl
dtype: string
- name: congress
dtype: string
- name: contentType
dtype: string
- name: format
dtype: string
- name: summaries
list:
- name: actionDate
dtype: string
- name: actionDesc
dtype: string
- name: text
dtype: string
- name: updateDate
dtype: string
- name: versionCode
dtype: string
- name: plaintext
dtype: string
- name: sponsor
dtype: string
- name: actions
struct:
- name: actions
list:
- name: actionCode
dtype: string
- name: actionDate
dtype: string
- name: actionTime
dtype: string
- name: calendarNumber
struct:
- name: calendar
dtype: string
- name: number
dtype: string
- name: committees
list:
- name: name
dtype: string
- name: systemCode
dtype: string
- name: url
dtype: string
- name: recordedVotes
list:
- name: chamber
dtype: string
- name: congress
dtype: int64
- name: date
dtype: string
- name: rollNumber
dtype: int64
- name: sessionNumber
dtype: int64
- name: url
dtype: string
- name: sourceSystem
struct:
- name: code
dtype: int64
- name: name
dtype: string
- name: text
dtype: string
- name: type
dtype: string
- name: pagination
struct:
- name: count
dtype: int64
- name: request
struct:
- name: billNumber
dtype: string
- name: billType
dtype: string
- name: billUrl
dtype: string
- name: congress
dtype: string
- name: contentType
dtype: string
- name: format
dtype: string
- name: amendments
struct:
- name: amendments
list:
- name: congress
dtype: int64
- name: description
dtype: string
- name: latestAction
struct:
- name: actionDate
dtype: string
- name: actionTime
dtype: string
- name: text
dtype: string
- name: number
dtype: string
- name: purpose
dtype: string
- name: type
dtype: string
- name: updateDate
dtype: string
- name: url
dtype: string
- name: pagination
struct:
- name: count
dtype: int64
- name: request
struct:
- name: billNumber
dtype: string
- name: billType
dtype: string
- name: billUrl
dtype: string
- name: congress
dtype: string
- name: contentType
dtype: string
- name: format
dtype: string
- name: committees
struct:
- name: committees
list:
- name: activities
list:
- name: date
dtype: string
- name: name
dtype: string
- name: chamber
dtype: string
- name: name
dtype: string
- name: subcommittees
list:
- name: activities
list:
- name: date
dtype: string
- name: name
dtype: string
- name: name
dtype: string
- name: systemCode
dtype: string
- name: url
dtype: string
- name: systemCode
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: request
struct:
- name: billNumber
dtype: string
- name: billType
dtype: string
- name: billUrl
dtype: string
- name: congress
dtype: string
- name: contentType
dtype: string
- name: format
dtype: string
- name: cosponsors
struct:
- name: cosponsors
list:
- name: bioguideId
dtype: string
- name: district
dtype: int64
- name: firstName
dtype: string
- name: fullName
dtype: string
- name: isOriginalCosponsor
dtype: bool
- name: lastName
dtype: string
- name: middleName
dtype: string
- name: party
dtype: string
- name: sponsorshipDate
dtype: string
- name: sponsorshipWithdrawnDate
dtype: string
- name: state
dtype: string
- name: url
dtype: string
- name: pagination
struct:
- name: count
dtype: int64
- name: countIncludingWithdrawnCosponsors
dtype: int64
- name: prev
dtype: string
- name: request
struct:
- name: billNumber
dtype: string
- name: billType
dtype: string
- name: billUrl
dtype: string
- name: congress
dtype: string
- name: contentType
dtype: string
- name: format
dtype: string
- name: index
struct:
- name: bill
struct:
- name: actions
struct:
- name: count
dtype: int64
- name: url
dtype: string
- name: amendments
struct:
- name: count
dtype: int64
- name: url
dtype: string
- name: cboCostEstimates
list:
- name: description
dtype: string
- name: pubDate
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: committeeReports
list:
- name: citation
dtype: string
- name: url
dtype: string
- name: committees
struct:
- name: count
dtype: int64
- name: url
dtype: string
- name: congress
dtype: int64
- name: constitutionalAuthorityStatementText
dtype: string
- name: cosponsors
struct:
- name: count
dtype: int64
- name: countIncludingWithdrawnCosponsors
dtype: int64
- name: url
dtype: string
- name: introducedDate
dtype: string
- name: latestAction
struct:
- name: actionDate
dtype: string
- name: actionTime
dtype: string
- name: text
dtype: string
- name: laws
list:
- name: number
dtype: string
- name: type
dtype: string
- name: number
dtype: string
- name: originChamber
dtype: string
- name: policyArea
struct:
- name: name
dtype: string
- name: relatedBills
struct:
- name: count
dtype: int64
- name: url
dtype: string
- name: sponsors
list:
- name: bioguideId
dtype: string
- name: district
dtype: int64
- name: firstName
dtype: string
- name: fullName
dtype: string
- name: isByRequest
dtype: string
- name: lastName
dtype: string
- name: middleName
dtype: string
- name: party
dtype: string
- name: state
dtype: string
- name: url
dtype: string
- name: subjects
struct:
- name: count
dtype: int64
- name: url
dtype: string
- name: summaries
struct:
- name: count
dtype: int64
- name: url
dtype: string
- name: textVersions
struct:
- name: count
dtype: int64
- name: url
dtype: string
- name: title
dtype: string
- name: titles
struct:
- name: count
dtype: int64
- name: url
dtype: string
- name: type
dtype: string
- name: updateDate
dtype: string
- name: updateDateIncludingText
dtype: string
- name: request
struct:
- name: billNumber
dtype: string
- name: billType
dtype: string
- name: congress
dtype: string
- name: contentType
dtype: string
- name: format
dtype: string
- name: relatedbills
struct:
- name: pagination
struct:
- name: count
dtype: int64
- name: relatedBills
list:
- name: congress
dtype: int64
- name: latestAction
struct:
- name: actionDate
dtype: string
- name: actionTime
dtype: string
- name: text
dtype: string
- name: number
dtype: int64
- name: relationshipDetails
list:
- name: identifiedBy
dtype: string
- name: type
dtype: string
- name: title
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: request
struct:
- name: billNumber
dtype: string
- name: billType
dtype: string
- name: billUrl
dtype: string
- name: congress
dtype: string
- name: contentType
dtype: string
- name: format
dtype: string
- name: subjects
struct:
- name: pagination
struct:
- name: count
dtype: int64
- name: request
struct:
- name: billNumber
dtype: string
- name: billType
dtype: string
- name: billUrl
dtype: string
- name: congress
dtype: string
- name: contentType
dtype: string
- name: format
dtype: string
- name: subjects
struct:
- name: legislativeSubjects
list:
- name: name
dtype: string
- name: policyArea
struct:
- name: name
dtype: string
- name: text
struct:
- name: pagination
struct:
- name: count
dtype: int64
- name: request
struct:
- name: billNumber
dtype: string
- name: billType
dtype: string
- name: billUrl
dtype: string
- name: congress
dtype: string
- name: contentType
dtype: string
- name: format
dtype: string
- name: textVersions
list:
- name: date
dtype: string
- name: formats
list:
- name: type
dtype: string
- name: url
dtype: string
- name: type
dtype: string
- name: titles
struct:
- name: pagination
struct:
- name: count
dtype: int64
- name: request
struct:
- name: billNumber
dtype: string
- name: billType
dtype: string
- name: billUrl
dtype: string
- name: congress
dtype: string
- name: contentType
dtype: string
- name: format
dtype: string
- name: titles
list:
- name: billTextVersionCode
dtype: string
- name: billTextVersionName
dtype: string
- name: chamberCode
dtype: string
- name: chamberName
dtype: string
- name: title
dtype: string
- name: titleType
dtype: string
splits:
- name: train
num_bytes: 42798980
num_examples: 6433
download_size: 6439766
dataset_size: 42798980
---
# Dataset Card for "us-congress"
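Pending a fuller card, here is a sketch of traversing the `summaries` struct from the schema above to find a bill's most recent summary (the row below is illustrative, not real data):

```python
# Illustrative row shaped like the `summaries` struct in the schema above.
row = {
    "key": "hr0-000",
    "title": "An example bill title",
    "summaries": {
        "pagination": {"count": 2},
        "summaries": [
            {"actionDate": "2023-01-09", "actionDesc": "Introduced in House",
             "text": "<p>First summary.</p>", "updateDate": "2023-01-10", "versionCode": "00"},
            {"actionDate": "2023-03-01", "actionDesc": "Passed House",
             "text": "<p>Later summary.</p>", "updateDate": "2023-03-02", "versionCode": "53"},
        ],
    },
}

# ISO dates sort lexicographically, so max() by actionDate gives the latest summary.
latest = max(row["summaries"]["summaries"], key=lambda s: s["actionDate"])
print(latest["actionDesc"])
```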
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/isayama_yomi_gareizero | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Isayama Yomi (Ga-Rei: Zero)
This is the dataset of Isayama Yomi (Ga-Rei: Zero), containing 248 images and their tags.
The core tags of this character are `black_hair, long_hair, bangs, blunt_bangs, purple_eyes, hime_cut`, which are pruned in this dataset.
Images are crawled from many sites (e.g. Danbooru, Pixiv, Zerochan); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([HuggingFace organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 248 | 201.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isayama_yomi_gareizero/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 248 | 151.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isayama_yomi_gareizero/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 488 | 271.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isayama_yomi_gareizero/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 248 | 201.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isayama_yomi_gareizero/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 488 | 349.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/isayama_yomi_gareizero/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/isayama_yomi_gareizero',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, parody, solo, black_serafuku, open_mouth, anime_coloring |
| 1 | 7 |  |  |  |  |  | 1girl, anime_coloring, parody, serafuku, solo |
| 2 | 5 |  |  |  |  |  | 1girl, anime_coloring, looking_at_viewer, serafuku, solo, parody, smile |
| 3 | 6 |  |  |  |  |  | 1girl, black_serafuku, solo, profile |
| 4 | 5 |  |  |  |  |  | 1girl, black_serafuku, solo, katana |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | parody | solo | black_serafuku | open_mouth | anime_coloring | serafuku | looking_at_viewer | smile | profile | katana |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-------|:-----------------|:-------------|:-----------------|:-----------|:--------------------|:--------|:----------|:---------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | | | | | |
| 1 | 7 |  |  |  |  |  | X | X | X | | | X | X | | | | |
| 2 | 5 |  |  |  |  |  | X | X | X | | | X | X | X | X | | |
| 3 | 6 |  |  |  |  |  | X | | X | X | | | | | | X | |
| 4 | 5 |  |  |  |  |  | X | | X | X | | | | | | | X |
|
vblagoje/haystack-pipelines | ---
license: apache-2.0
---
|
irds/clueweb12_touche-2022-task-2_expanded-doc-t5-query | ---
pretty_name: '`clueweb12/touche-2022-task-2/expanded-doc-t5-query`'
viewer: false
source_datasets: []
task_categories:
- text-retrieval
---
# Dataset Card for `clueweb12/touche-2022-task-2/expanded-doc-t5-query`
The `clueweb12/touche-2022-task-2/expanded-doc-t5-query` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12/touche-2022-task-2/expanded-doc-t5-query).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=868,655
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/clueweb12_touche-2022-task-2_expanded-doc-t5-query', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'chatnoir_url': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Bondarenko2022Touche,
address = {Berlin Heidelberg New York},
author = {Alexander Bondarenko and Maik Fr{\"o}be and Johannes Kiesel and Shahbaz Syed and Timon Gurcke and Meriem Beloucif and Alexander Panchenko and Chris Biemann and Benno Stein and Henning Wachsmuth and Martin Potthast and Matthias Hagen},
booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. 13th International Conference of the CLEF Association (CLEF 2022)},
editor = {Alberto Barr{\'o}n-Cede{\~n}o and Giovanni Da San Martino and Mirko Degli Esposti and Fabrizio Sebastiani and Craig Macdonald and Gabriella Pasi and Allan Hanbury and Martin Potthast and Guglielmo Faggioli and Nicola Ferro},
month = sep,
numpages = 29,
publisher = {Springer},
series = {Lecture Notes in Computer Science},
site = {Bologna, Italy},
title = {{Overview of Touch{\'e} 2022: Argument Retrieval}},
year = 2022
}
```
|
ppietro/catrinas | ---
license: afl-3.0
---
|
NobodyExistsOnTheInternet/Chem2800ctx | ---
license: mit
---
|
umm-maybe/gutenberg_english_pre1928 | ---
dataset_info:
features: []
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Sovietico157/Ogro | ---
license: openrail
---
|
joey234/mmlu-global_facts-neg | ---
dataset_info:
features:
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: question
dtype: string
splits:
- name: test
num_bytes: 17996
num_examples: 100
download_size: 11074
dataset_size: 17996
---
# Dataset Card for "mmlu-global_facts-neg"
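The `answer` feature is a class_label over the letters A-D, so it is stored as an integer; decoding it back to the letter and the corresponding choice is straightforward (the row below is illustrative, not an actual dataset item):

```python
# Label names in index order, as declared in the class_label above.
labels = ["A", "B", "C", "D"]

# Illustrative row (not an actual dataset item).
row = {
    "question": "Roughly how many member states does the UN have?",
    "choices": ["150", "193", "210", "250"],
    "answer": 1,
}

letter = labels[row["answer"]]
choice = row["choices"][row["answer"]]
print(letter, choice)
```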
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ineoApp/ds_factures | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: bboxes
sequence:
sequence: int64
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': numero facture
'2': fournisseur
'3': date facture
'4': date limite
'5': montant ht
'6': montant ttc
'7': tva
'8': prix tva
'9': addresse
'10': reference
'11': art1 designation
'12': art1 quantite
'13': art1 prix unit
'14': art1 tva
'15': art1 montant ht
'16': art2 designation
'17': art2 quantite
'18': art2 prix unit
'19': art2 tva
'20': art2 montant ht
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 14736563.333333334
num_examples: 14
- name: test
num_bytes: 4210446.666666667
num_examples: 4
download_size: 6308297
dataset_size: 18947010.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
JetBrains-Research/commit-chronicle | ---
license: other
language:
- code
- en
task_categories:
- text-generation
- summarization
tags:
- code
- commit_message_generation
pretty_name: CommitChronicle
size_categories:
- 1M<n<10M
dataset_info:
- config_name: default
features:
- name: author
dtype: int64
- name: date
dtype: string
- name: timezone
dtype: int64
- name: hash
dtype: string
- name: message
dtype: string
- name: mods
list:
- name: change_type
dtype: string
- name: old_path
dtype: string
- name: new_path
dtype: string
- name: diff
dtype: string
- name: language
dtype: string
- name: license
dtype: string
- name: repo
dtype: string
- name: original_message
dtype: string
splits:
- name: test
num_bytes: 5760117409
num_examples: 1486267
- name: train
num_bytes: 30084265848
num_examples: 7659458
- name: validation
num_bytes: 5905326070
num_examples: 1554042
download_size: 14168436205
dataset_size: 41749709327
- config_name: subset_cmg
features:
- name: author
dtype: int64
- name: date
dtype: string
- name: timezone
dtype: int64
- name: hash
dtype: string
- name: message
dtype: string
- name: mods
list:
- name: change_type
dtype: string
- name: old_path
dtype: string
- name: new_path
dtype: string
- name: diff
dtype: string
- name: language
dtype: string
- name: license
dtype: string
- name: repo
dtype: string
- name: original_message
dtype: string
splits:
- name: test
num_bytes: 772774959
num_examples: 204336
download_size: 258151047
dataset_size: 772774959
- config_name: subset_llm
features:
- name: author
dtype: int64
- name: date
dtype: string
- name: timezone
dtype: int64
- name: hash
dtype: string
- name: message
dtype: string
- name: mods
list:
- name: change_type
dtype: string
- name: old_path
dtype: string
- name: new_path
dtype: string
- name: diff
dtype: string
- name: language
dtype: string
- name: license
dtype: string
- name: repo
dtype: string
- name: original_message
dtype: string
splits:
- name: test
num_bytes: 15121048
num_examples: 4025
download_size: 5068039
dataset_size: 15121048
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- config_name: subset_cmg
data_files:
- split: test
path: subset_cmg/test-*
- config_name: subset_llm
data_files:
- split: test
path: subset_llm/test-*
---
# 📜 CommitChronicle 🔮
This is the dataset for commit message generation (and/or completion), introduced in the paper "From Commit Message Generation to History-Aware Commit Message Completion", ASE 2023.
Its key features:
* *large-scale and multilingual*: contains 10.7M commits from 11.9k GitHub repositories in 20 programming languages;
* *diverse*: avoids restrictive filtering on commit messages or commit diffs structure;
* *suitable for experiments with commit history*: provides metadata about commit authors and dates, and uses a by-project split.
## Dataset Creation
> 🔍 For further details, please refer to:
> * **Paper**: [https://arxiv.org/abs/2308.07655](https://arxiv.org/abs/2308.07655)
> * **Repository**: [https://github.com/JetBrains-Research/commit_message_generation](https://github.com/JetBrains-Research/commit_message_generation)
We used the [GitHub Search](https://seart-ghs.si.usi.ch/) tool and the official GitHub API to select relevant repositories with permissive licenses (Apache, BSD 3-clause, MIT).
On February 9th, 2023, we collected all commits made since 2017 from these repositories via [PyDriller](https://github.com/ishepard/pydriller).
Next, we extensively cleaned the data, including filtering outliers, dropping commits from bot authors, and dropping duplicates. Note: to avoid disclosing personal information, we replaced the commit authors' names and emails with unique identifiers.
## Dataset Structure
### Data Instances
Each data instance in the dataset is a commit. [A commit example](https://github.com/saridormi/commit_chronicle/commit/a7fb3b64184f0af5b08285cce14b9139baa94049) would look like the following:
```
{
'repo': 'saridormi/commit_chronicle',
'hash': 'a7fb3b64184f0af5b08285cce14b9139baa94049',
'author': 123,
'date': '05.07.2021 15:10:07',
'timezone': 0,
'license': 'MIT License',
'language': 'Jupyter Notebook',
'message': 'Add license badge to readme',
'original_message': 'Add license badge to readme',
'mods': [{'change_type': 'MODIFY',
'new_path': 'README.md',
'old_path': 'README.md',
'diff': '@@ -1,6 +1,6 @@\n'
' # Commits dataset\n'
' \n'
'-> :heavy_exclamation_mark: **TODO:** license\n'
'+\n'}],
}
```
### Data Fields
Each example has the following fields:
| **Field** | **Description** |
|:------------------:|:----------------------------------------:|
| `repo` | Commit repository. |
| `hash` | Commit hash. |
| `author` | Unique ID for the commit author. |
| `date` | Commit date (from author). |
| `timezone` | Commit timezone (from author). |
| `license` | Commit repository's license. |
| `language` | Commit repository's main language. |
| `message` | Commit message (after processing). |
| `original_message` | Commit message (without any processing). |
| `mods` | List of file modifications from commit. |
Each file modification has the following fields:
| **Field** | **Description** |
|:-------------:|:-------------------------------------------------------------------------------------------------:|
| `change_type` | Type of change to current file. One of: `ADD`, `COPY`, `RENAME`, `DELETE`, `MODIFY` or `UNKNOWN`. |
| `old_path` | Path to file before change (might be empty). |
| `new_path` | Path to file after change (might be empty). |
| `diff` | `git diff` for current file. |
### Data Splits
We provide the following configurations:
* `default`
* `train`: full training split (7.66M commits)
* `validation`: full validation split (1.55M commits)
* `test`: full test split (1.49M commits)
* `subset_cmg`
* `test`: test subset used for experiments with CMG approaches (204k commits)
* `subset_llm`
* `test`: test subset used for experiments with an LLM (4k commits)
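As a minimal sketch of consuming the `mods` structure (using a record shaped like the data-instance example above, not a live download), each file modification can be rendered into a compact header line, e.g. when building a model input:

```python
# Record shaped like the data-instance example earlier in this card.
commit = {
    "message": "Add license badge to readme",
    "mods": [{
        "change_type": "MODIFY",
        "old_path": "README.md",
        "new_path": "README.md",
        "diff": "@@ -1,6 +1,6 @@\n # Commits dataset\n \n"
                "-> :heavy_exclamation_mark: **TODO:** license\n+\n",
    }],
}

# One compact header per modification.
headers = [f"{m['change_type']} {m['old_path']} -> {m['new_path']}"
           for m in commit["mods"]]
print(headers)
```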
## Considerations for Using the Data
> Adopted from [the Stack](https://huggingface.co/datasets/bigcode/the-stack).
The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their open-access research.
Personal information should not be used for spamming purposes, including sending unsolicited emails or selling of personal information.
The dataset is a collection of commits from repositories with various licenses. Any use of all or part of the code gathered in this dataset must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
## Citation
```
TODO
``` |
Josephgflowers/reason_with_cinder | ---
license: mit
---
|
datasciathlete/corpus4everyone-klue-korean-NER | ---
dataset_info:
features:
- name: ner_tags
sequence:
class_label:
names:
"0": B-PS
"1": I-PS
"2": B-FD
"3": I-FD
"4": B-TR
"5": I-TR
"6": B-AF
"7": I-AF
"8": B-OG
"9": I-OG
"10": B-LC
"11": I-LC
"12": B-CV
"13": I-CV
"14": B-DT
"15": I-DT
"16": B-TI
"17": I-TI
"18": B-QT
"19": I-QT
"20": B-EV
"21": I-EV
"22": B-AM
"23": I-AM
"24": B-PT
"25": I-PT
"26": B-MT
"27": I-MT
"28": B-TM
"29": I-TM
"30": O
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 166572779.43135825
num_examples: 138015
- name: validation
num_bytes: 42859683.236356184
num_examples: 34252
download_size: 22991576
dataset_size: 209432462.66771442
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
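A minimal sketch of decoding integer `ner_tags` back to their names using the class_label mapping above (the example sentence is illustrative, not taken from the corpus):

```python
# Tag names in index order, copied from the class_label above.
names = [
    "B-PS", "I-PS", "B-FD", "I-FD", "B-TR", "I-TR", "B-AF", "I-AF",
    "B-OG", "I-OG", "B-LC", "I-LC", "B-CV", "I-CV", "B-DT", "I-DT",
    "B-TI", "I-TI", "B-QT", "I-QT", "B-EV", "I-EV", "B-AM", "I-AM",
    "B-PT", "I-PT", "B-MT", "I-MT", "B-TM", "I-TM", "O",
]

# Illustrative example: "Kim Yuna is from Seoul."
tokens = ["김연아", "는", "서울", "출신", "이다"]
ner_tags = [0, 30, 10, 30, 30]

decoded = [names[t] for t in ner_tags]
print(list(zip(tokens, decoded)))
```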
|
Sleoruiz/discursos-sexta-class-separated-by-idx | ---
dataset_info:
features:
- name: text
dtype: string
- name: name
dtype: string
- name: comision
dtype: string
- name: gaceta_numero
dtype: string
- name: fecha_gaceta
dtype: string
- name: labels
sequence: string
- name: scores
sequence: float64
- name: idx
dtype: int64
splits:
- name: train
num_bytes: 15686917
num_examples: 11149
download_size: 7247468
dataset_size: 15686917
---
# Dataset Card for "discursos-sexta-class-separated-by-idx"
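Each row carries parallel `labels`/`scores` sequences (zero-shot classification output, by the look of it); the top-scoring label can be recovered like this (the row below is illustrative, not real data):

```python
# Illustrative row shaped like the card's schema; labels/scores are parallel.
row = {
    "text": "Intervención sobre el presupuesto de educación.",
    "labels": ["educación", "salud", "economía"],
    "scores": [0.71, 0.18, 0.11],
}

# Pair each label with its score and take the maximum.
top_label, top_score = max(zip(row["labels"], row["scores"]), key=lambda p: p[1])
print(top_label)
```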
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |