| datasetId | card |
|---|---|
cr7Por/my_controlnet | ---
dataset_info:
features:
- name: image
dtype: image
- name: image_crop
dtype: image
- name: image_caption
dtype: string
splits:
- name: train
num_bytes: 135354742.0
num_examples: 435
download_size: 135278720
dataset_size: 135354742.0
---
# Dataset Card for "my_controlnet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ZeeshanChaudharee/example | ---
license: apache-2.0
---
|
ilhamxx/xdata_invoices | ---
license: unknown
---
|
winddude/reddit_finance_43_250k | ---
license: gpl-3.0
language:
- en
tags:
- finance
- investing
- crypto
- reddit
---
# reddit finance 43 250k
`reddit_finance_43_250k` is a collection of 250k post/comment pairs from 43 financial, investing and crypto subreddits. Posts must all be text, with a minimum length of 250 characters and a positive score. Each subreddit is narrowed down to the 70th quantile before being merged with its top 3 comments and then with the other subreddits. Further score-based methods are used to select the top 250k post/comment pairs.
The code to recreate the dataset is here: <https://github.com/getorca/ProfitsBot_V0_OLLM/tree/main/ds_builder>
The trained lora model is here: <https://huggingface.co/winddude/pb_lora_7b_v0.1> |
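The quantile filter and top-comment pairing described in the card above can be sketched roughly as follows. This is a hedged illustration, not the actual builder code (see the linked `ds_builder` repo for that); field names like `is_text`, `body`, and `comments` are hypothetical:

```python
def select_pairs(posts, quantile=0.70, top_k_comments=3):
    """Sketch of the per-subreddit selection: text-only posts with a positive
    score and >= 250 characters, kept above the score quantile cutoff, then
    paired with their top-scoring comments."""
    kept = [p for p in posts if p["is_text"] and p["score"] > 0 and len(p["body"]) >= 250]
    if not kept:
        return []
    # score cutoff at the requested quantile within this subreddit
    scores = sorted(p["score"] for p in kept)
    cutoff = scores[int(quantile * (len(scores) - 1))]
    kept = [p for p in kept if p["score"] >= cutoff]
    # pair each surviving post with its highest-scoring comments
    pairs = []
    for p in kept:
        top = sorted(p["comments"], key=lambda c: c["score"], reverse=True)[:top_k_comments]
        pairs.extend((p["body"], c["body"]) for c in top)
    return pairs
```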
version-control/data-2 | ---
dataset_info:
features:
- name: version
dtype: string
- name: code
dtype: string
- name: apis
sequence: string
- name: full_version
dtype: string
- name: repo_name
dtype: string
- name: hexsha
dtype: string
splits:
- name: torch
num_bytes: 8549368
num_examples: 697
- name: tensorflow
num_bytes: 4122692
num_examples: 276
- name: scipy
num_bytes: 4668643
num_examples: 193
- name: pandas
num_bytes: 6791523
num_examples: 483
- name: sklearn
num_bytes: 2050255
num_examples: 170
- name: numpy
num_bytes: 29978447
num_examples: 1757
- name: matplotlib
num_bytes: 3453619
num_examples: 251
download_size: 21315897
dataset_size: 59614547
configs:
- config_name: default
data_files:
- split: torch
path: data/torch-*
- split: tensorflow
path: data/tensorflow-*
- split: scipy
path: data/scipy-*
- split: pandas
path: data/pandas-*
- split: sklearn
path: data/sklearn-*
- split: numpy
path: data/numpy-*
- split: matplotlib
path: data/matplotlib-*
---
|
carnival13/massive_eval_DA_tokenized | ---
dataset_info:
features:
- name: pass_label
dtype: int64
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 23064510
num_examples: 24160
download_size: 5097845
dataset_size: 23064510
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "massive_eval_DA_tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Giacinta/weibo | ---
license: apache-2.0
task_categories:
- text-classification
language:
- zh
tags:
- medical
pretty_name: weibo
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: train
path: PYH微博抽样数据.csv
--- |
KShivendu/wikipedia-1k-cohere-openai-embeddings | ---
language: en
license: mit
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: wiki_id
dtype: int32
- name: views
dtype: float32
- name: paragraph_id
dtype: int32
- name: langs
dtype: int32
- name: cohere
sequence: float32
- name: openai
sequence: float64
splits:
- name: train
num_bytes: 15850870
num_examples: 1000
download_size: 13208079
dataset_size: 15850870
tags:
- openai
- cohere
- wikipedia
---
Smaller version of https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings that includes Cohere as well as OpenAI embeddings (`text-embedding-ada-002`).
A 100k version of this dataset will be released soon. |
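Since the `cohere` and `openai` columns above are stored as plain float sequences, a minimal sketch of comparing them against a query vector looks like this (assuming rows are loaded as Python lists of floats; the helper names are illustrative, not part of any library):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors given as float sequences."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query, embeddings):
    """Indices of the stored embeddings, highest similarity to the query first."""
    return sorted(range(len(embeddings)),
                  key=lambda i: cosine_similarity(query, embeddings[i]),
                  reverse=True)
```

Note that the Cohere and OpenAI vectors have different dimensionalities, so each column should only be compared against query embeddings from its own model.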
psm151/wav2vec2-large-xlsr-turkish-demo-colab | ---
license: openrail
---
|
CyberHarem/yat_sen_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of yat_sen/逸仙/逸仙 (Azur Lane)
This is the dataset of yat_sen/逸仙/逸仙 (Azur Lane), containing 86 images and their tags.
The core tags of this character are `long_hair, bangs, hair_ornament, black_hair, breasts, red_eyes, hair_flower, large_breasts, blunt_bangs, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 86 | 150.70 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yat_sen_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 86 | 77.18 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yat_sen_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | Dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 200 | 151.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yat_sen_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 86 | 128.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yat_sen_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | Dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 200 | 224.93 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yat_sen_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/yat_sen_azurlane',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, china_dress, cleavage_cutout, detached_sleeves, flower, looking_at_viewer, solo, white_dress, bare_shoulders, black_thighhighs, smile, blush, bridal_gauntlets, covered_navel, gloves, sitting, white_background |
| 1 | 6 |  |  |  |  |  | 1girl, bare_shoulders, china_dress, detached_sleeves, flower, looking_at_viewer, smile, solo, white_dress, closed_mouth, sleeveless_dress, black_gloves, black_thighhighs, cleavage_cutout, official_alternate_costume, oil-paper_umbrella, pelvic_curtain, thighs, brown_gloves, full_body, holding_umbrella, sitting |
| 2 | 8 |  |  |  |  |  | 1girl, flower, looking_at_viewer, solo, white_dress, cleavage, sitting, white_thighhighs, blush, closed_mouth, long_sleeves, chinese_clothes, official_alternate_costume, smile, collarbone, holding_umbrella, feet, full_body, no_shoes, see-through_dress |
| 3 | 11 |  |  |  |  |  | 1girl, smile, solo, flower, looking_at_viewer, red_dress, closed_mouth, long_sleeves, brown_hair, wide_sleeves, jewelry, sitting, china_dress, hair_bun, holding_fan |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | china_dress | cleavage_cutout | detached_sleeves | flower | looking_at_viewer | solo | white_dress | bare_shoulders | black_thighhighs | smile | blush | bridal_gauntlets | covered_navel | gloves | sitting | white_background | closed_mouth | sleeveless_dress | black_gloves | official_alternate_costume | oil-paper_umbrella | pelvic_curtain | thighs | brown_gloves | full_body | holding_umbrella | cleavage | white_thighhighs | long_sleeves | chinese_clothes | collarbone | feet | no_shoes | see-through_dress | red_dress | brown_hair | wide_sleeves | jewelry | hair_bun | holding_fan |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------------|:------------------|:-------------------|:---------|:--------------------|:-------|:--------------|:-----------------|:-------------------|:--------|:--------|:-------------------|:----------------|:---------|:----------|:-------------------|:---------------|:-------------------|:---------------|:-----------------------------|:---------------------|:-----------------|:---------|:---------------|:------------|:-------------------|:-----------|:-------------------|:---------------|:------------------|:-------------|:-------|:-----------|:--------------------|:------------|:-------------|:---------------|:----------|:-----------|:--------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | | | | | X | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | | | | X | X | X | X | | | X | X | | | | X | | X | | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 3 | 11 |  |  |  |  |  | X | X | | | X | X | X | | | | X | | | | | X | | X | | | | | | | | | | | | X | | | | | | X | X | X | X | X | X |
|
open-llm-leaderboard/details_OpenModels4all__gemma-1.1-7b-it | ---
pretty_name: Evaluation run of OpenModels4all/gemma-1.1-7b-it
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [OpenModels4all/gemma-1.1-7b-it](https://huggingface.co/OpenModels4all/gemma-1.1-7b-it)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenModels4all__gemma-1.1-7b-it\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-09T11:06:14.950936](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenModels4all__gemma-1.1-7b-it/blob/main/results_2024-04-09T11-06-14.950936.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6014747509936101,\n\
\ \"acc_stderr\": 0.03326656009179332,\n \"acc_norm\": 0.6064901781756107,\n\
\ \"acc_norm_stderr\": 0.03393142550082237,\n \"mc1\": 0.34149326805385555,\n\
\ \"mc1_stderr\": 0.016600688619950826,\n \"mc2\": 0.5039555794689907,\n\
\ \"mc2_stderr\": 0.01643070151671278\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5656996587030717,\n \"acc_stderr\": 0.01448470304885736,\n\
\ \"acc_norm\": 0.5998293515358362,\n \"acc_norm_stderr\": 0.014317197787809174\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5862378012348137,\n\
\ \"acc_stderr\": 0.004915003499517829,\n \"acc_norm\": 0.7620991834295957,\n\
\ \"acc_norm_stderr\": 0.004249278842903416\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5111111111111111,\n\
\ \"acc_stderr\": 0.04318275491977976,\n \"acc_norm\": 0.5111111111111111,\n\
\ \"acc_norm_stderr\": 0.04318275491977976\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.625,\n \"acc_stderr\": 0.039397364351956274,\n \
\ \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.039397364351956274\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n\
\ \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \
\ \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6226415094339622,\n \"acc_stderr\": 0.029832808114796,\n\
\ \"acc_norm\": 0.6226415094339622,\n \"acc_norm_stderr\": 0.029832808114796\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6597222222222222,\n\
\ \"acc_stderr\": 0.039621355734862175,\n \"acc_norm\": 0.6597222222222222,\n\
\ \"acc_norm_stderr\": 0.039621355734862175\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \
\ \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\"\
: 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5953757225433526,\n\
\ \"acc_stderr\": 0.03742461193887248,\n \"acc_norm\": 0.5953757225433526,\n\
\ \"acc_norm_stderr\": 0.03742461193887248\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04690650298201943,\n\
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04690650298201943\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n\
\ \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.032469569197899575,\n\
\ \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.032469569197899575\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n\
\ \"acc_stderr\": 0.04615186962583702,\n \"acc_norm\": 0.40350877192982454,\n\
\ \"acc_norm_stderr\": 0.04615186962583702\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n\
\ \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.46296296296296297,\n \"acc_stderr\": 0.02568056464005688,\n \"\
acc_norm\": 0.46296296296296297,\n \"acc_norm_stderr\": 0.02568056464005688\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n\
\ \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n\
\ \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \
\ \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7451612903225806,\n\
\ \"acc_stderr\": 0.024790118459332208,\n \"acc_norm\": 0.7451612903225806,\n\
\ \"acc_norm_stderr\": 0.024790118459332208\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175007,\n\
\ \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175007\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\"\
: 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.696969696969697,\n \"acc_stderr\": 0.03588624800091707,\n\
\ \"acc_norm\": 0.696969696969697,\n \"acc_norm_stderr\": 0.03588624800091707\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7575757575757576,\n \"acc_stderr\": 0.030532892233932046,\n \"\
acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.030532892233932046\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8393782383419689,\n \"acc_stderr\": 0.02649905770139744,\n\
\ \"acc_norm\": 0.8393782383419689,\n \"acc_norm_stderr\": 0.02649905770139744\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.5974358974358974,\n \"acc_stderr\": 0.02486499515976775,\n \
\ \"acc_norm\": 0.5974358974358974,\n \"acc_norm_stderr\": 0.02486499515976775\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.362962962962963,\n \"acc_stderr\": 0.02931820364520686,\n \
\ \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.02931820364520686\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.030489911417673227,\n\
\ \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.030489911417673227\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658752,\n \"\
acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658752\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8091743119266055,\n \"acc_stderr\": 0.01684767640009109,\n \"\
acc_norm\": 0.8091743119266055,\n \"acc_norm_stderr\": 0.01684767640009109\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4537037037037037,\n \"acc_stderr\": 0.033953227263757976,\n \"\
acc_norm\": 0.4537037037037037,\n \"acc_norm_stderr\": 0.033953227263757976\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7401960784313726,\n \"acc_stderr\": 0.03077855467869327,\n \"\
acc_norm\": 0.7401960784313726,\n \"acc_norm_stderr\": 0.03077855467869327\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7426160337552743,\n \"acc_stderr\": 0.0284588209914603,\n \
\ \"acc_norm\": 0.7426160337552743,\n \"acc_norm_stderr\": 0.0284588209914603\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7085201793721974,\n\
\ \"acc_stderr\": 0.030500283176545854,\n \"acc_norm\": 0.7085201793721974,\n\
\ \"acc_norm_stderr\": 0.030500283176545854\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.6946564885496184,\n \"acc_stderr\": 0.04039314978724561,\n\
\ \"acc_norm\": 0.6946564885496184,\n \"acc_norm_stderr\": 0.04039314978724561\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990946,\n \"\
acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990946\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7300613496932515,\n \"acc_stderr\": 0.034878251684978906,\n\
\ \"acc_norm\": 0.7300613496932515,\n \"acc_norm_stderr\": 0.034878251684978906\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8418803418803419,\n\
\ \"acc_stderr\": 0.0239023255495604,\n \"acc_norm\": 0.8418803418803419,\n\
\ \"acc_norm_stderr\": 0.0239023255495604\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7650063856960408,\n\
\ \"acc_stderr\": 0.015162024152278445,\n \"acc_norm\": 0.7650063856960408,\n\
\ \"acc_norm_stderr\": 0.015162024152278445\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.6416184971098265,\n \"acc_stderr\": 0.025816756791584204,\n\
\ \"acc_norm\": 0.6416184971098265,\n \"acc_norm_stderr\": 0.025816756791584204\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23687150837988827,\n\
\ \"acc_stderr\": 0.01421957078810399,\n \"acc_norm\": 0.23687150837988827,\n\
\ \"acc_norm_stderr\": 0.01421957078810399\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6699346405228758,\n \"acc_stderr\": 0.026925654653615697,\n\
\ \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.026925654653615697\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6527331189710611,\n\
\ \"acc_stderr\": 0.027040745502307336,\n \"acc_norm\": 0.6527331189710611,\n\
\ \"acc_norm_stderr\": 0.027040745502307336\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6358024691358025,\n \"acc_stderr\": 0.02677492989972234,\n\
\ \"acc_norm\": 0.6358024691358025,\n \"acc_norm_stderr\": 0.02677492989972234\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.45390070921985815,\n \"acc_stderr\": 0.02970045324729147,\n \
\ \"acc_norm\": 0.45390070921985815,\n \"acc_norm_stderr\": 0.02970045324729147\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4485006518904824,\n\
\ \"acc_stderr\": 0.012702317490559806,\n \"acc_norm\": 0.4485006518904824,\n\
\ \"acc_norm_stderr\": 0.012702317490559806\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.5147058823529411,\n \"acc_stderr\": 0.03035969707904612,\n\
\ \"acc_norm\": 0.5147058823529411,\n \"acc_norm_stderr\": 0.03035969707904612\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5898692810457516,\n \"acc_stderr\": 0.019898412717635903,\n \
\ \"acc_norm\": 0.5898692810457516,\n \"acc_norm_stderr\": 0.019898412717635903\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n\
\ \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n\
\ \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.0287951855742913,\n\
\ \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.0287951855742913\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n\
\ \"acc_stderr\": 0.027686913588013028,\n \"acc_norm\": 0.8109452736318408,\n\
\ \"acc_norm_stderr\": 0.027686913588013028\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \
\ \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n\
\ \"acc_stderr\": 0.03892212195333047,\n \"acc_norm\": 0.5060240963855421,\n\
\ \"acc_norm_stderr\": 0.03892212195333047\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7894736842105263,\n \"acc_stderr\": 0.0312678171466318,\n\
\ \"acc_norm\": 0.7894736842105263,\n \"acc_norm_stderr\": 0.0312678171466318\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.34149326805385555,\n\
\ \"mc1_stderr\": 0.016600688619950826,\n \"mc2\": 0.5039555794689907,\n\
\ \"mc2_stderr\": 0.01643070151671278\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.6992896606156275,\n \"acc_stderr\": 0.01288801049470473\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4177407126611069,\n \
\ \"acc_stderr\": 0.013584820638504828\n }\n}\n```"
repo_url: https://huggingface.co/OpenModels4all/gemma-1.1-7b-it
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|arc:challenge|25_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|gsm8k|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hellaswag|10_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T11-06-14.950936.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-04-09T11-06-14.950936.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- '**/details_harness|winogrande|5_2024-04-09T11-06-14.950936.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-04-09T11-06-14.950936.parquet'
- config_name: results
data_files:
- split: 2024_04_09T11_06_14.950936
path:
- results_2024-04-09T11-06-14.950936.parquet
- split: latest
path:
- results_2024-04-09T11-06-14.950936.parquet
---
# Dataset Card for Evaluation run of OpenModels4all/gemma-1.1-7b-it
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [OpenModels4all/gemma-1.1-7b-it](https://huggingface.co/OpenModels4all/gemma-1.1-7b-it) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OpenModels4all__gemma-1.1-7b-it",
	"harness_winogrande_5",
	split="latest")
```
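Once loaded, each per-task entry in the results JSON below shares the same `acc`/`acc_stderr` shape, so local aggregation is straightforward. A minimal sketch (the dict here is a small hypothetical subset of the real results, which contain one entry per evaluated task):

```python
# Sketch: computing a macro-average accuracy over MMLU (hendrycksTest) tasks
# from a results-style dict. Task names and values below are a hypothetical
# subset copied from the "Latest results" section for illustration.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.3},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5111111111111111},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.625},
}

# Select only the MMLU tasks by their "hendrycksTest" prefix.
mmlu_accs = [
    v["acc"] for k, v in results.items()
    if k.startswith("harness|hendrycksTest-")
]
macro_avg = sum(mmlu_accs) / len(mmlu_accs)
print(f"MMLU macro-average acc over {len(mmlu_accs)} tasks: {macro_avg:.4f}")
```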
## Latest results
These are the [latest results from run 2024-04-09T11:06:14.950936](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenModels4all__gemma-1.1-7b-it/blob/main/results_2024-04-09T11-06-14.950936.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in its "results" and "latest" splits for each eval):
```python
{
"all": {
"acc": 0.6014747509936101,
"acc_stderr": 0.03326656009179332,
"acc_norm": 0.6064901781756107,
"acc_norm_stderr": 0.03393142550082237,
"mc1": 0.34149326805385555,
"mc1_stderr": 0.016600688619950826,
"mc2": 0.5039555794689907,
"mc2_stderr": 0.01643070151671278
},
"harness|arc:challenge|25": {
"acc": 0.5656996587030717,
"acc_stderr": 0.01448470304885736,
"acc_norm": 0.5998293515358362,
"acc_norm_stderr": 0.014317197787809174
},
"harness|hellaswag|10": {
"acc": 0.5862378012348137,
"acc_stderr": 0.004915003499517829,
"acc_norm": 0.7620991834295957,
"acc_norm_stderr": 0.004249278842903416
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5111111111111111,
"acc_stderr": 0.04318275491977976,
"acc_norm": 0.5111111111111111,
"acc_norm_stderr": 0.04318275491977976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.625,
"acc_stderr": 0.039397364351956274,
"acc_norm": 0.625,
"acc_norm_stderr": 0.039397364351956274
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6226415094339622,
"acc_stderr": 0.029832808114796,
"acc_norm": 0.6226415094339622,
"acc_norm_stderr": 0.029832808114796
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6597222222222222,
"acc_stderr": 0.039621355734862175,
"acc_norm": 0.6597222222222222,
"acc_norm_stderr": 0.039621355734862175
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5953757225433526,
"acc_stderr": 0.03742461193887248,
"acc_norm": 0.5953757225433526,
"acc_norm_stderr": 0.03742461193887248
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04690650298201943,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04690650298201943
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5574468085106383,
"acc_stderr": 0.032469569197899575,
"acc_norm": 0.5574468085106383,
"acc_norm_stderr": 0.032469569197899575
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.04615186962583702,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.04615186962583702
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.46296296296296297,
"acc_stderr": 0.02568056464005688,
"acc_norm": 0.46296296296296297,
"acc_norm_stderr": 0.02568056464005688
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7451612903225806,
"acc_stderr": 0.024790118459332208,
"acc_norm": 0.7451612903225806,
"acc_norm_stderr": 0.024790118459332208
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.03514528562175007,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.03514528562175007
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.696969696969697,
"acc_stderr": 0.03588624800091707,
"acc_norm": 0.696969696969697,
"acc_norm_stderr": 0.03588624800091707
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.030532892233932046,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.030532892233932046
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8393782383419689,
"acc_stderr": 0.02649905770139744,
"acc_norm": 0.8393782383419689,
"acc_norm_stderr": 0.02649905770139744
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5974358974358974,
"acc_stderr": 0.02486499515976775,
"acc_norm": 0.5974358974358974,
"acc_norm_stderr": 0.02486499515976775
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.362962962962963,
"acc_stderr": 0.02931820364520686,
"acc_norm": 0.362962962962963,
"acc_norm_stderr": 0.02931820364520686
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.030489911417673227,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.030489911417673227
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.03822746937658752,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.03822746937658752
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8091743119266055,
"acc_stderr": 0.01684767640009109,
"acc_norm": 0.8091743119266055,
"acc_norm_stderr": 0.01684767640009109
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4537037037037037,
"acc_stderr": 0.033953227263757976,
"acc_norm": 0.4537037037037037,
"acc_norm_stderr": 0.033953227263757976
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7401960784313726,
"acc_stderr": 0.03077855467869327,
"acc_norm": 0.7401960784313726,
"acc_norm_stderr": 0.03077855467869327
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7426160337552743,
"acc_stderr": 0.0284588209914603,
"acc_norm": 0.7426160337552743,
"acc_norm_stderr": 0.0284588209914603
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7085201793721974,
"acc_stderr": 0.030500283176545854,
"acc_norm": 0.7085201793721974,
"acc_norm_stderr": 0.030500283176545854
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6946564885496184,
"acc_stderr": 0.04039314978724561,
"acc_norm": 0.6946564885496184,
"acc_norm_stderr": 0.04039314978724561
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.03640118271990946,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.03640118271990946
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7300613496932515,
"acc_stderr": 0.034878251684978906,
"acc_norm": 0.7300613496932515,
"acc_norm_stderr": 0.034878251684978906
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8418803418803419,
"acc_stderr": 0.0239023255495604,
"acc_norm": 0.8418803418803419,
"acc_norm_stderr": 0.0239023255495604
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7650063856960408,
"acc_stderr": 0.015162024152278445,
"acc_norm": 0.7650063856960408,
"acc_norm_stderr": 0.015162024152278445
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6416184971098265,
"acc_stderr": 0.025816756791584204,
"acc_norm": 0.6416184971098265,
"acc_norm_stderr": 0.025816756791584204
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.23687150837988827,
"acc_stderr": 0.01421957078810399,
"acc_norm": 0.23687150837988827,
"acc_norm_stderr": 0.01421957078810399
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6699346405228758,
"acc_stderr": 0.026925654653615697,
"acc_norm": 0.6699346405228758,
"acc_norm_stderr": 0.026925654653615697
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6527331189710611,
"acc_stderr": 0.027040745502307336,
"acc_norm": 0.6527331189710611,
"acc_norm_stderr": 0.027040745502307336
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6358024691358025,
"acc_stderr": 0.02677492989972234,
"acc_norm": 0.6358024691358025,
"acc_norm_stderr": 0.02677492989972234
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.45390070921985815,
"acc_stderr": 0.02970045324729147,
"acc_norm": 0.45390070921985815,
"acc_norm_stderr": 0.02970045324729147
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4485006518904824,
"acc_stderr": 0.012702317490559806,
"acc_norm": 0.4485006518904824,
"acc_norm_stderr": 0.012702317490559806
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5147058823529411,
"acc_stderr": 0.03035969707904612,
"acc_norm": 0.5147058823529411,
"acc_norm_stderr": 0.03035969707904612
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5898692810457516,
"acc_stderr": 0.019898412717635903,
"acc_norm": 0.5898692810457516,
"acc_norm_stderr": 0.019898412717635903
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7183673469387755,
"acc_stderr": 0.0287951855742913,
"acc_norm": 0.7183673469387755,
"acc_norm_stderr": 0.0287951855742913
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8109452736318408,
"acc_stderr": 0.027686913588013028,
"acc_norm": 0.8109452736318408,
"acc_norm_stderr": 0.027686913588013028
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333047,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333047
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7894736842105263,
"acc_stderr": 0.0312678171466318,
"acc_norm": 0.7894736842105263,
"acc_norm_stderr": 0.0312678171466318
},
"harness|truthfulqa:mc|0": {
"mc1": 0.34149326805385555,
"mc1_stderr": 0.016600688619950826,
"mc2": 0.5039555794689907,
"mc2_stderr": 0.01643070151671278
},
"harness|winogrande|5": {
"acc": 0.6992896606156275,
"acc_stderr": 0.01288801049470473
},
"harness|gsm8k|5": {
"acc": 0.4177407126611069,
"acc_stderr": 0.013584820638504828
}
}
```
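These per-task results can be aggregated client-side; a minimal sketch (assuming the JSON above has been loaded into a Python dict) of averaging accuracy over the `hendrycksTest` (MMLU) subtasks:

```python
# Sketch: average "acc" over all MMLU (hendrycksTest) subtasks in a
# results dict shaped like the JSON above. The example entries below
# reuse a few of the values shown, purely for illustration.
def mmlu_average(results: dict) -> float:
    accs = [
        v["acc"]
        for k, v in results.items()
        if k.startswith("harness|hendrycksTest-")
    ]
    return sum(accs) / len(accs)

example = {
    "harness|hendrycksTest-virology|5": {"acc": 0.5060240963855421},
    "harness|hendrycksTest-world_religions|5": {"acc": 0.7894736842105263},
    "harness|gsm8k|5": {"acc": 0.4177407126611069},  # ignored: not an MMLU subtask
}
print(round(mmlu_average(example), 4))  # averages only the two MMLU entries
```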
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
sieu-n/alpaca_eval_multilingual | ---
license: cc-by-nc-4.0
---
### Usage
```
load_dataset("krenerd/alpaca_eval_multilingual", "alpaca_eval") # or alpaca_eval_en
load_dataset("krenerd/alpaca_eval_multilingual", "alpaca_eval_ko")
load_dataset("krenerd/alpaca_eval_multilingual", "alpaca_eval_ja")
```
### Method
The dataset was translated with the GPT-4 API using the following prompts.
```
# prompt classes from LangChain (assumed import path)
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

ja = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(
"You are a helpful assistant fluent in English and Japanese."
),
HumanMessagePromptTemplate.from_template(
"Translate the following text to Japanese. Show the answer only. このテキストを直訳するのではなく、その意味を保持しつつ、より自然なリクエストに言い換えて翻訳してください text=```{instruction}```"
),
]
)
ko = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(
"You are a helpful assistant fluent in English and Korean."
),
HumanMessagePromptTemplate.from_template(
"Translate the following text to Korean. Show the answer only. 말 그대로 번역하지 말고, 의미가 유지되는 한에서 자연스러운 요청으로 번역해줘. text=```{instruction}```"
),
]
)
```
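A hedged sketch, not part of the original script, of how one of these prompt pairs could be rendered into a chat-completions message list; the helper function and the abbreviated human template are illustrative:

```python
# Sketch: render a translation prompt into OpenAI-style chat messages.
# SYSTEM_JA mirrors the system prompt above; HUMAN_JA is abbreviated.
SYSTEM_JA = "You are a helpful assistant fluent in English and Japanese."
HUMAN_JA = (
    "Translate the following text to Japanese. Show the answer only. "
    "text=```{instruction}```"
)

def build_messages(instruction: str) -> list[dict]:
    return [
        {"role": "system", "content": SYSTEM_JA},
        {"role": "user", "content": HUMAN_JA.format(instruction=instruction)},
    ]

msgs = build_messages("Give three tips for staying healthy.")
# These messages could then be passed to a chat-completions API call.
```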
Script: https://gist.github.com/sieu-n/88542733914f80f780359f5c82c99a62 |
llm-blender/mix-instruct | ---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: mix-instruct
size_categories:
- 100K<n<1M
---
# MixInstruct
## Introduction
This is the official release of the **MixInstruct** dataset for the **LLM-Blender** project.
This dataset contains responses from 11 popular instruction-following LLMs:
1. [Stanford Alpaca](https://huggingface.co/chavinlo/alpaca-native)
2. [FastChat Vicuna](https://huggingface.co/eachadea/vicuna-13b-1.1)
3. [Dolly V2](https://huggingface.co/databricks/dolly-v2-12b)
4. [StableLM](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b)
5. [Open Assistant](https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5)
6. [Koala](https://huggingface.co/TheBloke/koala-7B-HF)
7. [Baize](https://huggingface.co/mosesjun0h/llama-7b-hf-baize-lora-bf16)
8. [Flan-T5](https://huggingface.co/google/flan-t5-xxl)
9. [ChatGLM](https://huggingface.co/THUDM/chatglm-6b)
10. [MOSS](https://huggingface.co/fnlp/moss-moon-003-sft)
11. [Mosaic MPT](https://huggingface.co/mosaicml/mpt-7b-instruct)
We evaluate each response with automatic metrics including BLEU, ROUGE, BERTScore, and BARTScore, and provide pairwise comparison results obtained by prompting ChatGPT for 4771 of the examples
in the test split (the remaining 229 examples contain content filtered by the API).
## Data Format
```json
[
{
"id": "unified_chip2/69962",
"instruction": "",
"input": "I've always wondered what the difference is between a skeptic and a denier.",
"output": "A skeptic is someone who questions the validity of something, while a denier is someone who outright rejects something without evidence or reason.",
"candidates": [
{
"decoding_method": "top_p_sampling",
"model": "oasst-sft-4-pythia-12b-epoch-3.5",
"text": "A skeptic is someone who doubts or expresses ...",
"scores": {
"logprobs": -0.02404022216796875,
"bleu": 5.656152750894142,
"bertscore": 0.7549101114273071,
"rouge1": 0.2857142857142857,
"rouge2": 0.1272727272727273,
"rougeL": 0.23214285714285715,
"rougeLsum": 0.23214285714285715
}
},
...
],
},
...
]
```
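For example, a minimal sketch of selecting each example's best candidate by one of the stored metrics (field names follow the format above; the toy data is made up):

```python
# Sketch: select the highest-scoring candidate of an example by a chosen metric.
def best_candidate(example: dict, metric: str = "bertscore") -> dict:
    return max(example["candidates"], key=lambda c: c["scores"][metric])

example = {
    "id": "demo/0",
    "candidates": [
        {"model": "a", "text": "first", "scores": {"bertscore": 0.70}},
        {"model": "b", "text": "second", "scores": {"bertscore": 0.75}},
    ],
}
print(best_candidate(example)["model"])  # -> b
```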
Examples evaluated by ChatGPT contain an additional field, **cmp_results**.
Its possible values are:
1. A is better
2. B is better
3. Same good
4. Same bad
```json
"cmp_results": {
"model_A,model_B": "A is better",
...
},
```
Each `cmp_results` field is encoded as a JSON string; use `json.loads(item['cmp_results'])` to decode it for each item.
`"null"` denotes that no ChatGPT comparison results are available for that item.
## Eval Results
### Auto Metrics
- train
| Models (down) / Metrics (right) | logprobs | rougeL | rouge2 | rougeLsum | rouge1 | bleu | bertscore | bleurt | bartscore |
|:----------------------------------|:------------|:----------------|:----------------|:----------------|:----------------|:----------------|:----------------|:----------------|:-------------|
| alpaca-native | -6.1247 | 0.248 | 0.1414 | 0.2986 | 0.3347 | 8.057 | 0.7196 | -0.5092 | -3.5335 |
| chatglm-6b | -10.1263 | 0.2231 | 0.1212 | 0.2743 | 0.3074 | 6.2597 | 0.7043 | -0.6071 | -3.4975 |
| dolly-v2-12b | -24.8508 | 0.1245 | 0.0502 | 0.1625 | 0.1836 | 2.1062 | 0.6244 | -0.8562 | -3.8145 |
| flan-t5-xxl | -1.0717 | 0.1202 | 0.0456 | 0.1334 | 0.1489 | 1.8418 | 0.6514 | -1.2176 | -4.537 |
| koala-7B-HF | -10.8323 | 0.1533 | 0.0683 | 0.1909 | 0.2165 | 3.2848 | 0.6436 | -0.8284 | -3.8326 |
| llama-7b-hf-baize-lora-bf16 | -24.8867 | 0.1539 | 0.0797 | 0.2042 | 0.2276 | 3.4928 | 0.6564 | -0.6575 | -3.496 |
| moss-moon-003-sft | -796.1366 | 0.1599 | 0.0898 | 0.2135 | 0.236 | 3.944 | 0.6689 | -0.5617 | -3.3404 |
| mpt-7b | -174.1702 | 0.1118 | 0.0447 | 0.1517 | 0.1683 | 1.7698 | 0.618 | -0.9525 | -3.9119 |
| mpt-7b-instruct | -156.8005 | 0.1225 | 0.0538 | 0.1669 | 0.1861 | 2.1041 | 0.6327 | -0.8176 | -3.6996 |
| oasst-sft-4-pythia-12b-epoch-3.5 | -4.7714 | 0.2902 | 0.1763 | 0.3447 | 0.386 | 10.6599 | 0.748 | -0.3762 | -3.4221 |
| stablelm-tuned-alpha-7b | -1268.9396 | 0.1336 | 0.0544 | 0.1714 | 0.1948 | 2.6348 | 0.6355 | -0.9585 | -4.0795 |
| vicuna-13b-1.1 | -11.1528 | 0.211 | 0.1219 | 0.2671 | 0.3003 | 6.3697 | 0.6928 | -0.6194 | -3.4233 |
| Best Model Metric Perf | -1.0717 | 0.2902 | 0.1763 | 0.3447 | 0.386 | 10.6599 | 0.748 | -0.3762 | -3.3404 |
| Oracle | 0.0 | 0.3611 | 0.2471 | 0.4242 | 0.4706 | 15.8557 | 0.7783 | 0.0723 | 0.0 |
| Oracle-Best_Model Gap | 1.0717 | 0.0709 | 0.0708 | 0.0794 | 0.0846 | 5.1958 | 0.0303 | 0.4484 | 3.3404 |
- val
| Models (down) / Metrics (right) | logprobs | rouge1 | rouge2 | rougeLsum | rougeL | bleu | bertscore | bleurt | bartscore |
|:----------------------------------|:------------|:----------------|:----------------|:----------------|:----------------|:----------------|:----------------|:----------------|:---------------|
| alpaca-native | -3.3832 | 0.3342 | 0.1452 | 0.299 | 0.2503 | 8.1749 | 0.7198 | -0.5076 | -3.5517 |
| chatglm-6b | -4.7033 | 0.3066 | 0.1216 | 0.2743 | 0.2241 | 6.3323 | 0.7053 | -0.6091 | -3.51 |
| dolly-v2-12b | -9.1237 | 0.1843 | 0.0511 | 0.1633 | 0.1254 | 2.1368 | 0.6257 | -0.852 | -3.8121 |
| flan-t5-xxl | -1.0077 | 0.1497 | 0.0464 | 0.1342 | 0.1212 | 1.8653 | 0.652 | -1.2089 | -4.5407 |
| koala-7B-HF | -6.015 | 0.2154 | 0.068 | 0.1903 | 0.1538 | 3.2596 | 0.6425 | -0.8298 | -3.8456 |
| llama-7b-hf-baize-lora-bf16 | -12.2594 | 0.2261 | 0.0803 | 0.2034 | 0.1543 | 3.5462 | 0.6562 | -0.6604 | -3.4831 |
| moss-moon-003-sft | -357.3054 | 0.2053 | 0.0678 | 0.1851 | 0.1361 | 2.9639 | 0.648 | -0.7261 | -3.6317 |
| mpt-7b | -171.9416 | 0.1663 | 0.0447 | 0.1499 | 0.1111 | 1.7555 | 0.617 | -0.964 | -3.9189 |
| mpt-7b-instruct | -157.1143 | 0.1841 | 0.054 | 0.1652 | 0.1224 | 2.1252 | 0.6307 | -0.8275 | -3.7183 |
| oasst-sft-4-pythia-12b-epoch-3.5 | -1.6194 | 0.3835 | 0.1761 | 0.3434 | 0.2896 | 10.5858 | 0.7479 | -0.378 | -3.4366 |
| stablelm-tuned-alpha-7b | -869.6767 | 0.192 | 0.0529 | 0.1688 | 0.1317 | 2.5687 | 0.6314 | -0.9618 | -4.1008 |
| vicuna-13b-1.1 | -5.6143 | 0.3029 | 0.1242 | 0.2701 | 0.2142 | 6.5299 | 0.695 | -0.6212 | -3.4332 |
| Best Model Metric Perf | -1.0077 | 0.3835 | 0.1761 | 0.3434 | 0.2896 | 10.5858 | 0.7479 | -0.378 | -3.4332 |
| Oracle | 0.0 | 0.4712 | 0.2488 | 0.4258 | 0.3642 | 15.9896 | 0.7794 | 0.0726 | 0.0 |
| Oracle-Best_Model Gap | 1.0077 | 0.0877 | 0.0728 | 0.0824 | 0.0746 | 5.4038 | 0.0315 | 0.4506 | 3.4332 |
- test
| Models (down) / Metrics (right) | logprobs | rougeL | rougeLsum | rouge1 | rouge2 | bleu | bertscore | bleurt | bartscore |
|:----------------------------------|:------------|:----------------|:----------------|:----------------|:----------------|:----------------|:----------------|:----------------|:---------------|
| alpaca-native | -3.458 | 0.2421 | 0.2915 | 0.3276 | 0.1362 | 7.6478 | 0.7146 | -0.5307 | -3.5696 |
| chatglm-6b | -4.7418 | 0.2225 | 0.2734 | 0.3063 | 0.1192 | 6.0493 | 0.7038 | -0.6167 | -3.5193 |
| dolly-v2-12b | -9.1266 | 0.1236 | 0.1606 | 0.1811 | 0.0495 | 2.062 | 0.6226 | -0.8654 | -3.8331 |
| flan-t5-xxl | -0.9924 | 0.1172 | 0.1296 | 0.1444 | 0.0432 | 1.6066 | 0.6492 | -1.2288 | -4.5717 |
| koala-7B-HF | -6.1159 | 0.1507 | 0.1871 | 0.2131 | 0.0662 | 3.0983 | 0.6396 | -0.8354 | -3.8496 |
| llama-7b-hf-baize-lora-bf16 | -11.9519 | 0.1521 | 0.2022 | 0.2253 | 0.0781 | 3.4005 | 0.6557 | -0.663 | -3.526 |
| moss-moon-003-sft | -356.8774 | 0.1365 | 0.1863 | 0.2062 | 0.0686 | 2.9561 | 0.6485 | -0.7261 | -3.6461 |
| mpt-7b | -176.2144 | 0.1106 | 0.1498 | 0.1663 | 0.0439 | 1.7392 | 0.6165 | -0.9636 | -3.9419 |
| mpt-7b-instruct | -156.0153 | 0.121 | 0.1647 | 0.1837 | 0.0524 | 2.0692 | 0.6321 | -0.8232 | -3.7208 |
| oasst-sft-4-pythia-12b-epoch-3.5 | -1.6749 | 0.2873 | 0.341 | 0.3813 | 0.1738 | 10.5046 | 0.7468 | -0.3908 | -3.4486 |
| stablelm-tuned-alpha-7b | -831.595 | 0.1306 | 0.1672 | 0.1904 | 0.0524 | 2.5044 | 0.6247 | -0.9832 | -4.1208 |
| vicuna-13b-1.1 | -5.6914 | 0.2122 | 0.2677 | 0.3012 | 0.1223 | 6.3584 | 0.696 | -0.6146 | -3.4368 |
| Best Model Metric Perf | -0.9924 | 0.2873 | 0.341 | 0.3813 | 0.1738 | 10.5046 | 0.7468 | -0.3908 | -3.4368 |
| Oracle | 0.0 | 0.3585 | 0.4201 | 0.466 | 0.2438 | 15.4971 | 0.7767 | 0.0679 | 0.0 |
| Oracle-Best_Model Gap | 0.9924 | 0.0712 | 0.0791 | 0.0847 | 0.07 | 4.9925 | 0.0299 | 0.4587 | 3.4368 |
### ChatGPT CMPTS (4771 examples)
| **Methods** | BERTScore | BARTScore | BLEURT | GPT-Rank | Beat Vic(%) | Beat OA(%) | Top-1(%) | Top-2(%) | Top-3(%) |
|:-----------------:|:---------:|:---------:|:---------:|:--------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| Open Assistant | **74.68** | -3.45 | **-0.39** | **3.90** | **62.78** | N/A | 17.35 | 35.67 | 51.98 |
| Vicuna | 69.60 | **-3.44** | -0.61 | 4.13 | N/A | **64.77** | **25.47** | **41.23** | **52.88** |
| Alpaca | 71.46 | -3.57 | -0.53 | 4.62 | 56.70 | 61.35 | 15.41 | 29.81 | 44.46 |
| Baize | 65.57 | -3.53 | -0.66 | 4.86 | 52.76 | 56.40 | 14.23 | 26.91 | 38.80 |
| moss | 64.85 | -3.65 | -0.73 | 5.09 | 51.62 | 51.79 | 15.93 | 27.52 | 38.27 |
| ChatGLM | 70.38 | -3.52 | -0.62 | 5.63 | 44.04 | 45.67 | 9.41 | 19.37 | 28.78 |
| Koala | 63.96 | -3.85 | -0.84 | 6.76 | 39.93 | 39.01 | 8.15 | 15.72 | 22.55 |
| Dolly v2 | 62.26 | -3.83 | -0.87 | 6.90 | 33.33 | 31.44 | 5.16 | 10.06 | 16.45 |
| Mosaic MPT | 63.21 | -3.72 | -0.82 | 7.19 | 30.87 | 30.16 | 5.39 | 10.61 | 16.24 |
| StableLM | 62.47 | -4.12 | -0.98 | 8.71 | 21.55 | 19.87 | 2.33 | 4.74 | 7.96 |
| Flan-T5 | 64.92 | -4.57 | -1.23 | 8.81 | 23.89 | 19.93 | 1.30 | 2.87 | 5.32 |
| Oracle(BERTScore) | **77.67** | -3.17 | -0.27 | 3.88 | 54.41 | 38.84 | 20.16 | 38.11 | 53.49 |
| Oracle(BLEURT) | 75.02 | -3.15 | **-0.15** | 3.77 | 55.61 | 45.80 | 21.48 | 39.84 | 55.36 |
| Oracle(BARTScore) | 73.23 | **-2.87** | -0.38 | 3.69 | 50.32 | 57.01 | 26.10 | 43.70 | 57.33 |
| Oracle(ChatGPT) | 70.32 | -3.33 | -0.51 | **1.00** | **100.00** | **100.00** | **100.00** | **100.00** | **100.00** |
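The Top-K(%) columns can be derived from per-example GPT ranks; a minimal sketch with toy data:

```python
# Sketch: fraction of examples where a model's GPT rank falls within the top k.
def top_k_pct(ranks: list[int], k: int) -> float:
    return 100.0 * sum(r <= k for r in ranks) / len(ranks)

ranks = [1, 3, 2, 5]        # toy per-example ranks for one model
print(top_k_pct(ranks, 3))  # -> 75.0
```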
|
LambdaTests/VQAv2_sample_validation_benchmarks_partition_global_6_loca_6 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: response
dtype: string
splits:
- name: train
num_bytes: 43
num_examples: 1
download_size: 0
dataset_size: 43
---
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_global_6_loca_6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
textminr/topic-labeling | ---
language:
- en
- de
size_categories:
- n<1K
configs:
- config_name: gutenberg
default: true
data_files:
- split: train
path:
- "gutenberg/train.jsonl"
- config_name: generic
data_files:
- split: train
path:
- "base/train.jsonl"
- "mn-ds/train.jsonl"
task_categories:
- text-classification
pretty_name: Topic Labeling
---
# Topic Labeling
This is a carefully crafted dataset for topic labeling. It maps a series of words (generated by topic models such as LDA, Top2Vec, etc.) to a topic label, so that LLMs such as T5 or Mistral can be fine-tuned on this dataset to generate topic labels for a given text. Parts of this dataset were labeled manually or with GPT-4.
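As an illustration, a hedged sketch (field names and prompt wording assumed, not taken from this dataset) of turning topic-model keywords into an input/target pair for seq2seq fine-tuning:

```python
# Sketch: format topic keywords into a prompt/label training pair.
def to_training_pair(keywords: list[str], label: str) -> dict:
    prompt = "Generate a topic label for these words: " + ", ".join(keywords)
    return {"input": prompt, "target": label}

pair = to_training_pair(["ship", "captain", "sea", "voyage"], "Seafaring")
print(pair["input"])
```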
Source datasets:
- [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia)
- [textminr/mn-ds](https://huggingface.co/datasets/textminr/mn-ds)
Additional sources:
- [Project Gutenberg](https://www.gutenberg.org/) |
amogh-sinha/text | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 24588.0
num_examples: 3
- name: test
num_bytes: 8196
num_examples: 1
download_size: 22034
dataset_size: 32784.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/tuye_arknights | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of tuye/トゥイエ/图耶 (Arknights)
This is the dataset of tuye/トゥイエ/图耶 (Arknights), containing 69 images and their tags.
The core tags of this character are `animal_ears, dark_skin, dark-skinned_female, purple_eyes, white_hair, long_hair, hair_between_eyes, horse_ears, grey_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 69 | 125.13 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tuye_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 69 | 106.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tuye_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 184 | 210.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tuye_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tuye_arknights',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 35 |  |  |  |  |  | 1girl, solo, long_sleeves, looking_at_viewer, yellow_jacket, open_jacket, yellow_coat, black_gloves, holding, thigh_strap, black_dress, closed_mouth, fingerless_gloves, smile, twintails, umbrella |
| 1 | 12 |  |  |  |  |  | 1girl, long_sleeves, looking_at_viewer, solo, official_alternate_costume, short_hair, closed_mouth, white_ascot, white_gloves, white_thighhighs, black_dress, flower, garter_straps, smile, black_footwear, white_background, holding, shoes, simple_background, sitting |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | long_sleeves | looking_at_viewer | yellow_jacket | open_jacket | yellow_coat | black_gloves | holding | thigh_strap | black_dress | closed_mouth | fingerless_gloves | smile | twintails | umbrella | official_alternate_costume | short_hair | white_ascot | white_gloves | white_thighhighs | flower | garter_straps | black_footwear | white_background | shoes | simple_background | sitting |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:---------------|:--------------------|:----------------|:--------------|:--------------|:---------------|:----------|:--------------|:--------------|:---------------|:--------------------|:--------|:------------|:-----------|:-----------------------------|:-------------|:--------------|:---------------|:-------------------|:---------|:----------------|:-----------------|:-------------------|:--------|:--------------------|:----------|
| 0 | 35 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | X | X | | | | | X | | X | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X |
|
Vaibhav9401/toxic25m | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: llama_finetune_text
dtype: string
splits:
- name: train
num_bytes: 20143312184
num_examples: 25159680
download_size: 3446911922
dataset_size: 20143312184
---
# Dataset Card for "toxic25m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChrisWilson/twitter_dataset_1710354721 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 33402
num_examples: 89
download_size: 26069
dataset_size: 33402
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HarryAJMK418/NewEmINTER | ---
license: openrail
---
|
romariocamilo/noah.mp3 | ---
license: openrail
---
|
Djacon/ru_goemotions | ---
language:
- ru
license:
- mit
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
pretty_name: RuGoEmotions
tags:
- emotion
---
# Dataset Card for RuGoEmotions
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
### Dataset Summary
The RuGoEmotions dataset contains 34k Reddit comments labeled for 9 emotion categories (joy, interest, surprise, sadness, anger, disgust, fear, guilt, and neutral).
The dataset comes with predefined train/val/test splits.
### Supported Tasks and Leaderboards
This dataset is intended for multi-class, multi-label emotion classification.
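For multi-label training, the emotion annotations can be encoded as multi-hot vectors over the 9 categories; a minimal sketch:

```python
# Sketch: multi-hot encode a list of emotion labels over the 9 categories.
EMOTIONS = ["joy", "interest", "surprise", "sadness", "anger",
            "disgust", "fear", "guilt", "neutral"]

def multi_hot(labels: list[str]) -> list[int]:
    return [int(e in labels) for e in EMOTIONS]

print(multi_hot(["joy", "surprise"]))  # -> [1, 0, 1, 0, 0, 0, 0, 0, 0]
```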
### Languages
The data is in Russian.
## Dataset Structure
### Data Instances
Each instance is a reddit comment with one or more emotion annotations (or neutral).
### Data Fields
The configuration includes:
- `text`: the reddit comment
- `labels`: the emotion annotations
### Data Splits
The data includes train/val/test splits with 26.9k, 3.29k, and 3.37k examples, respectively.
## Dataset Creation
### Curation Rationale
From the paper abstract:
> Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to
detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a
fine-grained typology, adaptable to multiple downstream tasks.
### Source Data
#### Initial Data Collection and Normalization
Data was collected from Reddit comments via a variety of automated methods discussed in 3.1 of the paper.
#### Who are the source language producers?
English-speaking Reddit users.
### Annotations
#### Who are the annotators?
Annotations were produced by 3 English-speaking crowdworkers in India.
### Personal and Sensitive Information
This dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames
are typically disasociated from personal real-world identities, this is not always the case. It may therefore be
possible to discover the identities of the individuals who created this content in some cases.
## Considerations for Using the Data
### Social Impact of Dataset
Emotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer
interaction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases
to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance
pricing, and student attentiveness (see
[this article](https://www.unite.ai/ai-now-institute-warns-about-misuse-of-emotion-detection-software-and-other-ethical-issues/)).
### Discussion of Biases
From the authors' github page:
> Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Researchers at Amazon Alexa, Google Research, and Stanford. See the [author list](https://arxiv.org/abs/2005.00547).
### Licensing Information
The GitHub repository which houses this dataset has an
[Apache License 2.0](https://github.com/google-research/google-research/blob/master/LICENSE).
### Citation Information
```
@inproceedings{demszky2020goemotions,
  author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},
  booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},
  title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},
  year = {2020}
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
|
xjlulu/ntu_adl_intent | ---
license: apache-2.0
task_categories:
- text-classification
language:
- en
--- |
CyberHarem/tsukumo_yatsuhashi_touhou | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of tsukumo_yatsuhashi/九十九八橋 (Touhou)
This is the dataset of tsukumo_yatsuhashi/九十九八橋 (Touhou), containing 253 images and their tags.
The core tags of this character are `brown_hair, short_hair, hairband, brown_eyes, purple_hairband`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 253 | 190.77 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsukumo_yatsuhashi_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 253 | 149.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsukumo_yatsuhashi_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 452 | 250.06 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsukumo_yatsuhashi_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 253 | 183.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsukumo_yatsuhashi_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 452 | 296.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tsukumo_yatsuhashi_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tsukumo_yatsuhashi_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
Results of tag clustering; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, barefoot, black_skirt, smile, solo, white_shirt, collared_shirt, long_sleeves, simple_background, closed_mouth, eighth_note, frilled_skirt, full_body, white_background, looking_at_viewer, staff_(music), instrument |
| 1 | 15 |  |  |  |  |  | 1girl, long_sleeves, shirt, skirt, smile, solo, looking_at_viewer, eighth_note, open_mouth |
| 2 | 10 |  |  |  |  |  | 1girl, beamed_eighth_notes, black_skirt, collared_shirt, long_sleeves, looking_at_viewer, solo, white_shirt, bangs, beamed_sixteenth_notes, frilled_skirt, open_mouth, quarter_note, :d, hair_between_eyes, simple_background, staff_(music), blush, cowboy_shot, instrument |
| 3 | 17 |  |  |  |  |  | 2girls, shirt, smile, long_sleeves, open_mouth, sisters, barefoot, eighth_note, purple_hair, biwa_lute, hair_flower, looking_at_viewer, black_skirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | barefoot | black_skirt | smile | solo | white_shirt | collared_shirt | long_sleeves | simple_background | closed_mouth | eighth_note | frilled_skirt | full_body | white_background | looking_at_viewer | staff_(music) | instrument | shirt | skirt | open_mouth | beamed_eighth_notes | bangs | beamed_sixteenth_notes | quarter_note | :d | hair_between_eyes | blush | cowboy_shot | 2girls | sisters | purple_hair | biwa_lute | hair_flower |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------------|:--------|:-------|:--------------|:-----------------|:---------------|:--------------------|:---------------|:--------------|:----------------|:------------|:-------------------|:--------------------|:----------------|:-------------|:--------|:--------|:-------------|:----------------------|:--------|:-------------------------|:---------------|:-----|:--------------------|:--------|:--------------|:---------|:----------|:--------------|:------------|:--------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 1 | 15 |  |  |  |  |  | X | | | X | X | | | X | | | X | | | | X | | | X | X | X | | | | | | | | | | | | | |
| 2 | 10 |  |  |  |  |  | X | | X | | X | X | X | X | X | | | X | | | X | X | X | | | X | X | X | X | X | X | X | X | X | | | | | |
| 3 | 17 |  |  |  |  |  | | X | X | X | | | | X | | | X | | | | X | | | X | | X | | | | | | | | | X | X | X | X | X |
|
theojiang/contrastive_conditional_vid_diff_std_1_6_webvid-test | ---
dataset_info:
features:
- name: original_image
dtype: image
- name: edit_prompt
dtype: string
- name: edited_image
dtype: image
splits:
- name: train
num_bytes: 73413092.0
num_examples: 2000
download_size: 72714847
dataset_size: 73413092.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HamdanXI/arb-eng-parallel-1mill-splitted | ---
dataset_info:
features:
- name: arabic
dtype: string
- name: english
dtype: string
splits:
- name: train
num_bytes: 343460673.8616423
num_examples: 800000
- name: validation
num_bytes: 42932584.23270529
num_examples: 100000
- name: test
num_bytes: 42932584.23270529
num_examples: 100000
download_size: 240888206
dataset_size: 429325842.3270529
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
AlekseyKorshuk/light-wild-chatml | ---
dataset_info:
features:
- name: conversation
list:
- name: content
dtype: string
- name: do_train
dtype: bool
- name: role
dtype: string
splits:
- name: train
num_bytes: 17574962
num_examples: 11556
download_size: 6961878
dataset_size: 17574962
---
# Dataset Card for "light-wild-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Greich/linkedin_posts | ---
license: odbl
---
|
AmelieSchreiber/binding_sites_random_split_by_family | ---
license: mit
---
This is a refined version of a dataset obtained from UniProt ([see here](https://www.uniprot.org/uniprotkb?facets=reviewed%3Atrue%2Cproteins_with%3A9&fields=accession%2Cprotein_families%2Cft_binding%2Cft_act_site%2Csequence&query=%28family%3A*%29+AND+%28ft_binding%3A*%29&view=table)).
The data was first sorted by family, then random families were selected until approximately 20% of the data was separated out as test data.
Next, each sequence longer than 1000 residues was segmented into non-overlapping sections of 1000 amino acids or less. Any sequence
with only partial binding site annotations (i.e., containing `<`, `>`, or `?`) was discarded. |
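The filtering and chunking steps above can be sketched as follows; this is an illustrative reconstruction, not the original pipeline, and `clean_and_segment` is a hypothetical name:

```python
# Hypothetical sketch of the preprocessing described above: drop sequences
# whose binding-site annotations are only partial, and split long sequences
# into non-overlapping chunks of at most 1000 residues.

def clean_and_segment(records, max_len=1000, partial_markers=("<", ">", "?")):
    """records: iterable of (sequence, annotation) pairs."""
    out = []
    for seq, ann in records:
        # discard sequences with partial binding-site annotations
        if any(m in ann for m in partial_markers):
            continue
        # split into non-overlapping windows of at most max_len residues
        for start in range(0, len(seq), max_len):
            out.append(seq[start:start + max_len])
    return out

chunks = clean_and_segment([("A" * 2500, "BINDING 10..20"),
                            ("M" * 50, "BINDING <5..9")])
print([len(c) for c in chunks])  # [1000, 1000, 500]
```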
apollo-research/sae-Skylion007-openwebtext-tokenizer-gpt2 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 36178777200.0
num_examples: 8824092
download_size: 17731169875
dataset_size: 36178777200.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tonyassi/celebrity-1000 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Aaron Eckhart
'1': Aaron Paul
'2': Aaron Rodgers
'3': Aaron Taylor-Johnson
'4': Abbi Jacobson
'5': Abhishek Bachchan
'6': Abigail Breslin
'7': Abigail Spencer
'8': Adam Brody
'9': Adam Devine
'10': Adam Driver
'11': Adam Lambert
'12': Adam Levine
'13': Adam Sandler
'14': Adam Scott
'15': Adele
'16': Adrian Grenier
'17': Adèle Exarchopoulos
'18': Aidan Gillen
'19': Aidan Turner
'20': Aishwarya Rai
'21': Aja Naomi King
'22': Alden Ehrenreich
'23': Aldis Hodge
'24': Alec Baldwin
'25': Alex Morgan
'26': Alex Pettyfer
'27': Alex Rodriguez
'28': Alexander Skarsgård
'29': Alexandra Daddario
'30': Alfre Woodard
'31': Alia Shawkat
'32': Alice Braga
'33': Alice Eve
'34': Alicia Keys
'35': Alicia Vikander
'36': Alison Brie
'37': Allison Janney
'38': Allison Williams
'39': Alyson Hannigan
'40': Amanda Peet
'41': Amanda Seyfried
'42': Amandla Stenberg
'43': Amber Heard
'44': America Ferrera
'45': Amy Adams
'46': Amy Poehler
'47': Amy Schumer
'48': Ana de Armas
'49': Andie MacDowell
'50': Andrew Garfield
'51': Andrew Lincoln
'52': Andrew Scott
'53': Andy Garcia
'54': Andy Samberg
'55': Andy Serkis
'56': Angela Bassett
'57': Angelina Jolie
'58': Anna Camp
'59': Anna Faris
'60': Anna Kendrick
'61': Anna Paquin
'62': AnnaSophia Robb
'63': Annabelle Wallis
'64': Anne Hathaway
'65': Anne Marie
'66': Anne-Marie
'67': Ansel Elgort
'68': Anson Mount
'69': Anthony Hopkins
'70': Anthony Joshua
'71': Anthony Mackie
'72': Antonio Banderas
'73': Anya Taylor-Joy
'74': Ariana Grande
'75': Armie Hammer
'76': Ashley Judd
'77': Ashton Kutcher
'78': Aubrey Plaza
'79': Auli'i Cravalho
'80': Awkwafina
'81': Barack Obama
'82': Bella Hadid
'83': Bella Thorne
'84': Ben Barnes
'85': Ben Mendelsohn
'86': Ben Stiller
'87': Ben Whishaw
'88': Benedict Cumberbatch
'89': Benedict Wong
'90': Benicio del Toro
'91': Bill Gates
'92': Bill Hader
'93': Bill Murray
'94': Bill Pullman
'95': Bill Skarsgård
'96': Billie Eilish
'97': Billie Lourd
'98': Billy Crudup
'99': Billy Porter
'100': Blake Lively
'101': Bob Odenkirk
'102': Bonnie Wright
'103': Boyd Holbrook
'104': Brad Pitt
'105': Bradley Cooper
'106': Brendan Fraser
'107': Brian Cox
'108': Brie Larson
'109': Brittany Snow
'110': Bryan Cranston
'111': Bryce Dallas Howard
'112': Busy Philipps
'113': Caitriona Balfe
'114': Cameron Diaz
'115': Camila Cabello
'116': Camila Mendes
'117': Cardi B
'118': Carey Mulligan
'119': Carla Gugino
'120': Carrie Underwood
'121': Casey Affleck
'122': Cate Blanchett
'123': Catherine Keener
'124': Catherine Zeta-Jones
'125': Celine Dion
'126': Chace Crawford
'127': Chadwick Boseman
'128': Channing Tatum
'129': Charlie Cox
'130': Charlie Day
'131': Charlie Hunnam
'132': Charlie Plummer
'133': Charlize Theron
'134': Chiara Ferragni
'135': Chiwetel Ejiofor
'136': Chloe Bennet
'137': Chloe Grace Moretz
'138': Chloe Sevigny
'139': Chloë Grace Moretz
'140': Chloë Sevigny
'141': Chris Cooper
'142': Chris Evans
'143': Chris Hemsworth
'144': Chris Martin
'145': Chris Messina
'146': Chris Noth
'147': Chris O'Dowd
'148': Chris Pine
'149': Chris Pratt
'150': Chris Tucker
'151': Chrissy Teigen
'152': Christian Bale
'153': Christian Slater
'154': Christina Aguilera
'155': Christina Applegate
'156': Christina Hendricks
'157': Christina Milian
'158': Christina Ricci
'159': Christine Baranski
'160': Christoph Waltz
'161': Christopher Plummer
'162': Christopher Walken
'163': Cillian Murphy
'164': Claire Foy
'165': Clive Owen
'166': Clive Standen
'167': Cobie Smulders
'168': Colin Farrell
'169': Colin Firth
'170': Colin Hanks
'171': Connie Britton
'172': Conor McGregor
'173': Constance Wu
'174': Constance Zimmer
'175': Courteney Cox
'176': Cristiano Ronaldo
'177': Daisy Ridley
'178': Dak Prescott
'179': Dakota Fanning
'180': Dakota Johnson
'181': Damian Lewis
'182': Dan Stevens
'183': Danai Gurira
'184': Dane DeHaan
'185': Daniel Craig
'186': Daniel Dae Kim
'187': Daniel Day-Lewis
'188': Daniel Gillies
'189': Daniel Kaluuya
'190': Daniel Mays
'191': Daniel Radcliffe
'192': Danny DeVito
'193': Darren Criss
'194': Dave Bautista
'195': Dave Franco
'196': Dave Grohl
'197': Daveed Diggs
'198': David Attenborough
'199': David Beckham
'200': David Duchovny
'201': David Harbour
'202': David Oyelowo
'203': David Schwimmer
'204': David Tennant
'205': David Thewlis
'206': Dax Shepard
'207': Debra Messing
'208': Demi Lovato
'209': Dennis Quaid
'210': Denzel Washington
'211': Dermot Mulroney
'212': Dev Patel
'213': Diane Keaton
'214': Diane Kruger
'215': Diane Lane
'216': Diego Boneta
'217': Diego Luna
'218': Djimon Hounsou
'219': Dolly Parton
'220': Domhnall Gleeson
'221': Dominic Cooper
'222': Dominic Monaghan
'223': Dominic West
'224': Don Cheadle
'225': Donald Glover
'226': Donald Sutherland
'227': Donald Trump
'228': Dua Lipa
'229': Dwayne "The Rock" Johnson
'230': Dwayne Johnson
'231': Dylan O'Brien
'232': Ed Harris
'233': Ed Helms
'234': Ed Sheeran
'235': Eddie Murphy
'236': Eddie Redmayne
'237': Edgar Ramirez
'238': Edward Norton
'239': Eiza Gonzalez
'240': Eiza González
'241': Elijah Wood
'242': Elisabeth Moss
'243': Elisha Cuthbert
'244': Eliza Coupe
'245': Elizabeth Banks
'246': Elizabeth Debicki
'247': Elizabeth Lail
'248': Elizabeth McGovern
'249': Elizabeth Moss
'250': Elizabeth Olsen
'251': Elle Fanning
'252': Ellen DeGeneres
'253': Ellen Page
'254': Ellen Pompeo
'255': Ellie Goulding
'256': Elon Musk
'257': Emile Hirsch
'258': Emilia Clarke
'259': Emilia Fox
'260': Emily Beecham
'261': Emily Blunt
'262': Emily Browning
'263': Emily Deschanel
'264': Emily Hampshire
'265': Emily Mortimer
'266': Emily Ratajkowski
'267': Emily VanCamp
'268': Emily Watson
'269': Emma Bunton
'270': Emma Chamberlain
'271': Emma Corrin
'272': Emma Mackey
'273': Emma Roberts
'274': Emma Stone
'275': Emma Thompson
'276': Emma Watson
'277': Emmanuelle Chriqui
'278': Emmy Rossum
'279': Eoin Macken
'280': Eric Bana
'281': Ethan Hawke
'282': Eva Green
'283': Eva Longoria
'284': Eva Mendes
'285': Evan Peters
'286': Evan Rachel Wood
'287': Evangeline Lilly
'288': Ewan McGregor
'289': Ezra Miller
'290': Felicity Huffman
'291': Felicity Jones
'292': Finn Wolfhard
'293': Florence Pugh
'294': Florence Welch
'295': Forest Whitaker
'296': Freddie Highmore
'297': Freddie Prinze Jr.
'298': Freema Agyeman
'299': Freida Pinto
'300': Freya Allan
'301': Gabrielle Union
'302': Gael Garcia Bernal
'303': Gael García Bernal
'304': Gal Gadot
'305': Garrett Hedlund
'306': Gary Oldman
'307': Gemma Arterton
'308': Gemma Chan
'309': Gemma Whelan
'310': George Clooney
'311': George Lucas
'312': Gerard Butler
'313': Giancarlo Esposito
'314': Giannis Antetokounmpo
'315': Gigi Hadid
'316': Gillian Anderson
'317': Gillian Jacobs
'318': Gina Carano
'319': Gina Gershon
'320': Gina Rodriguez
'321': Ginnifer Goodwin
'322': Gisele Bundchen
'323': Glenn Close
'324': Grace Kelly
'325': Greg Kinnear
'326': Greta Gerwig
'327': Greta Scacchi
'328': Greta Thunberg
'329': Gugu Mbatha-Raw
'330': Guy Ritchie
'331': Gwen Stefani
'332': Gwendoline Christie
'333': Gwyneth Paltrow
'334': Hafthor Bjornsson
'335': Hailee Steinfeld
'336': Hailey Bieber
'337': Haley Joel Osment
'338': Halle Berry
'339': Hannah Simone
'340': Harrison Ford
'341': Harry Styles
'342': Harvey Weinstein
'343': Hayden Panettiere
'344': Hayley Atwell
'345': Helen Hunt
'346': Helen Mirren
'347': Helena Bonham Carter
'348': Henry Cavill
'349': Henry Golding
'350': Hilary Swank
'351': Himesh Patel
'352': Hozier
'353': Hugh Bonneville
'354': Hugh Dancy
'355': Hugh Grant
'356': Hugh Jackman
'357': Hugh Laurie
'358': Ian Somerhalder
'359': Idris Elba
'360': Imelda Staunton
'361': Imogen Poots
'362': Ioan Gruffudd
'363': Isabella Rossellini
'364': Isabelle Huppert
'365': Isla Fisher
'366': Issa Rae
'367': Iwan Rheon
'368': J.K. Rowling
'369': J.K. Simmons
'370': Jack Black
'371': Jack Reynor
'372': Jack Whitehall
'373': Jackie Chan
'374': Jada Pinkett Smith
'375': Jaden Smith
'376': Jaimie Alexander
'377': Jake Gyllenhaal
'378': Jake Johnson
'379': Jake T. Austin
'380': James Cameron
'381': James Corden
'382': James Franco
'383': James Marsden
'384': James McAvoy
'385': James Norton
'386': Jamie Bell
'387': Jamie Chung
'388': Jamie Dornan
'389': Jamie Foxx
'390': Jamie Lee Curtis
'391': Jamie Oliver
'392': Jane Fonda
'393': Jane Krakowski
'394': Jane Levy
'395': Jane Lynch
'396': Jane Seymour
'397': Janelle Monáe
'398': January Jones
'399': Jared Leto
'400': Jason Bateman
'401': Jason Clarke
'402': Jason Derulo
'403': Jason Isaacs
'404': Jason Momoa
'405': Jason Mraz
'406': Jason Schwartzman
'407': Jason Segel
'408': Jason Statham
'409': Jason Sudeikis
'410': Javier Bardem
'411': Jay Baruchel
'412': Jay-Z
'413': Jeff Bezos
'414': Jeff Bridges
'415': Jeff Daniels
'416': Jeff Goldblum
'417': Jeffrey Dean Morgan
'418': Jeffrey Donovan
'419': Jeffrey Wright
'420': Jemima Kirke
'421': Jenna Coleman
'422': Jenna Fischer
'423': Jenna Ortega
'424': Jennifer Aniston
'425': Jennifer Connelly
'426': Jennifer Coolidge
'427': Jennifer Esposito
'428': Jennifer Garner
'429': Jennifer Hudson
'430': Jennifer Lawrence
'431': Jennifer Lopez
'432': Jennifer Love Hewitt
'433': Jenny Slate
'434': Jeremy Irons
'435': Jeremy Renner
'436': Jeremy Strong
'437': Jerry Seinfeld
'438': Jesse Eisenberg
'439': Jesse Metcalfe
'440': Jesse Plemons
'441': Jesse Tyler Ferguson
'442': Jesse Williams
'443': Jessica Alba
'444': Jessica Biel
'445': Jessica Chastain
'446': Jessica Lange
'447': Jessie Buckley
'448': Jim Carrey
'449': Jim Parsons
'450': Joan Collins
'451': Joan Cusack
'452': Joanne Froggatt
'453': Joaquin Phoenix
'454': Jodie Comer
'455': Jodie Foster
'456': Joe Jonas
'457': Joe Keery
'458': Joel Edgerton
'459': Joel Kinnaman
'460': Joel McHale
'461': John Boyega
'462': John C. Reilly
'463': John Cena
'464': John Cho
'465': John Cleese
'466': John Corbett
'467': John David Washington
'468': John Goodman
'469': John Hawkes
'470': John Krasinski
'471': John Legend
'472': John Leguizamo
'473': John Lithgow
'474': John Malkovich
'475': John Mayer
'476': John Mulaney
'477': John Oliver
'478': John Slattery
'479': John Travolta
'480': John Turturro
'481': Johnny Depp
'482': Johnny Knoxville
'483': Jon Bernthal
'484': Jon Favreau
'485': Jon Hamm
'486': Jonah Hill
'487': Jonathan Groff
'488': Jonathan Majors
'489': Jonathan Pryce
'490': Jonathan Rhys Meyers
'491': Jordan Peele
'492': Jordana Brewster
'493': Joseph Fiennes
'494': Joseph Gordon-Levitt
'495': Josh Allen
'496': Josh Brolin
'497': Josh Gad
'498': Josh Hartnett
'499': Josh Hutcherson
'500': Josh Radnor
'501': Jude Law
'502': Judy Dench
'503': Judy Greer
'504': Julia Garner
'505': Julia Louis-Dreyfus
'506': Julia Roberts
'507': Julia Stiles
'508': Julian Casablancas
'509': Julian McMahon
'510': Julianna Margulies
'511': Julianne Hough
'512': Julianne Moore
'513': Julianne Nicholson
'514': Juliette Binoche
'515': Juliette Lewis
'516': Juno Temple
'517': Jurnee Smollett
'518': Justin Bartha
'519': Justin Bieber
'520': Justin Hartley
'521': Justin Herbert
'522': Justin Long
'523': Justin Theroux
'524': Justin Timberlake
'525': KJ Apa
'526': Kaitlyn Dever
'527': Kaley Cuoco
'528': Kanye West
'529': Karl Urban
'530': Kat Dennings
'531': Kate Beckinsale
'532': Kate Bosworth
'533': Kate Hudson
'534': Kate Mara
'535': Kate Middleton
'536': Kate Upton
'537': Kate Walsh
'538': Kate Winslet
'539': Katee Sackhoff
'540': Katherine Heigl
'541': Katherine Langford
'542': Katherine Waterston
'543': Kathryn Hahn
'544': Katie Holmes
'545': Katie McGrath
'546': Katy Perry
'547': Kaya Scodelario
'548': Keanu Reeves
'549': Keegan-Michael Key
'550': Keira Knightley
'551': Keke Palmer
'552': Kelly Clarkson
'553': Kelly Macdonald
'554': Kelly Marie Tran
'555': Kelly Reilly
'556': Kelly Ripa
'557': Kelvin Harrison Jr.
'558': Keri Russell
'559': Kerry Washington
'560': Kevin Bacon
'561': Kevin Costner
'562': Kevin Hart
'563': Kevin Spacey
'564': Ki Hong Lee
'565': Kiefer Sutherland
'566': Kieran Culkin
'567': Kiernan Shipka
'568': Kim Dickens
'569': Kim Kardashian
'570': Kirsten Dunst
'571': Kit Harington
'572': Kourtney Kardashian
'573': Kristen Bell
'574': Kristen Stewart
'575': Kristen Wiig
'576': Kristin Davis
'577': Krysten Ritter
'578': Kyle Chandler
'579': Kylie Jenner
'580': Kylie Minogue
'581': Lady Gaga
'582': Lake Bell
'583': Lakeith Stanfield
'584': Lamar Jackson
'585': Lana Del Rey
'586': Laura Dern
'587': Laura Harrier
'588': Laura Linney
'589': Laura Prepon
'590': Laurence Fishburne
'591': Laverne Cox
'592': LeBron James
'593': Lea Michele
'594': Lea Seydoux
'595': Lee Pace
'596': Leighton Meester
'597': Lena Headey
'598': Leonardo Da Vinci
'599': Leonardo DiCaprio
'600': Leslie Mann
'601': Leslie Odom Jr.
'602': Lewis Hamilton
'603': Liam Hemsworth
'604': Liam Neeson
'605': Lili Reinhart
'606': Lily Aldridge
'607': Lily Allen
'608': Lily Collins
'609': Lily James
'610': Lily Rabe
'611': Lily Tomlin
'612': Lin-Manuel Miranda
'613': Linda Cardellini
'614': Lionel Messi
'615': Lisa Bonet
'616': Lisa Kudrow
'617': Liv Tyler
'618': Lizzo
'619': Logan Lerman
'620': Lorde
'621': Lucy Boynton
'622': Lucy Hale
'623': Lucy Lawless
'624': Lucy Liu
'625': Luke Evans
'626': Luke Perry
'627': Luke Wilson
'628': Lupita Nyong'o
'629': Léa Seydoux
'630': Mackenzie Davis
'631': Madelaine Petsch
'632': Mads Mikkelsen
'633': Mae Whitman
'634': Maggie Gyllenhaal
'635': Maggie Q
'636': Maggie Siff
'637': Maggie Smith
'638': Mahershala Ali
'639': Mahira Khan
'640': Maisie Richardson-Sellers
'641': Maisie Williams
'642': Mandy Moore
'643': Mandy Patinkin
'644': Marc Anthony
'645': Margaret Qualley
'646': Margot Robbie
'647': Maria Sharapova
'648': Marion Cotillard
'649': Marisa Tomei
'650': Mariska Hargitay
'651': Mark Hamill
'652': Mark Ruffalo
'653': Mark Strong
'654': Mark Wahlberg
'655': Mark Zuckerberg
'656': Marlon Brando
'657': Martin Freeman
'658': Martin Scorsese
'659': Mary Elizabeth Winstead
'660': Mary J. Blige
'661': Mary Steenburgen
'662': Mary-Louise Parker
'663': Matt Bomer
'664': Matt Damon
'665': Matt LeBlanc
'666': Matt Smith
'667': Matthew Fox
'668': Matthew Goode
'669': Matthew Macfadyen
'670': Matthew McConaughey
'671': Matthew Perry
'672': Matthew Rhys
'673': Matthew Stafford
'674': Max Minghella
'675': Maya Angelou
'676': Maya Hawke
'677': Maya Rudolph
'678': Megan Fox
'679': Megan Rapinoe
'680': Meghan Markle
'681': Mel Gibson
'682': Melanie Lynskey
'683': Melissa Benoist
'684': Melissa McCarthy
'685': Melonie Diaz
'686': Meryl Streep
'687': Mia Wasikowska
'688': Michael B. Jordan
'689': Michael C. Hall
'690': Michael Caine
'691': Michael Cera
'692': Michael Cudlitz
'693': Michael Douglas
'694': Michael Ealy
'695': Michael Fassbender
'696': Michael Jordan
'697': Michael Keaton
'698': Michael Pena
'699': Michael Peña
'700': Michael Phelps
'701': Michael Shannon
'702': Michael Sheen
'703': Michael Stuhlbarg
'704': Michelle Dockery
'705': Michelle Monaghan
'706': Michelle Obama
'707': Michelle Pfeiffer
'708': Michelle Rodriguez
'709': Michelle Williams
'710': Michelle Yeoh
'711': Michiel Huisman
'712': Mila Kunis
'713': Miles Teller
'714': Milla Jovovich
'715': Millie Bobby Brown
'716': Milo Ventimiglia
'717': Mindy Kaling
'718': Miranda Cosgrove
'719': Miranda Kerr
'720': Mireille Enos
'721': Molly Ringwald
'722': Morgan Freeman
'723': Mélanie Laurent
'724': Naomi Campbell
'725': Naomi Harris
'726': Naomi Scott
'727': Naomi Watts
'728': Naomie Harris
'729': Nas
'730': Natalie Dormer
'731': Natalie Imbruglia
'732': Natalie Morales
'733': Natalie Portman
'734': Nathalie Emmanuel
'735': Nathalie Portman
'736': Nathan Fillion
'737': Naya Rivera
'738': Neil Patrick Harris
'739': Neil deGrasse Tyson
'740': Neve Campbell
'741': Neymar Jr.
'742': Nicholas Braun
'743': Nicholas Hoult
'744': Nick Jonas
'745': Nick Kroll
'746': Nick Offerman
'747': Nick Robinson
'748': Nicole Kidman
'749': Nikolaj Coster-Waldau
'750': Nina Dobrev
'751': Noah Centineo
'752': Noomi Rapace
'753': Norman Reedus
'754': Novak Djokovic
'755': Octavia Spencer
'756': Odessa Young
'757': Odette Annable
'758': Olivia Colman
'759': Olivia Cooke
'760': Olivia Holt
'761': Olivia Munn
'762': Olivia Wilde
'763': Oprah Winfrey
'764': Orlando Bloom
'765': Oscar Isaac
'766': Owen Wilson
'767': Pablo Picasso
'768': Patrick Dempsey
'769': Patrick Mahomes
'770': Patrick Stewart
'771': Patrick Wilson
'772': Paul Bettany
'773': Paul Dano
'774': Paul Giamatti
'775': Paul McCartney
'776': Paul Rudd
'777': Paul Wesley
'778': Paula Patton
'779': Pedro Almodóvar
'780': Pedro Pascal
'781': Penelope Cruz
'782': Penélope Cruz
'783': Pete Davidson
'784': Peter Dinklage
'785': Phoebe Dynevor
'786': Phoebe Waller-Bridge
'787': Pierce Brosnan
'788': Portia de Rossi
'789': Priyanka Chopra
'790': Quentin Tarantino
'791': Rachel Bilson
'792': Rachel Brosnahan
'793': Rachel McAdams
'794': Rachel Weisz
'795': Rafe Spall
'796': Rainn Wilson
'797': Ralph Fiennes
'798': Rami Malek
'799': Rashida Jones
'800': Ray Liotta
'801': Ray Romano
'802': Rebecca Ferguson
'803': Rebecca Hall
'804': Reese Witherspoon
'805': Regina Hall
'806': Regina King
'807': Renee Zellweger
'808': Renée Zellweger
'809': Rhys Ifans
'810': Ricardo Montalban
'811': Richard Armitage
'812': Richard Gere
'813': Richard Jenkins
'814': Richard Madden
'815': Ricky Gervais
'816': Ricky Martin
'817': Rihanna
'818': Riley Keough
'819': Rita Ora
'820': River Phoenix
'821': Riz Ahmed
'822': Rob Lowe
'823': Robert Carlyle
'824': Robert De Niro
'825': Robert Downey Jr.
'826': Robert Pattinson
'827': Robert Sheehan
'828': Robin Tunney
'829': Robin Williams
'830': Roger Federer
'831': Rooney Mara
'832': Rosamund Pike
'833': Rosario Dawson
'834': Rose Byrne
'835': Rose Leslie
'836': Roselyn Sanchez
'837': Ruby Rose
'838': Rupert Grint
'839': Russell Brand
'840': Russell Crowe
'841': Russell Wilson
'842': Ruth Bader Ginsburg
'843': Ruth Wilson
'844': Ryan Eggold
'845': Ryan Gosling
'846': Ryan Murphy
'847': Ryan Phillippe
'848': Ryan Reynolds
'849': Ryan Seacrest
'850': Salma Hayek
'851': Sam Claflin
'852': Sam Heughan
'853': Sam Rockwell
'854': Sam Smith
'855': Samara Weaving
'856': Samuel L. Jackson
'857': Sandra Bullock
'858': Sandra Oh
'859': Saoirse Ronan
'860': Sarah Gadon
'861': Sarah Hyland
'862': Sarah Jessica Parker
'863': Sarah Michelle Gellar
'864': Sarah Paulson
'865': Sarah Silverman
'866': Sarah Wayne Callies
'867': Sasha Alexander
'868': Scarlett Johansson
'869': Scott Speedman
'870': Sean Bean
'871': Sebastian Stan
'872': Selena Gomez
'873': Selma Blair
'874': Serena Williams
'875': Seth MacFarlane
'876': Seth Meyers
'877': Seth Rogen
'878': Shailene Woodley
'879': Shakira
'880': Shania Twain
'881': Sharlto Copley
'882': Shawn Mendes
'883': Shia LaBeouf
'884': Shiri Appleby
'885': Shohreh Aghdashloo
'886': Shonda Rhimes
'887': Sienna Miller
'888': Sigourney Weaver
'889': Simon Baker
'890': Simon Cowell
'891': Simon Pegg
'892': Simone Biles
'893': Sofia Boutella
'894': Sofia Vergara
'895': Sophie Turner
'896': Sophie Wessex
'897': Stanley Tucci
'898': Stephen Amell
'899': Stephen Colbert
'900': Stephen Curry
'901': Stephen Dorff
'902': Sterling K. Brown
'903': Sterling Knight
'904': Steve Carell
'905': Steven Yeun
'906': Susan Sarandon
'907': Taika Waititi
'908': Taraji P. Henson
'909': Taron Egerton
'910': Taylor Hill
'911': Taylor Kitsch
'912': Taylor Lautner
'913': Taylor Schilling
'914': Taylor Swift
'915': Teresa Palmer
'916': Terrence Howard
'917': Tessa Thompson
'918': Thandie Newton
'919': The Weeknd
'920': Theo James
'921': Thomas Brodie-Sangster
'922': Thomas Jane
'923': Tiger Woods
'924': Tilda Swinton
'925': Tim Burton
'926': Tim Cook
'927': Timothee Chalamet
'928': Timothy Olyphant
'929': Timothy Spall
'930': Timothée Chalamet
'931': Tina Fey
'932': Tobey Maguire
'933': Toby Jones
'934': Toby Kebbell
'935': Toby Regbo
'936': Tom Brady
'937': Tom Brokaw
'938': Tom Cavanagh
'939': Tom Cruise
'940': Tom Ellis
'941': Tom Felton
'942': Tom Hanks
'943': Tom Hardy
'944': Tom Hiddleston
'945': Tom Holland
'946': Tom Hollander
'947': Tom Hopper
'948': Tom Selleck
'949': Toni Collette
'950': Tony Hale
'951': Topher Grace
'952': Tracee Ellis Ross
'953': Tyra Banks
'954': Tyrese Gibson
'955': Uma Thurman
'956': Usain Bolt
'957': Uzo Aduba
'958': Vanessa Hudgens
'959': Vanessa Kirby
'960': Vera Farmiga
'961': Victoria Pedretti
'962': Viggo Mortensen
'963': Vin Diesel
'964': Vince Vaughn
'965': Vincent Cassel
'966': Vincent D'Onofrio
'967': Vincent Kartheiser
'968': Viola Davis
'969': Walton Goggins
'970': Wes Anderson
'971': Wes Bentley
'972': Whoopi Goldberg
'973': Will Ferrell
'974': Will Poulter
'975': Willem Dafoe
'976': William Jackson Harper
'977': William Shatner
'978': Winona Ryder
'979': Woody Harrelson
'980': Yara Shahidi
'981': Yvonne Strahovski
'982': Zac Efron
'983': Zach Braff
'984': Zach Galifianakis
'985': Zachary Levi
'986': Zachary Quinto
'987': Zayn Malik
'988': Zazie Beetz
'989': Zendaya
'990': Zoe Kazan
'991': Zoe Kravitz
'992': Zoe Saldana
'993': Zoey Deutch
'994': Zooey Deschanel
'995': Zoë Kravitz
'996': Zoë Saldana
splits:
- name: train
num_bytes: 193671657.464
num_examples: 18184
download_size: 190510261
dataset_size: 193671657.464
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Celebrity 1000
Top 1000 celebrities. 18,184 images. 256x256. Square cropped to face. |
ncats/EpiSet4NER-v1 | ---
annotations_creators:
- train: programmatically-generated
- val: programmatically-generated
- test: programmatically-generated, expert-validated
language_creators:
- found
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Github](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard)
- **Paper:** Pending
### Dataset Summary
EpiSet4NER is a bronze-standard dataset for epidemiological entity recognition of location, epidemiologic types (e.g. "prevalence", "annual incidence", "estimated occurrence"), and epidemiological rates (e.g. "1.7 per 1,000,000 live births", "2.1:1.000.000", "one in five million", "0.03%") created by the [Genetic and Rare Diseases Information Center (GARD)](https://rarediseases.info.nih.gov/), a program in [the National Center for Advancing Translational Sciences](https://ncats.nih.gov/), one of the 27 [National Institutes of Health](https://www.nih.gov/). It was labeled programmatically using spaCy NER and rule-based methods. This weakly-supervised teaching method allowed us to construct this imprecise dataset with minimal manual effort and achieve satisfactory performance on a multi-type token classification problem. The test set was manually corrected by 3 NCATS researchers and a GARD curator (genetic and rare disease expert). It was used to train [EpiExtract4GARD](https://huggingface.co/ncats/EpiExtract4GARD), a BioBERT-based model fine-tuned for NER.
An [example](https://pubmed.ncbi.nlm.nih.gov/24237863/) of 'train' looks as follows.
```
{
"id": "333",
"tokens": ['Conclusions', 'The', 'birth', 'prevalence', 'of', 'CLD', 'in', 'the', 'northern', 'Netherlands', 'was', '21.1/10,000', 'births', '.'],
"ner_tags": [0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 5, 6, 0],
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature that indicates sentence number.
- `tokens`: a `list` of `string` features.
- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-LOC` (1), `I-LOC` (2), `B-EPI` (3), `I-EPI` (4),`B-STAT` (5),`I-STAT` (6).
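As an illustration, the integer `ner_tags` of the 'train' example above can be decoded back into label strings with the mapping listed here (a minimal sketch, not part of any official loading code):

```python
# id -> label mapping copied from the field description above
ID2LABEL = {0: "O", 1: "B-LOC", 2: "I-LOC", 3: "B-EPI",
            4: "I-EPI", 5: "B-STAT", 6: "I-STAT"}

tokens = ['Conclusions', 'The', 'birth', 'prevalence', 'of', 'CLD', 'in',
          'the', 'northern', 'Netherlands', 'was', '21.1/10,000', 'births', '.']
ner_tags = [0, 0, 0, 3, 0, 0, 0, 0, 0, 1, 0, 5, 6, 0]

labels = [ID2LABEL[t] for t in ner_tags]
entities = [(tok, lab) for tok, lab in zip(tokens, labels) if lab != "O"]
print(entities)
# [('prevalence', 'B-EPI'), ('Netherlands', 'B-LOC'),
#  ('21.1/10,000', 'B-STAT'), ('births', 'I-STAT')]
```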
### Data Splits
|name |train |validation|test|
|---------|-----:|----:|----:|
|EpiSet \# of abstracts|456|114|50|
|EpiSet \# tokens |117888|31262|13910|
## Dataset Creation

*Figure 1:* Creation of EpiSet4NER by NIH/NCATS
Comparing the programmatically labeled test set to the manually corrected test set allowed us to measure the precision, recall, and F1 of the programmatic labeling.
*Table 1:* Programmatic labeling of EpiSet4NER
| Evaluation Level | Entity | Precision | Recall | F1 |
|:----------------:|:------------------------:|:---------:|:------:|:-----:|
| Entity-Level | Overall | 0.559 | 0.662 | 0.606 |
| | Location | 0.597 | 0.661 | 0.627 |
| | Epidemiologic Type | 0.854 | 0.911 | 0.882 |
| | Epidemiologic Rate | 0.175 | 0.255 | 0.207 |
| Token-Level | Overall | 0.805 | 0.710 | 0.755 |
| | Location | 0.868 | 0.713 | 0.783 |
| | Epidemiologic Type | 0.908 | 0.908 | 0.908 |
| | Epidemiologic Rate | 0.739 | 0.645 | 0.689 |
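The entity-level scores in Table 1 require exact-span matching between the programmatic and the manually corrected tags. A minimal pure-Python sketch of that computation (the tag sequences below are illustrative, not drawn from the actual test set):

```python
def extract_spans(tags):
    """Collect (start, end, type) spans from an IOB2 tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):        # "O" sentinel flushes the last span
        cont = tag.startswith("I-") and etype == tag[2:]
        if start is not None and not cont:
            spans.append((start, i, etype))
            start, etype = None, None
        if tag.startswith("B-") or (tag.startswith("I-") and start is None):
            start, etype = i, tag[2:]
    return set(spans)

def entity_prf(gold_tags, pred_tags):
    """Exact-span precision/recall/F1, micro-averaged over all entity types."""
    gold, pred = extract_spans(gold_tags), extract_spans(pred_tags)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = ["O", "B-EPI", "O", "B-LOC", "O",     "B-STAT", "I-STAT"]
pred = ["O", "B-EPI", "O", "B-LOC", "I-LOC", "B-STAT", "I-STAT"]
print(tuple(round(x, 3) for x in entity_prf(gold, pred)))  # (0.667, 0.667, 0.667)
```

Here the predicted LOC span has a different boundary than the gold one, so only 2 of 3 entities count as matches.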
An example of the text labeling:

*Figure 2:* Text Labeling using spaCy and rule-based labeling. Ideal labeling is bolded on the left. Actual programmatic output is on the right. [\[Figure citation\]](https://pubmed.ncbi.nlm.nih.gov/33649778/)
### Curation Rationale
To train ML/DL models that automate the process of rare disease epidemiological curation. This is crucial information for patients & families, researchers, grantors, and policy makers, primarily for funding purposes.
### Source Data
620 rare disease abstracts, covering 488 diseases, classified as epidemiological by an LSTM RNN rare disease epi classifier. See Figure 1.
#### Initial Data Collection and Normalization
A random sample of 500 disease names was gathered from a list of ~6061 rare diseases tracked by GARD; abstracts were then collected until ≥50 had been returned for each disease or the EBI RESTful API results were exhausted. Though ~25,000 abstracts were pulled from PubMed's database, only 7699 unique abstracts were returned for 488 diseases. Of those 7699 abstracts, only 620 were classified as epidemiological by the LSTM RNN epidemiological classifier.
### Annotations
#### Annotation process
Programmatic labeling. See [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/create_labeled_dataset_V2.ipynb) and then [here](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/modify_existing_labels.ipynb). The test set was manually corrected after creation.
#### Who are the annotators?
Programmatic labeling was done by [@William Kariampuzha](https://github.com/wzkariampuzha), one of the NCATS researchers.
The test set was manually corrected by 2 more NCATS researchers and a GARD curator (genetic and rare disease expert).
### Personal and Sensitive Information
None. These are freely available abstracts from PubMed.
## Considerations for Using the Data
### Social Impact of Dataset
Assisting the 25-30 million Americans with rare diseases. The dataset can additionally be useful for Orphanet or CDC researchers/curators.
### Discussion of Biases and Limitations
- There were errors in the source file of rare disease names and synonyms, which may have led to some unrelated abstracts being included in the training, validation, and test sets.
- The abstracts were gathered through the EBI API and are thus subject to any biases that the EBI API had. The NCBI API returns very different results, as shown by an API analysis here.
- The [long short-term memory recurrent neural network epi classifier](https://pubmed.ncbi.nlm.nih.gov/34457147/) was used to sift the 7699 rare disease abstracts. This model had a hold-out validation F1 score of 0.886 and a test F1 of 0.701 (evaluated against a GARD curator who used full-text articles to determine the truth value of an epidemiological abstract). With 620 epi abstracts filtered from the 7699 original rare disease abstracts, there are likely several false positive and false negative epi abstracts.
- Tokenization was done by spaCy which may be a limitation (or not) for current and future models trained on this set.
- The programmatic labeling was very imprecise as seen by Table 1. This is likely the largest limitation of the [BioBERT-based model](https://huggingface.co/ncats/EpiExtract4GARD) trained on this set.
- The test set was difficult to validate even for general NCATS researchers, which is why we relied on a rare disease expert to verify our modifications. Because identifying epidemiological information is quite difficult for non-expert humans, this set, and especially a possible future gold-standard dataset, represents a challenging benchmark for NLP systems, particularly those focusing on numeracy.
## Additional Information
### Dataset Curators
[NIH GARD](https://rarediseases.info.nih.gov/about-gard/pages/23/about-gard)
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at NCATS/Axle Informatics for adding this dataset. |
ryan0712/ultra_no_robots | ---
license: unknown
---
|
cmagganas/zero_shot_comparison | ---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: rationale
dtype: string
- name: task
dtype: string
- name: type
dtype: string
- name: decilm_generation
dtype: string
- name: mistral_generation
dtype: string
- name: mpt_generation
dtype: string
splits:
- name: train
num_bytes: 96015
num_examples: 30
download_size: 59751
dataset_size: 96015
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AnudeepPeela/starcoder-finetune | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Input
dtype: string
- name: Output
dtype: string
splits:
- name: train
num_bytes: 892
num_examples: 8
- name: test
num_bytes: 3328
num_examples: 20
download_size: 2251
dataset_size: 4220
---
# Dataset Card for "starcoder-finetune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
abhi28577/nennepedia | ---
license: openrail
task_categories:
- question-answering
language:
- en
pretty_name: nennepedia
size_categories:
- n<1K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
shi3z/anthropic_hh_rlhf_japanese | ---
license: mit
---
https://huggingface.co/datasets/Anthropic/hh-rlhf
Japanese Translation |
7eu7d7/HCP-Diffusion-datas | ---
license: apache-2.0
---
Anime prompt dataset (动漫风格数据集):
+ danbooru-160000.parquet
Natural scenes prompt dataset (真实风格数据集):
+ stable-diffusion-prompts-160000.parquet
+ stable-diffusion-prompts2-320000.parquet
Artistic style dataset (艺术风格数据集):
+ Lexica.art.parquet |
blinoff/restaurants_reviews | ---
language:
- ru
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
### Dataset Summary
The dataset contains user reviews about restaurants.
In total it contains 47,139 reviews. Each review is tagged with the <em>general</em> sentiment and with sentiments on 3 aspects: <em>food, interior, service</em>.
### Data Fields
Each sample contains the following fields:
- **review_id**;
- **general**;
- **food**;
- **interior**;
- **service**;
- **text**: review text.
### Python
```python3
import pandas as pd
df = pd.read_json('restaurants_reviews.jsonl', lines=True)
df.sample(5)
``` |
cassanof/italian-conversations | ---
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 223555130.20512977
num_examples: 115237
download_size: 113639002
dataset_size: 223555130.20512977
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
autoevaluate/autoeval-staging-eval-project-e438add5-1e56-41ec-9c26-2ad4182383b0-6260 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- autoevaluate/squad-sample
eval_info:
task: extractive_question_answering
model: autoevaluate/extractive-question-answering
metrics: []
dataset_name: autoevaluate/squad-sample
dataset_config: autoevaluate--squad-sample
dataset_split: test
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering
* Dataset: autoevaluate/squad-sample
* Config: autoevaluate--squad-sample
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
elenahuang/primary-sector-top-1k | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8620437
num_examples: 1000
download_size: 4571154
dataset_size: 8620437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "primary-sector-top-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
llm-aes/doc-storygen-v2 | ---
dataset_info:
features:
- name: worker_id
dtype: string
- name: task_id
dtype: string
- name: task_response_id
dtype: string
- name: id
dtype: int64
- name: premise
dtype: string
- name: plan1
dtype: string
- name: plan2
dtype: string
- name: Q1
dtype: string
- name: Q2
dtype: string
- name: Q3
dtype: string
- name: Q4
dtype: string
- name: Q5
dtype: string
- name: Q6
dtype: string
splits:
- name: train
num_bytes: 60995214
num_examples: 7000
download_size: 28333525
dataset_size: 60995214
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
confit/esc50-csv | ---
dataset_info:
- config_name: fold-1
features:
- name: filename
dtype: string
- name: label
dtype:
class_label:
names:
'0': dog
'1': rooster
'2': pig
'3': cow
'4': frog
'5': cat
'6': hen
'7': insects
'8': sheep
'9': crow
'10': rain
'11': sea_waves
'12': crackling_fire
'13': crickets
'14': chirping_birds
'15': water_drops
'16': wind
'17': pouring_water
'18': toilet_flush
'19': thunderstorm
'20': crying_baby
'21': sneezing
'22': clapping
'23': breathing
'24': coughing
'25': footsteps
'26': laughing
'27': brushing_teeth
'28': snoring
'29': drinking_sipping
'30': door_wood_knock
'31': mouse_click
'32': keyboard_typing
'33': door_wood_creaks
'34': can_opening
'35': washing_machine
'36': vacuum_cleaner
'37': clock_alarm
'38': clock_tick
'39': glass_breaking
'40': helicopter
'41': chainsaw
'42': siren
'43': car_horn
'44': engine
'45': train
'46': church_bells
'47': airplane
'48': fireworks
'49': hand_saw
splits:
- name: train
num_bytes: 11167
num_examples: 400
download_size: 7653
dataset_size: 11167
- config_name: fold-2
features:
- name: filename
dtype: string
- name: label
dtype:
class_label:
names:
'0': dog
'1': rooster
'2': pig
'3': cow
'4': frog
'5': cat
'6': hen
'7': insects
'8': sheep
'9': crow
'10': rain
'11': sea_waves
'12': crackling_fire
'13': crickets
'14': chirping_birds
'15': water_drops
'16': wind
'17': pouring_water
'18': toilet_flush
'19': thunderstorm
'20': crying_baby
'21': sneezing
'22': clapping
'23': breathing
'24': coughing
'25': footsteps
'26': laughing
'27': brushing_teeth
'28': snoring
'29': drinking_sipping
'30': door_wood_knock
'31': mouse_click
'32': keyboard_typing
'33': door_wood_creaks
'34': can_opening
'35': washing_machine
'36': vacuum_cleaner
'37': clock_alarm
'38': clock_tick
'39': glass_breaking
'40': helicopter
'41': chainsaw
'42': siren
'43': car_horn
'44': engine
'45': train
'46': church_bells
'47': airplane
'48': fireworks
'49': hand_saw
splits:
- name: train
num_bytes: 11335
num_examples: 400
download_size: 7627
dataset_size: 11335
- config_name: fold-3
features:
- name: filename
dtype: string
- name: label
dtype:
class_label:
names:
'0': dog
'1': rooster
'2': pig
'3': cow
'4': frog
'5': cat
'6': hen
'7': insects
'8': sheep
'9': crow
'10': rain
'11': sea_waves
'12': crackling_fire
'13': crickets
'14': chirping_birds
'15': water_drops
'16': wind
'17': pouring_water
'18': toilet_flush
'19': thunderstorm
'20': crying_baby
'21': sneezing
'22': clapping
'23': breathing
'24': coughing
'25': footsteps
'26': laughing
'27': brushing_teeth
'28': snoring
'29': drinking_sipping
'30': door_wood_knock
'31': mouse_click
'32': keyboard_typing
'33': door_wood_creaks
'34': can_opening
'35': washing_machine
'36': vacuum_cleaner
'37': clock_alarm
'38': clock_tick
'39': glass_breaking
'40': helicopter
'41': chainsaw
'42': siren
'43': car_horn
'44': engine
'45': train
'46': church_bells
'47': airplane
'48': fireworks
'49': hand_saw
splits:
- name: train
num_bytes: 11480
num_examples: 400
download_size: 7594
dataset_size: 11480
- config_name: fold-4
features:
- name: filename
dtype: string
- name: label
dtype:
class_label:
names:
'0': dog
'1': rooster
'2': pig
'3': cow
'4': frog
'5': cat
'6': hen
'7': insects
'8': sheep
'9': crow
'10': rain
'11': sea_waves
'12': crackling_fire
'13': crickets
'14': chirping_birds
'15': water_drops
'16': wind
'17': pouring_water
'18': toilet_flush
'19': thunderstorm
'20': crying_baby
'21': sneezing
'22': clapping
'23': breathing
'24': coughing
'25': footsteps
'26': laughing
'27': brushing_teeth
'28': snoring
'29': drinking_sipping
'30': door_wood_knock
'31': mouse_click
'32': keyboard_typing
'33': door_wood_creaks
'34': can_opening
'35': washing_machine
'36': vacuum_cleaner
'37': clock_alarm
'38': clock_tick
'39': glass_breaking
'40': helicopter
'41': chainsaw
'42': siren
'43': car_horn
'44': engine
'45': train
'46': church_bells
'47': airplane
'48': fireworks
'49': hand_saw
splits:
- name: train
num_bytes: 11508
num_examples: 400
download_size: 7602
dataset_size: 11508
- config_name: fold-5
features:
- name: filename
dtype: string
- name: label
dtype:
class_label:
names:
'0': dog
'1': rooster
'2': pig
'3': cow
'4': frog
'5': cat
'6': hen
'7': insects
'8': sheep
'9': crow
'10': rain
'11': sea_waves
'12': crackling_fire
'13': crickets
'14': chirping_birds
'15': water_drops
'16': wind
'17': pouring_water
'18': toilet_flush
'19': thunderstorm
'20': crying_baby
'21': sneezing
'22': clapping
'23': breathing
'24': coughing
'25': footsteps
'26': laughing
'27': brushing_teeth
'28': snoring
'29': drinking_sipping
'30': door_wood_knock
'31': mouse_click
'32': keyboard_typing
'33': door_wood_creaks
'34': can_opening
'35': washing_machine
'36': vacuum_cleaner
'37': clock_alarm
'38': clock_tick
'39': glass_breaking
'40': helicopter
'41': chainsaw
'42': siren
'43': car_horn
'44': engine
'45': train
'46': church_bells
'47': airplane
'48': fireworks
'49': hand_saw
splits:
- name: train
num_bytes: 11516
num_examples: 400
download_size: 7644
dataset_size: 11516
configs:
- config_name: fold-1
data_files:
- split: train
path: fold-1/train-*
- config_name: fold-2
data_files:
- split: train
path: fold-2/train-*
- config_name: fold-3
data_files:
- split: train
path: fold-3/train-*
- config_name: fold-4
data_files:
- split: train
path: fold-4/train-*
- config_name: fold-5
data_files:
- split: train
path: fold-5/train-*
---
|
eengel7/sentiment_analysis_training | ---
license: apache-2.0
---
|
nyunai/samvaad-hi-v1-tulu-format | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 936264338
num_examples: 101476
download_size: 403495470
dataset_size: 936264338
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
laion/laion2B-en-safety | Invalid username or password. |
nblinh63/twitter_dataset_1712693741 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 79823
num_examples: 200
download_size: 37919
dataset_size: 79823
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
medmac01/OpenHermes-2-AR | ---
dataset_info:
features:
- name: topic
dtype: 'null'
- name: conversations
dtype: string
- name: skip_prompt_formatting
dtype: bool
- name: model
dtype: 'null'
- name: avatarUrl
dtype: 'null'
- name: custom_instruction
dtype: 'null'
- name: views
dtype: float64
- name: hash
dtype: 'null'
- name: language
dtype: 'null'
- name: idx
dtype: 'null'
- name: model_name
dtype: 'null'
- name: system_prompt
dtype: 'null'
- name: title
dtype: 'null'
- name: source
dtype: string
- name: id
dtype: 'null'
- name: category
dtype: string
splits:
- name: train
num_bytes: 1907745
num_examples: 1001
download_size: 968864
dataset_size: 1907745
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bigscience-data/roots_id_wikimedia | ---
language: id
license: cc-by-sa-3.0
extra_gated_prompt: 'By accessing this dataset, you agree to abide by the BigScience
Ethical Charter. The charter can be found at:
https://hf.co/spaces/bigscience/ethical-charter'
extra_gated_fields:
I have read and agree to abide by the BigScience Ethical Charter: checkbox
---
ROOTS Subset: roots_id_wikimedia
# wikimedia_filtered
- Dataset uid: `wikimedia_filtered`
### Description
### Homepage
### Licensing
### Speaker Locations
### Sizes
- 0.0005 % of total
- 0.0835 % of id
- 0.0126 % of ca
- 0.0054 % of pt
- 0.0005 % of indic-hi
### BigScience processing steps
#### Filters applied to: id
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_id
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: ca
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_ca
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_1024
#### Filters applied to: pt
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_pt
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
#### Filters applied to: indic-hi
- filter_wiki_user_titles
- filter_wiki_non_text_type
- dedup_document
- filter_remove_empty_docs
- split_sentences_indic-hi
- dedup_template_soft
- replace_newline_with_space
- filter_small_docs_bytes_300
|
phanvancongthanh/data_part03 | ---
dataset_info:
features:
- name: smiles
dtype: string
splits:
- name: train
num_bytes: 4911356191
num_examples: 109915148
download_size: 2471976257
dataset_size: 4911356191
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "data_part03"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
plgfro/Kaggles-Galaxy-Zoo-Dataset | ---
license: apache-2.0
---
|
tyzhu/squad_qa_num_v5_full_random_permute_4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 6510358.253911806
num_examples: 4345
- name: validation
num_bytes: 343184
num_examples: 300
download_size: 1336925
dataset_size: 6853542.253911806
---
# Dataset Card for "squad_qa_num_v5_full_random_permute_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TingChen-ppmc/Changsha_Dialect_Conversational_Speech_Corpus | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: gender
dtype: string
- name: speaker_id
dtype: string
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 223664136.256
num_examples: 1488
download_size: 215320750
dataset_size: 223664136.256
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Corpus
This dataset is built from Magicdata [ASR-CCHSHDIACSC: A CHINESE CHANGSHA DIALECT CONVERSATIONAL SPEECH CORPUS](https://magichub.com/datasets/changsha-dialect-conversational-speech-corpus/)
This corpus is licensed under a [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nc-nd/4.0/). Please refer to the license for further information.
Modifications: The audio is split into sentences based on the time spans in the transcription file. Sentences that span less than 1 second are discarded. Topics of conversation are removed.
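The splitting step can be sketched as slicing the waveform by the transcription's time spans and dropping sub-second clips. A toy sketch (the 16 kHz rate matches the corpus samples; the spans here are made up):

```python
# Slice a waveform into utterances by (begin_time, end_time) spans taken
# from a transcription, and discard clips shorter than 1 second.
SR = 16000

def split_utterances(wave, spans):
    clips = []
    for begin, end in spans:
        if end - begin < 1.0:             # discard sub-second sentences
            continue
        clips.append(wave[int(begin * SR):int(end * SR)])
    return clips

wave = [0.0] * (5 * SR)                   # 5 s stand-in signal
clips = split_utterances(wave, [(0.0, 0.4), (0.5, 2.0), (2.1, 4.9)])
print([len(c) / SR for c in clips])       # [1.5, 2.8]
```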
# Usage
To load this dataset, use
```python
from datasets import load_dataset
dialect_corpus = load_dataset("TingChen-ppmc/Changsha_Dialect_Conversational_Speech_Corpus")
```
This dataset only has a train split. To split out a test split, use
```python
from datasets import load_dataset
train_split = load_dataset("TingChen-ppmc/Changsha_Dialect_Conversational_Speech_Corpus", split="train")
# where test_size=0.5 denotes that 0.5 of the dataset will be split into the test split
corpus = train_split.train_test_split(test_size=0.5)
```
A sample data would be
```python
# note this data is from the Nanchang Dialect corpus, the data format is shared
{'audio':
{'path': 'A0001_S001_0_G0001_0.WAV',
'array': array([-0.00030518, -0.00039673,
-0.00036621, ..., -0.00064087,
-0.00015259, -0.00042725]),
'sampling_rate': 16000},
'gender': '女',
'speaker_id': 'G0001',
'transcription': '北京爱数智慧语音采集'
}
```
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
freshpearYoon/vr_train_free_11 | ---
dataset_info:
features:
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: filename
dtype: string
- name: NumOfUtterance
dtype: int64
- name: text
dtype: string
- name: samplingrate
dtype: int64
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: speaker_id
dtype: string
- name: directory
dtype: string
splits:
- name: train
num_bytes: 6683391354
num_examples: 10000
download_size: 1198445542
dataset_size: 6683391354
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Rewcifer/best_outputs_3models | ---
dataset_info:
features:
- name: true_findings
dtype: string
- name: generated_texts_1
dtype: string
- name: generated_texts_2
dtype: string
- name: generated_texts_3
dtype: string
splits:
- name: train
num_bytes: 1861317
num_examples: 861
download_size: 799511
dataset_size: 1861317
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "best_outputs_3models"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kk2491/int_dataset | ---
license: apache-2.0
---
|
bigcode/the-stack-metadata | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: The-Stack-Metadata
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []
extra_gated_prompt: |-
## Terms of Use for The Stack
The Stack Metadata is a collection of additional information for and is part of The Stack dataset, - a collection of source code in over 300 programming languages. We ask that you read and acknowledge the following points before using the dataset:
1. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
2. The Stack is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack to the most recent usable version specified by the maintainers in [the following thread](https://huggingface.co/datasets/bigcode/the-stack/discussions/7). If you have questions about dataset versions and allowed uses, please also ask them in the dataset’s [community discussions](https://huggingface.co/datasets/bigcode/the-stack/discussions/new). We will also notify users via email when the latest usable version changes.
3. To host, share, or otherwise provide access to The Stack dataset, you must include [these Terms of Use](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) and require users to agree to it.
By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
---
# Dataset Card for The Stack Metadata
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Changelog](#changelog)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Usage Example](#usage-example)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
- [Terms of Use for The Stack](#terms-of-use-for-the-stack)
## Dataset Description
- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** https://arxiv.org/abs/2211.15533
- **Leaderboard:** N/A
- **Point of Contact:** contact@bigcode-project.org
### Changelog
|Release|Description|
|-|-|
|v1.1| This is the first release of the metadata. It is for The Stack v1.1|
|v1.2| Metadata dataset matching The Stack v1.2|
### Dataset Summary
This is a set of additional information for the repositories used in The Stack. It contains file paths, detected licenses, and some other information for the repositories.
### Supported Tasks and Leaderboards
The main task is to recreate repository structure from the files of The Stack. Also, the set can be used for computing statistics and custom filtering or aggregation operations on The Stack.
## Dataset Structure
### Data Fields

The set is split into buckets by repository. There are 944 buckets. In addition to the fields in the image, `ri` contains `min_repo_event_datetime`, which is the earliest date and time of an event for a repo after Jan 1 2015.

As an example of an aggregation operation on The Stack, the image above conceptually shows a selection of stars (and issue and PR counts) for a file. Each unique file can be part of multiple repositories, so The Stack releases unique files and aggregates meta information (e.g. stars) from all repositories a file belongs to. For example, for max_stars_count we take the maximum number of stars across all repositories the file is part of.
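Conceptually, that aggregation is a group-by over per-repository rows keyed on the file's `hexsha`. A toy pure-Python sketch (the rows and field names below are illustrative, not the actual schema):

```python
# Toy rows: one entry per (file, repository) membership.
rows = [
    {"hexsha": "aaa", "repo": "user/repo1", "stars": 12},
    {"hexsha": "aaa", "repo": "user/repo2", "stars": 340},
    {"hexsha": "bbb", "repo": "user/repo2", "stars": 340},
]

max_stars = {}
for row in rows:                          # group by unique file content (hexsha)
    h = row["hexsha"]
    max_stars[h] = max(max_stars.get(h, 0), row["stars"])

print(max_stars)  # {'aaa': 340, 'bbb': 340}
```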
The metadata allows you to reconstruct repository directory structures. To do this, for each repository in the `ri` table, take all of its files from the `fi` table, find them in The Stack by the file's `hexsha`, and save each file's content under its path from the `fi` table. For speed it is preferable to index The Stack by hexsha first.
### Usage Example
Restore the folder structure for Python files in the numpy repository
```python
import datasets
from pathlib import Path
from tqdm.auto import tqdm
import pandas as pd
# assuming metadata is cloned into the local folder /data/hf_repos/the-stack-metadata
# the stack is cloned into the local folder /data/hf_repos/the-stack-v1.1
# destination folder is in /repo_workdir/numpy_restored
the_stack_meta_path = Path('/data/hf_repos/the-stack-metadata')
the_stack_path = Path('/data/hf_repos/the-stack-v1.1')
repo_dst_root = Path('/repo_workdir/numpy_restored')
repo_name = 'numpy/numpy'
# Get bucket with numpy repo info
# meta_bucket_path = None
#for fn in tqdm(list((the_stack_meta_path/'data').glob('*/ri.parquet'))):
# df = pd.read_parquet(fn)
# if any(df['name'] == repo_name):
# meta_bucket_path = fn
# break
meta_bucket_path = the_stack_meta_path / 'data/255_944'
# Get repository id from repo name
ri_id = pd.read_parquet(
meta_bucket_path / 'ri.parquet'
).query(
f'`name` == "{repo_name}"'
)['id'].to_list()[0]
# Get file information for the repository
files_info = pd.read_parquet(
meta_bucket_path / 'fi.parquet'
).query(
f'`ri_id` == {ri_id} and `size` != 0 and `is_deleted` == False'
)
# Convert DF with files information to a dictionary by language and then file hexsha
# there can be more than one file with the same hexsha in the repo so we gather
# all instances per unique hexsha
files_info_dict = {
k: v[['hexsha', 'path']].groupby('hexsha').apply(lambda x: list(x['path'])).to_dict()
for k, v in files_info.groupby('lang_ex')
}
# Load Python part of The Stack
ds = datasets.load_dataset(
str(the_stack_path/'data/python'),
num_proc=10, ignore_verifications=True
)
# Save the content of the python files in the numpy repository in their appropriate locations
def save_file_content(example, files_info_dict, repo_dst_root):
if example['hexsha'] in files_info_dict:
for el in files_info_dict[example['hexsha']]:
path = repo_dst_root / el
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(example['content'])
ds.map(
save_file_content,
fn_kwargs={'files_info_dict': files_info_dict['Python'], 'repo_dst_root': repo_dst_root},
num_proc=10
)
```
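The two least obvious steps above — the hexsha-to-paths grouping and the write-out — can be exercised in isolation. In this sketch the `lang_ex`, `hexsha`, and `path` column names mirror the metadata schema from the snippet; all values are made up for illustration:

```python
import tempfile
from pathlib import Path

import pandas as pd

# Toy files-information frame with the columns used above (illustrative values).
files_info = pd.DataFrame({
    'lang_ex': ['Python', 'Python', 'Python', 'C'],
    'hexsha':  ['aaa', 'aaa', 'bbb', 'ccc'],
    'path':    ['setup.py', 'tools/setup.py', 'numpy/core.py', 'core/umath.c'],
})

# Same grouping as in the snippet: language -> hexsha -> all paths with that hexsha.
files_info_dict = {
    lang: sub[['hexsha', 'path']].groupby('hexsha').apply(lambda x: list(x['path'])).to_dict()
    for lang, sub in files_info.groupby('lang_ex')
}
assert files_info_dict['Python'] == {'aaa': ['setup.py', 'tools/setup.py'],
                                     'bbb': ['numpy/core.py']}

# Same write-out logic as save_file_content, applied to one fake dataset row.
repo_dst_root = Path(tempfile.mkdtemp())
example = {'hexsha': 'aaa', 'content': 'from setuptools import setup\n'}
for el in files_info_dict['Python'].get(example['hexsha'], []):
    path = repo_dst_root / el
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(example['content'])

print(sorted(p.relative_to(repo_dst_root).as_posix() for p in repo_dst_root.rglob('*.py')))
# → ['setup.py', 'tools/setup.py']
```

Grouping by hexsha first matters because the same blob can appear at several paths in a repository; writing the content once per recorded path restores every copy.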
## Dataset Creation
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#dataset-creation) in The Stack.
## Considerations for Using the Data
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#considerations-for-using-the-data) in The Stack.
## Additional Information
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#additional-information) in The Stack.
## Terms of Use for The Stack
Please refer to [the section](https://huggingface.co/datasets/bigcode/the-stack#terms-of-use-for-the-stack) in The Stack. |
davanstrien/WikiMuTe | ---
configs:
- config_name: default
data_files:
- split: train
path: all.csv
- config_name: self-filtering
data_files:
- split: train
path: filtered_sf.csv
- config_name: MusicCaps-filtering
data_files:
- split: train
path: filtered_mc.csv
license: cc-by-sa-3.0
language:
- en
--- |
open-llm-leaderboard/details_Weyaxi__Luban-Marcoroni-13B-v3 | ---
pretty_name: Evaluation run of Weyaxi/Luban-Marcoroni-13B-v3
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Weyaxi/Luban-Marcoroni-13B-v3](https://huggingface.co/Weyaxi/Luban-Marcoroni-13B-v3)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 64 configurations, each one corresponding to one of the\
  \ evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
  \ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__Luban-Marcoroni-13B-v3\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
  These are the [latest results from run 2023-10-29T14:08:44.787529](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Luban-Marcoroni-13B-v3/blob/main/results_2023-10-29T14-08-44.787529.json) (note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.00776006711409396,\n\
\ \"em_stderr\": 0.0008986296432392762,\n \"f1\": 0.10252936241610805,\n\
\ \"f1_stderr\": 0.0019829740048614144,\n \"acc\": 0.4340313659926291,\n\
\ \"acc_stderr\": 0.010044205768767243\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.00776006711409396,\n \"em_stderr\": 0.0008986296432392762,\n\
\ \"f1\": 0.10252936241610805,\n \"f1_stderr\": 0.0019829740048614144\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09931766489764973,\n \
\ \"acc_stderr\": 0.008238371412683977\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7687450670876085,\n \"acc_stderr\": 0.01185004012485051\n\
\ }\n}\n```"
repo_url: https://huggingface.co/Weyaxi/Luban-Marcoroni-13B-v3
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|arc:challenge|25_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_drop_3
data_files:
- split: 2023_10_29T14_08_44.787529
path:
- '**/details_harness|drop|3_2023-10-29T14-08-44.787529.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-10-29T14-08-44.787529.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_10_29T14_08_44.787529
path:
- '**/details_harness|gsm8k|5_2023-10-29T14-08-44.787529.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-10-29T14-08-44.787529.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hellaswag|10_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T22-12-25.570871.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T22-12-25.570871.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-09-13T22-12-25.570871.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_10_29T14_08_44.787529
path:
- '**/details_harness|winogrande|5_2023-10-29T14-08-44.787529.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-10-29T14-08-44.787529.parquet'
- config_name: results
data_files:
- split: 2023_09_13T22_12_25.570871
path:
- results_2023-09-13T22-12-25.570871.parquet
- split: 2023_10_29T14_08_44.787529
path:
- results_2023-10-29T14-08-44.787529.parquet
- split: latest
path:
- results_2023-10-29T14-08-44.787529.parquet
---
# Dataset Card for Evaluation run of Weyaxi/Luban-Marcoroni-13B-v3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Weyaxi/Luban-Marcoroni-13B-v3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Weyaxi/Luban-Marcoroni-13B-v3](https://huggingface.co/Weyaxi/Luban-Marcoroni-13B-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__Luban-Marcoroni-13B-v3",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-29T14:08:44.787529](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Luban-Marcoroni-13B-v3/blob/main/results_2023-10-29T14-08-44.787529.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.00776006711409396,
"em_stderr": 0.0008986296432392762,
"f1": 0.10252936241610805,
"f1_stderr": 0.0019829740048614144,
"acc": 0.4340313659926291,
"acc_stderr": 0.010044205768767243
},
"harness|drop|3": {
"em": 0.00776006711409396,
"em_stderr": 0.0008986296432392762,
"f1": 0.10252936241610805,
"f1_stderr": 0.0019829740048614144
},
"harness|gsm8k|5": {
"acc": 0.09931766489764973,
"acc_stderr": 0.008238371412683977
},
"harness|winogrande|5": {
"acc": 0.7687450670876085,
"acc_stderr": 0.01185004012485051
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
lener_br | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pt
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: lener-br
pretty_name: leNER-br
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-ORGANIZACAO
'2': I-ORGANIZACAO
'3': B-PESSOA
'4': I-PESSOA
'5': B-TEMPO
'6': I-TEMPO
'7': B-LOCAL
'8': I-LOCAL
'9': B-LEGISLACAO
'10': I-LEGISLACAO
'11': B-JURISPRUDENCIA
'12': I-JURISPRUDENCIA
config_name: lener_br
splits:
- name: train
num_bytes: 3984189
num_examples: 7828
- name: validation
num_bytes: 719433
num_examples: 1177
- name: test
num_bytes: 823708
num_examples: 1390
download_size: 2983137
dataset_size: 5527330
tags:
- legal
---
# Dataset Card for leNER-br
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [leNER-BR homepage](https://cic.unb.br/~teodecampos/LeNER-Br/)
- **Repository:** [leNER-BR repository](https://github.com/peluz/lener-br)
- **Paper:** [LeNER-Br: a Dataset for Named Entity Recognition in Brazilian Legal Text](https://cic.unb.br/~teodecampos/LeNER-Br/luz_etal_propor2018.pdf)
- **Point of Contact:** [Pedro H. Luz de Araujo](mailto:pedrohluzaraujo@gmail.com)
### Dataset Summary
LeNER-Br is a Portuguese language dataset for named entity recognition
applied to legal documents. LeNER-Br consists entirely of manually annotated
legislation and legal cases texts and contains tags for persons, locations,
time entities, organizations, legislation and legal cases.
To compose the dataset, 66 legal documents from several Brazilian Courts were
collected. Courts of superior and state levels were considered, such as Supremo
Tribunal Federal, Superior Tribunal de Justiça, Tribunal de Justiça de Minas
Gerais and Tribunal de Contas da União. In addition, four legislation documents
were collected, such as "Lei Maria da Penha", giving a total of 70 documents.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The language supported is Portuguese.
## Dataset Structure
### Data Instances
An example from the dataset looks as follows:
```
{
"id": "0",
"ner_tags": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0],
"tokens": [
"EMENTA", ":", "APELAÇÃO", "CÍVEL", "-", "AÇÃO", "DE", "INDENIZAÇÃO", "POR", "DANOS", "MORAIS", "-", "PRELIMINAR", "-", "ARGUIDA", "PELO", "MINISTÉRIO", "PÚBLICO", "EM", "GRAU", "RECURSAL"]
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-PESSOA", "I-PESSOA", "B-TEMPO", "I-TEMPO", "B-LOCAL", "I-LOCAL", "B-LEGISLACAO", "I-LEGISLACAO", "B-JURISPRUDENCIA", "I-JURISPRUDENCIA"
```
The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word.
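The BIO scheme above can be decoded into entity spans with a few lines of plain Python. The tag list and the sample below are taken directly from this card; the helper function `bio_to_spans` is an illustrative sketch, not part of the dataset loader.

```python
# Decode BIO-tagged tokens from leNER-Br into (entity_type, text) spans.
# LABELS reproduces the tag list from this card; indices match `ner_tags`.
LABELS = [
    "O", "B-ORGANIZACAO", "I-ORGANIZACAO", "B-PESSOA", "I-PESSOA",
    "B-TEMPO", "I-TEMPO", "B-LOCAL", "I-LOCAL", "B-LEGISLACAO",
    "I-LEGISLACAO", "B-JURISPRUDENCIA", "I-JURISPRUDENCIA",
]

def bio_to_spans(tokens, ner_tags):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    spans = []
    current_type, current_tokens = None, []
    for token, tag_id in zip(tokens, ner_tags):
        tag = LABELS[tag_id]
        if tag.startswith("B-"):
            # A B- tag starts a new span, closing any open one.
            if current_type:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_type == tag[2:]:
            # An I- tag of the same type extends the open span.
            current_tokens.append(token)
        else:
            # An O tag (or a mismatched I-) closes any open span.
            if current_type:
                spans.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        spans.append((current_type, " ".join(current_tokens)))
    return spans

# The data instance shown earlier on this card:
tokens = ["EMENTA", ":", "APELAÇÃO", "CÍVEL", "-", "AÇÃO", "DE",
          "INDENIZAÇÃO", "POR", "DANOS", "MORAIS", "-", "PRELIMINAR",
          "-", "ARGUIDA", "PELO", "MINISTÉRIO", "PÚBLICO", "EM",
          "GRAU", "RECURSAL"]
ner_tags = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0]

print(bio_to_spans(tokens, ner_tags))
# [('ORGANIZACAO', 'MINISTÉRIO PÚBLICO')]
```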
### Data Splits
The data is split into train, validation and test set. The split sizes are as follow:
| Train | Val | Test |
| ------ | ----- | ---- |
| 7828 | 1177 | 1390 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{luz_etal_propor2018,
author = {Pedro H. {Luz de Araujo} and Te\'{o}filo E. {de Campos} and
Renato R. R. {de Oliveira} and Matheus Stauffer and
Samuel Couto and Paulo Bermejo},
title = {{LeNER-Br}: a Dataset for Named Entity Recognition in {Brazilian} Legal Text},
booktitle = {International Conference on the Computational Processing of Portuguese ({PROPOR})},
publisher = {Springer},
series = {Lecture Notes on Computer Science ({LNCS})},
pages = {313--323},
year = {2018},
month = {September 24-26},
address = {Canela, RS, Brazil},
doi = {10.1007/978-3-319-99722-3_32},
url = {https://cic.unb.br/~teodecampos/LeNER-Br/},
}
```
### Contributions
Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset. |
natmin322/28k_vietnamese_voice_augmented_of_VigBigData | ---
configs:
- config_name: default
data_files:
- split: train_1
path: data/train_1-*
- split: train_2
path: data/train_2-*
- split: train_3
path: data/train_3-*
- split: train_4
path: data/train_4-*
- split: train_5
path: data/train_5-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train_1
num_bytes: 1433691842.0
num_examples: 5000
- name: train_2
num_bytes: 1026073200.0
num_examples: 5000
- name: train_3
num_bytes: 1113535830.0
num_examples: 5000
- name: train_4
num_bytes: 1489647293.0
num_examples: 5000
- name: train_5
num_bytes: 1416405046.0
num_examples: 5000
- name: test
num_bytes: 886300388.18
num_examples: 3005
download_size: 6939675259
dataset_size: 7365653599.18
---
# Dataset Card for "28k_vietnamese_voice_augmented_of_VigBigData"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
one-sec-cv12/chunk_112 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 24829848720.625
num_examples: 258515
download_size: 22670676655
dataset_size: 24829848720.625
---
# Dataset Card for "chunk_112"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/909d509a | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 221
num_examples: 10
download_size: 1393
dataset_size: 221
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "909d509a"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mHossain/merge_new_para_detection_data_v5 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 8837101.8
num_examples: 50400
- name: test
num_bytes: 981900.2
num_examples: 5600
download_size: 4451360
dataset_size: 9819002.0
---
# Dataset Card for "merge_new_para_detection_data_v5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
xrizs/test.v83i.coco-segmentation | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
dataset_info:
features:
- name: image
dtype: image
- name: annotation
dtype: image
splits:
- name: train
num_bytes: 815324785.5
num_examples: 1814
- name: val
num_bytes: 205298969.0
num_examples: 453
download_size: 1020036030
dataset_size: 1020623754.5
---
# Dataset Card for "test.v83i.coco-segmentation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Elicke/Epicdarkguy | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
splits:
- name: train
num_bytes: 793929.0
num_examples: 1
download_size: 776077
dataset_size: 793929.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kgr123/quality_counter_3000 | ---
dataset_info:
features:
- name: context
dtype: string
- name: word
dtype: string
- name: claim
dtype: string
- name: label
dtype: int64
splits:
- name: test
num_bytes: 16636623
num_examples: 1929
- name: train
num_bytes: 16474873
num_examples: 1935
- name: validation
num_bytes: 16809536
num_examples: 1941
download_size: 11133065
dataset_size: 49921032
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
TinyPixel/based_0 | ---
dataset_info:
features:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 53106
num_examples: 352
download_size: 0
dataset_size: 53106
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wizard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_saucam__mistral-orpo-beta-NeuralBeagle14-7B-dare-ties | ---
pretty_name: Evaluation run of saucam/mistral-orpo-beta-NeuralBeagle14-7B-dare-ties
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [saucam/mistral-orpo-beta-NeuralBeagle14-7B-dare-ties](https://huggingface.co/saucam/mistral-orpo-beta-NeuralBeagle14-7B-dare-ties)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_saucam__mistral-orpo-beta-NeuralBeagle14-7B-dare-ties\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-03-21T11:57:13.922311](https://huggingface.co/datasets/open-llm-leaderboard/details_saucam__mistral-orpo-beta-NeuralBeagle14-7B-dare-ties/blob/main/results_2024-03-21T11-57-13.922311.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.64885588006924,\n\
\ \"acc_stderr\": 0.03206109245911583,\n \"acc_norm\": 0.6502812247567149,\n\
\ \"acc_norm_stderr\": 0.03270979476504535,\n \"mc1\": 0.3659730722154223,\n\
\ \"mc1_stderr\": 0.016862941684088376,\n \"mc2\": 0.5386604944982498,\n\
\ \"mc2_stderr\": 0.014994373950114138\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6262798634812287,\n \"acc_stderr\": 0.014137708601759088,\n\
\ \"acc_norm\": 0.6672354948805461,\n \"acc_norm_stderr\": 0.013769863046192307\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6683927504481179,\n\
\ \"acc_stderr\": 0.004698285350019214,\n \"acc_norm\": 0.859788886675961,\n\
\ \"acc_norm_stderr\": 0.003464963379379926\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621503,\n \
\ \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621503\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6518518518518519,\n\
\ \"acc_stderr\": 0.04115324610336953,\n \"acc_norm\": 0.6518518518518519,\n\
\ \"acc_norm_stderr\": 0.04115324610336953\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.038234289699266046,\n\
\ \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.038234289699266046\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n\
\ \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \
\ \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.720754716981132,\n \"acc_stderr\": 0.027611163402399715,\n\
\ \"acc_norm\": 0.720754716981132,\n \"acc_norm_stderr\": 0.027611163402399715\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n\
\ \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n\
\ \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \
\ \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n\
\ \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6936416184971098,\n\
\ \"acc_stderr\": 0.03514942551267438,\n \"acc_norm\": 0.6936416184971098,\n\
\ \"acc_norm_stderr\": 0.03514942551267438\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.04897104952726366,\n\
\ \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.04897104952726366\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n\
\ \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.5829787234042553,\n \"acc_stderr\": 0.03223276266711712,\n\
\ \"acc_norm\": 0.5829787234042553,\n \"acc_norm_stderr\": 0.03223276266711712\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n\
\ \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.42063492063492064,\n \"acc_stderr\": 0.025424835086924006,\n \"\
acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.025424835086924006\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4126984126984127,\n\
\ \"acc_stderr\": 0.04403438954768176,\n \"acc_norm\": 0.4126984126984127,\n\
\ \"acc_norm_stderr\": 0.04403438954768176\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \
\ \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7870967741935484,\n\
\ \"acc_stderr\": 0.02328766512726855,\n \"acc_norm\": 0.7870967741935484,\n\
\ \"acc_norm_stderr\": 0.02328766512726855\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n\
\ \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\"\
: 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7878787878787878,\n \"acc_stderr\": 0.029126522834586818,\n \"\
acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.029126522834586818\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8808290155440415,\n \"acc_stderr\": 0.02338193534812142,\n\
\ \"acc_norm\": 0.8808290155440415,\n \"acc_norm_stderr\": 0.02338193534812142\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657266,\n\
\ \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657266\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.35555555555555557,\n \"acc_stderr\": 0.029185714949857413,\n \
\ \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.029185714949857413\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.030176808288974337,\n\
\ \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.030176808288974337\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"\
acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8330275229357799,\n \"acc_stderr\": 0.01599015488507338,\n \"\
acc_norm\": 0.8330275229357799,\n \"acc_norm_stderr\": 0.01599015488507338\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5185185185185185,\n \"acc_stderr\": 0.03407632093854051,\n \"\
acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.03407632093854051\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8137254901960784,\n \"acc_stderr\": 0.027325470966716312,\n \"\
acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.027325470966716312\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7848101265822784,\n \"acc_stderr\": 0.026750826994676166,\n \
\ \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.026750826994676166\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7130044843049327,\n\
\ \"acc_stderr\": 0.030360379710291954,\n \"acc_norm\": 0.7130044843049327,\n\
\ \"acc_norm_stderr\": 0.030360379710291954\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.816793893129771,\n \"acc_stderr\": 0.03392770926494733,\n\
\ \"acc_norm\": 0.816793893129771,\n \"acc_norm_stderr\": 0.03392770926494733\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"\
acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n\
\ \"acc_stderr\": 0.04236511258094633,\n \"acc_norm\": 0.7407407407407407,\n\
\ \"acc_norm_stderr\": 0.04236511258094633\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n\
\ \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n\
\ \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n\
\ \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n\
\ \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n\
\ \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n\
\ \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \
\ \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8314176245210728,\n\
\ \"acc_stderr\": 0.013387895731543604,\n \"acc_norm\": 0.8314176245210728,\n\
\ \"acc_norm_stderr\": 0.013387895731543604\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.02410571260775431,\n\
\ \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.02410571260775431\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.34301675977653634,\n\
\ \"acc_stderr\": 0.015876912673057745,\n \"acc_norm\": 0.34301675977653634,\n\
\ \"acc_norm_stderr\": 0.015876912673057745\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292456,\n\
\ \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292456\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n\
\ \"acc_stderr\": 0.025494259350694912,\n \"acc_norm\": 0.7202572347266881,\n\
\ \"acc_norm_stderr\": 0.025494259350694912\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7808641975308642,\n \"acc_stderr\": 0.023016705640262196,\n\
\ \"acc_norm\": 0.7808641975308642,\n \"acc_norm_stderr\": 0.023016705640262196\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \
\ \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4576271186440678,\n\
\ \"acc_stderr\": 0.01272429655098019,\n \"acc_norm\": 0.4576271186440678,\n\
\ \"acc_norm_stderr\": 0.01272429655098019\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6691176470588235,\n \"acc_stderr\": 0.028582709753898452,\n\
\ \"acc_norm\": 0.6691176470588235,\n \"acc_norm_stderr\": 0.028582709753898452\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.673202614379085,\n \"acc_stderr\": 0.01897542792050721,\n \
\ \"acc_norm\": 0.673202614379085,\n \"acc_norm_stderr\": 0.01897542792050721\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n\
\ \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n\
\ \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142773,\n\
\ \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142773\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8606965174129353,\n\
\ \"acc_stderr\": 0.024484487162913973,\n \"acc_norm\": 0.8606965174129353,\n\
\ \"acc_norm_stderr\": 0.024484487162913973\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \
\ \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n\
\ \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3659730722154223,\n\
\ \"mc1_stderr\": 0.016862941684088376,\n \"mc2\": 0.5386604944982498,\n\
\ \"mc2_stderr\": 0.014994373950114138\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8121546961325967,\n \"acc_stderr\": 0.010977481103435091\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6338134950720242,\n \
\ \"acc_stderr\": 0.013270100238748835\n }\n}\n```"
repo_url: https://huggingface.co/saucam/mistral-orpo-beta-NeuralBeagle14-7B-dare-ties
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|arc:challenge|25_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|gsm8k|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hellaswag|10_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T11-57-13.922311.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-03-21T11-57-13.922311.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- '**/details_harness|winogrande|5_2024-03-21T11-57-13.922311.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-03-21T11-57-13.922311.parquet'
- config_name: results
data_files:
- split: 2024_03_21T11_57_13.922311
path:
- results_2024-03-21T11-57-13.922311.parquet
- split: latest
path:
- results_2024-03-21T11-57-13.922311.parquet
---
# Dataset Card for Evaluation run of saucam/mistral-orpo-beta-NeuralBeagle14-7B-dare-ties
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [saucam/mistral-orpo-beta-NeuralBeagle14-7B-dare-ties](https://huggingface.co/saucam/mistral-orpo-beta-NeuralBeagle14-7B-dare-ties) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_saucam__mistral-orpo-beta-NeuralBeagle14-7B-dare-ties",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-03-21T11:57:13.922311](https://huggingface.co/datasets/open-llm-leaderboard/details_saucam__mistral-orpo-beta-NeuralBeagle14-7B-dare-ties/blob/main/results_2024-03-21T11-57-13.922311.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.64885588006924,
"acc_stderr": 0.03206109245911583,
"acc_norm": 0.6502812247567149,
"acc_norm_stderr": 0.03270979476504535,
"mc1": 0.3659730722154223,
"mc1_stderr": 0.016862941684088376,
"mc2": 0.5386604944982498,
"mc2_stderr": 0.014994373950114138
},
"harness|arc:challenge|25": {
"acc": 0.6262798634812287,
"acc_stderr": 0.014137708601759088,
"acc_norm": 0.6672354948805461,
"acc_norm_stderr": 0.013769863046192307
},
"harness|hellaswag|10": {
"acc": 0.6683927504481179,
"acc_stderr": 0.004698285350019214,
"acc_norm": 0.859788886675961,
"acc_norm_stderr": 0.003464963379379926
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621503,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621503
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.04115324610336953,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.04115324610336953
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6710526315789473,
"acc_stderr": 0.038234289699266046,
"acc_norm": 0.6710526315789473,
"acc_norm_stderr": 0.038234289699266046
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.720754716981132,
"acc_stderr": 0.027611163402399715,
"acc_norm": 0.720754716981132,
"acc_norm_stderr": 0.027611163402399715
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.03514942551267438,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.03514942551267438
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.04897104952726366,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.04897104952726366
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5829787234042553,
"acc_stderr": 0.03223276266711712,
"acc_norm": 0.5829787234042553,
"acc_norm_stderr": 0.03223276266711712
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42063492063492064,
"acc_stderr": 0.025424835086924006,
"acc_norm": 0.42063492063492064,
"acc_norm_stderr": 0.025424835086924006
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.04403438954768176,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.04403438954768176
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7870967741935484,
"acc_stderr": 0.02328766512726855,
"acc_norm": 0.7870967741935484,
"acc_norm_stderr": 0.02328766512726855
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.029126522834586818,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.029126522834586818
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8808290155440415,
"acc_stderr": 0.02338193534812142,
"acc_norm": 0.8808290155440415,
"acc_norm_stderr": 0.02338193534812142
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657266,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657266
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35555555555555557,
"acc_stderr": 0.029185714949857413,
"acc_norm": 0.35555555555555557,
"acc_norm_stderr": 0.029185714949857413
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6848739495798319,
"acc_stderr": 0.030176808288974337,
"acc_norm": 0.6848739495798319,
"acc_norm_stderr": 0.030176808288974337
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8330275229357799,
"acc_stderr": 0.01599015488507338,
"acc_norm": 0.8330275229357799,
"acc_norm_stderr": 0.01599015488507338
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.03407632093854051,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.03407632093854051
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8137254901960784,
"acc_stderr": 0.027325470966716312,
"acc_norm": 0.8137254901960784,
"acc_norm_stderr": 0.027325470966716312
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.026750826994676166,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.026750826994676166
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7130044843049327,
"acc_stderr": 0.030360379710291954,
"acc_norm": 0.7130044843049327,
"acc_norm_stderr": 0.030360379710291954
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.816793893129771,
"acc_stderr": 0.03392770926494733,
"acc_norm": 0.816793893129771,
"acc_norm_stderr": 0.03392770926494733
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.04236511258094633,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.04236511258094633
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8314176245210728,
"acc_stderr": 0.013387895731543604,
"acc_norm": 0.8314176245210728,
"acc_norm_stderr": 0.013387895731543604
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.02410571260775431,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.02410571260775431
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.34301675977653634,
"acc_stderr": 0.015876912673057745,
"acc_norm": 0.34301675977653634,
"acc_norm_stderr": 0.015876912673057745
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.738562091503268,
"acc_stderr": 0.025160998214292456,
"acc_norm": 0.738562091503268,
"acc_norm_stderr": 0.025160998214292456
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.025494259350694912,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.025494259350694912
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7808641975308642,
"acc_stderr": 0.023016705640262196,
"acc_norm": 0.7808641975308642,
"acc_norm_stderr": 0.023016705640262196
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4576271186440678,
"acc_stderr": 0.01272429655098019,
"acc_norm": 0.4576271186440678,
"acc_norm_stderr": 0.01272429655098019
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6691176470588235,
"acc_stderr": 0.028582709753898452,
"acc_norm": 0.6691176470588235,
"acc_norm_stderr": 0.028582709753898452
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.673202614379085,
"acc_stderr": 0.01897542792050721,
"acc_norm": 0.673202614379085,
"acc_norm_stderr": 0.01897542792050721
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142773,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142773
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8606965174129353,
"acc_stderr": 0.024484487162913973,
"acc_norm": 0.8606965174129353,
"acc_norm_stderr": 0.024484487162913973
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640038,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640038
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3659730722154223,
"mc1_stderr": 0.016862941684088376,
"mc2": 0.5386604944982498,
"mc2_stderr": 0.014994373950114138
},
"harness|winogrande|5": {
"acc": 0.8121546961325967,
"acc_stderr": 0.010977481103435091
},
"harness|gsm8k|5": {
"acc": 0.6338134950720242,
"acc_stderr": 0.013270100238748835
}
}
```
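Results files like the one above are plain JSON, so they can be post-processed directly. The snippet below is a minimal sketch (not part of the eval harness itself) that recomputes the mean accuracy over the `hendrycksTest` (MMLU) subtasks from a dict shaped like the one shown; the three inlined entries are just a toy subset of the full results:

```python
# Minimal sketch: recompute the mean MMLU accuracy from a results dict
# shaped like the JSON above (task name -> metrics dict).
# The three entries below are a toy subset, not the full results.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.32},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6518518518518519},
    "harness|arc:challenge|25": {"acc": 0.6262798634812287},
}

# Keep only the MMLU (hendrycksTest) subtasks.
mmlu_accs = [
    metrics["acc"]
    for task, metrics in results.items()
    if task.startswith("harness|hendrycksTest-")
]

mmlu_mean = sum(mmlu_accs) / len(mmlu_accs)
print(f"{len(mmlu_accs)} MMLU subtasks, mean acc = {mmlu_mean:.4f}")
```

The same pattern extends to `acc_norm` or any other metric key present in the per-task dicts.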
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
tyzhu/find_last_sent_train_500_eval_20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 1144818
num_examples: 904
- name: validation
num_bytes: 20896
num_examples: 20
download_size: 499234
dataset_size: 1165714
---
# Dataset Card for "find_last_sent_train_500_eval_20"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
veezbo/akkadian_english_corpus | ---
license: mit
task_categories:
- text-generation
language:
- en
pretty_name: English-translated Akkadian Corpus
size_categories:
- 1K<n<10K
---
# Akkadian English Corpus
This dataset is a cleaned, English-translated Akkadian-language corpus. It can be used, and has been used, for text-generation tasks, for example to fine-tune LLMs.
## How it was generated
Please visit my [repo](https://github.com/veezbo/akkadian_english_corpus) on Github which explains the steps that were taken to prepare this dataset for a text generation task.
At a high level, these are the steps that were taken:
- Sourced a high-quality dataset of English-translated Akkadian by experts
- Enforced a minimum line length
- Removed duplicate lines
- Removed textual notes and other generic notes within parentheses
- Inserted translation notes and literal notes in place (preserving grammar and adding clarity to the corpus)
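A rough sketch of the first four cleaning steps (length filter, duplicate removal, parenthesized-note stripping) is shown below. This is an illustration, not the actual code from the repo: the `min_len` threshold and the non-nested-parentheses regex are assumptions, and the in-place insertion of translation/literal notes is omitted:

```python
import re

def clean_corpus(lines, min_len=30):
    """Sketch of the cleaning steps listed above.

    min_len is an assumed threshold; the actual repo may use a
    different value. Parenthesized notes are stripped with a simple
    (non-nested) regex.
    """
    seen = set()
    cleaned = []
    for line in lines:
        # Remove generic notes within parentheses.
        line = re.sub(r"\([^)]*\)", "", line).strip()
        # Collapse whitespace left behind by the removal.
        line = re.sub(r"\s+", " ", line)
        # Enforce a minimum line length and drop duplicate lines.
        if len(line) >= min_len and line not in seen:
            seen.add(line)
            cleaned.append(line)
    return cleaned

sample = [
    "The king of Assyria (lit. 'great house') marched to the city walls.",
    "The king of Assyria (lit. 'great house') marched to the city walls.",
    "short line",
]
print(clean_corpus(sample))
```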
## Credit
Credit for the aggregation of the raw data belongs to the [Akkademia](https://github.com/gaigutherz/Akkademia/tree/master) project. Specifically, the exact data file used as the starting dataset is linked [here](https://github.com/gaigutherz/Akkademia/blob/master/NMT_input/train.en); it was also used to train their state-of-the-art Akkadian-to-English neural machine translation model, as described in their recent [paper](https://academic.oup.com/pnasnexus/article/2/5/pgad096/7147349) (Gutherz et al. 2023 [1]).
Credit for the original source of the raw data belongs to the incredible Open Richly Annotated Cuneiform Corpus ([ORACC](http://oracc.org)) project [2]. Specifically, as noted by the Akkademia project above, the RINAP 1, 3, 4, and 5 datasets are the source of the original raw data.
## Citations
[1] Gai Gutherz, Shai Gordin, Luis Sáenz, Omer Levy, Jonathan Berant, Translating Akkadian to English with neural machine translation, PNAS Nexus, Volume 2, Issue 5, May 2023, pgad096, https://doi.org/10.1093/pnasnexus/pgad096
[2] Jamie Novotny, Eleanor Robson, Steve Tinney, Niek Veldhuis, et al. Open Richly Annotated Cuneiform Corpus, http://oracc.org |
Penguin-N/github-issues | ---
dataset_info:
features:
- name: id
dtype: int64
- name: labels_url
dtype: string
- name: body
dtype: string
- name: updated_at
dtype: string
- name: number
dtype: int64
- name: milestone
struct:
- name: closed_at
dtype: 'null'
- name: closed_issues
dtype: int64
- name: created_at
dtype: string
- name: creator
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: description
dtype: string
- name: due_on
dtype: 'null'
- name: html_url
dtype: string
- name: id
dtype: int64
- name: labels_url
dtype: string
- name: node_id
dtype: string
- name: number
dtype: int64
- name: open_issues
dtype: int64
- name: state
dtype: string
- name: title
dtype: string
- name: updated_at
dtype: string
- name: url
dtype: string
- name: repository_url
dtype: string
- name: draft
dtype: bool
- name: labels
list:
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: id
dtype: int64
- name: name
dtype: string
- name: node_id
dtype: string
- name: url
dtype: string
- name: created_at
dtype: string
- name: comments_url
dtype: string
- name: assignee
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: timeline_url
dtype: string
- name: title
dtype: string
- name: events_url
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: user
struct:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: assignees
list:
- name: avatar_url
dtype: string
- name: events_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: gravatar_id
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: login
dtype: string
- name: node_id
dtype: string
- name: organizations_url
dtype: string
- name: received_events_url
dtype: string
- name: repos_url
dtype: string
- name: site_admin
dtype: bool
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: type
dtype: string
- name: url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: author_association
dtype: string
- name: closed_at
dtype: string
- name: pull_request
struct:
- name: diff_url
dtype: string
- name: html_url
dtype: string
- name: merged_at
dtype: string
- name: patch_url
dtype: string
- name: url
dtype: string
- name: node_id
dtype: string
- name: comments
sequence: string
- name: reactions
struct:
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: confused
dtype: int64
- name: eyes
dtype: int64
- name: heart
dtype: int64
- name: hooray
dtype: int64
- name: laugh
dtype: int64
- name: rocket
dtype: int64
- name: total_count
dtype: int64
- name: url
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: url
dtype: string
- name: html_url
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 23420904
num_examples: 3000
download_size: 6966686
dataset_size: 23420904
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Nexdata/47811_Sentences_Intention_Annotation_Data_in_Interactive_Scenes | ---
license: cc-by-nc-nd-4.0
---
## Description
This dataset contains 47,811 single sentences annotated with intent classes, including slot and slot-value information. The intent domains include music, weather, date, schedule, home equipment, etc. It is intended for intent-recognition research and related fields.
For more details, please refer to the link: https://www.nexdata.ai/dataset/1085?source=Huggingface
# Specifications
## Storage format
Json
## Data content
47,811 sentences across 16 domains; each sentence is annotated with its intent
## Annotation
Sentences were manually written for each intent type, and then the slots were annotated
## Language
Chinese
## Accuracy Rate
99%
## Application scenario
Intent understanding in speech interaction
# Licensing Information
Commercial License
|
juancopi81/test-audio | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: playlist_title
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: description
dtype: string
splits:
- name: train
num_bytes: 1052419.0
num_examples: 3
download_size: 0
dataset_size: 1052419.0
---
# Dataset Card for "test-audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liuyanchen1015/MULTI_VALUE_sst2_drop_aux_wh | ---
dataset_info:
features:
- name: sentence
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 393
num_examples: 4
- name: test
num_bytes: 600
num_examples: 6
- name: train
num_bytes: 7762
num_examples: 62
download_size: 10707
dataset_size: 8755
---
# Dataset Card for "MULTI_VALUE_sst2_drop_aux_wh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/sakakibara_satomi_idolmastercinderellagirls | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of sakakibara_satomi/榊原里美 (THE iDOLM@STER: Cinderella Girls)
This is the dataset of sakakibara_satomi/榊原里美 (THE iDOLM@STER: Cinderella Girls), containing 71 images and their tags.
The core tags of this character are `grey_hair, breasts, long_hair, large_breasts, purple_eyes, drill_hair, braid, twintails`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 71 | 46.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakakibara_satomi_idolmastercinderellagirls/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 71 | 38.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakakibara_satomi_idolmastercinderellagirls/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 134 | 67.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakakibara_satomi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 71 | 44.32 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakakibara_satomi_idolmastercinderellagirls/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 134 | 76.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakakibara_satomi_idolmastercinderellagirls/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sakakibara_satomi_idolmastercinderellagirls',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, solo, cleavage, open_mouth, necklace, hairband, looking_at_viewer, :d, blush, microphone |
| 1 | 10 |  |  |  |  |  | 1girl, solo, dress, necklace, blush, simple_background, twin_braids, looking_at_viewer, open_mouth, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | cleavage | open_mouth | necklace | hairband | looking_at_viewer | :d | blush | microphone | dress | simple_background | twin_braids | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-----------|:-------------|:-----------|:-----------|:--------------------|:-----|:--------|:-------------|:--------|:--------------------|:--------------|:-------------------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | | | | |
| 1 | 10 |  |  |  |  |  | X | X | | X | X | | X | | X | | X | X | X | X |
|
manu/french_librispeech_text_only | ---
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 62120933
num_examples: 258213
download_size: 37959942
dataset_size: 62120933
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "french_librispeech_text_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_qa_context_v5_full_recite_ans_sent_random_permute_rerun_4 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4850217.0
num_examples: 2385
- name: validation
num_bytes: 631113
num_examples: 300
download_size: 1204825
dataset_size: 5481330.0
---
# Dataset Card for "squad_qa_context_v5_full_recite_ans_sent_random_permute_rerun_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DopeorNope/new_instruct6 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: tag
dtype: string
splits:
- name: train
num_bytes: 525389527
num_examples: 127811
download_size: 259744648
dataset_size: 525389527
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Elfsong/Mercury | ---
dataset_info:
features:
- name: slug_name
dtype: string
- name: meta_info
struct:
- name: data
struct:
- name: question
struct:
- name: categoryTitle
dtype: string
- name: content
dtype: string
- name: difficulty
dtype: string
- name: questionFrontendId
dtype: string
- name: questionId
dtype: string
- name: questionTitle
dtype: string
- name: questionTitleSlug
dtype: string
- name: similarQuestions
dtype: string
- name: stats
dtype: string
- name: topicTags
list:
- name: name
dtype: string
- name: slug
dtype: string
- name: id
dtype: string
- name: difficulty
dtype: string
- name: pretty_content
sequence: string
- name: solutions
list:
- name: hash
dtype: int64
- name: runtime
dtype: string
- name: solution
dtype: string
- name: prompt
dtype: string
- name: generator_code
dtype: string
- name: convert_online
dtype: string
- name: convert_offline
dtype: string
- name: evaluate_offline
dtype: string
- name: entry_point
dtype: string
- name: test_cases
dtype: string
splits:
- name: train
num_bytes: 24879611
num_examples: 1633
- name: eval
num_bytes: 7028101
num_examples: 256
download_size: 10526574
dataset_size: 31907712
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
language:
- en
size_categories:
- 1K<n<10K
---
# Welcome to Mercury 🪐!
## It is a code efficiency benchmark.
## Please consider citing our paper: https://arxiv.org/abs/2402.07844 |
dchatca/vn-economic-articles-summary-remove-1 | ---
dataset_info:
features:
- name: Title
dtype: string
- name: Content
dtype: string
- name: Sum-Content
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 12177542.253918495
num_examples: 1148
- name: test
num_bytes: 1357774.7460815047
num_examples: 128
download_size: 6349365
dataset_size: 13535317.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
kamilakesbi/cv_for_spd_fr_augmented | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: speakers
sequence: string
- name: timestamps_start
sequence: float64
- name: timestamps_end
sequence: float64
splits:
- name: train
num_bytes: 174115378.0
num_examples: 120
- name: validation
num_bytes: 39951832.0
num_examples: 24
- name: test
num_bytes: 40056748.0
num_examples: 24
download_size: 236615435
dataset_size: 254123958.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ademax/ocr_scan_vi | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 628707658.0172987
num_examples: 10000
- name: test
num_bytes: 32755668.982701264
num_examples: 521
download_size: 660712028
dataset_size: 661463327.0
---
# Dataset Card for "ocr_scan_vi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
CyberHarem/rudewell_emilia_maougakuinnofutekigousha | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of Rudewell Emilia/エミリア・ルードウェル (Maou Gakuin no Futekigousha)
This is the dataset of Rudewell Emilia/エミリア・ルードウェル (Maou Gakuin no Futekigousha), containing 36 images and their tags.
The core tags of this character are `long_hair, purple_hair, purple_eyes, hair_between_eyes, ponytail, pink_eyes, asymmetrical_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 36 | 25.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rudewell_emilia_maougakuinnofutekigousha/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 36 | 25.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rudewell_emilia_maougakuinnofutekigousha/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 81 | 48.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/rudewell_emilia_maougakuinnofutekigousha/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/rudewell_emilia_maougakuinnofutekigousha',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------|
| 0 | 5 |  |  |  |  |  | 1girl, closed_mouth, solo, frown, looking_at_viewer, necklace, asymmetrical_bangs, portrait, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | closed_mouth | solo | frown | looking_at_viewer | necklace | asymmetrical_bangs | portrait | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------|:--------|:--------------------|:-----------|:---------------------|:-----------|:-------------|
| 0 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X |
|
LexiconShiftInnovations/sinhala_facebook_stories | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 15882438
num_examples: 776
download_size: 6524881
dataset_size: 15882438
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/56d4a1b5 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 184
num_examples: 10
download_size: 1335
dataset_size: 184
---
# Dataset Card for "56d4a1b5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zeroman1318/daegu-ai-06 | ---
configs:
- config_name: default
data_files:
- split: train
path: data.csv
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
tyzhu/squad_qa_context_v5_full_recite_ans_sent_random_permute_rerun_8 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: answer
dtype: string
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 4850217.0
num_examples: 2385
- name: validation
num_bytes: 631113
num_examples: 300
download_size: 1204825
dataset_size: 5481330.0
---
# Dataset Card for "squad_qa_context_v5_full_recite_ans_sent_random_permute_rerun_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chloecchng/biomedical_cpgQA | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- biology
- medical
size_categories:
- 1K<n<10K
---
# Dataset Card for the Biomedical Domain
### Dataset Summary
This dataset was obtained from GitHub (https://github.com/mmahbub/cpgQA/blob/main/dataset/cpgQA-v1.0.csv?plain=1) and uploaded to Hugging Face for easier access while fine-tuning.
### Languages
English (en)
## Dataset Structure
The dataset is in CSV format, with each row representing a single question-answer pair. The following columns are included:
* **Title:** Categorises the QA.
* **Context:** Gives a context of the QA.
* **Question:** The question asked.
* **Answer:** The expected and appropriate answer to the question asked. |
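The four columns above map naturally onto a fine-tuning prompt. The sketch below renders one row into a prompt/answer string; the template itself is an assumption for illustration, not part of the dataset, and the sample row is invented.

```python
def format_example(row: dict) -> str:
    """Render one cpgQA row as a prompt/answer pair for fine-tuning.

    Column names follow the dataset card; the prompt template is a
    hypothetical choice, not something the dataset prescribes.
    """
    return (
        f"Context: {row['Context']}\n"
        f"Question: {row['Question']}\n"
        f"Answer: {row['Answer']}"
    )


# Hypothetical row with the card's column layout.
row = {
    "Title": "Opioid tapering",
    "Context": "Clinicians should taper opioids gradually.",
    "Question": "How should clinicians taper opioids?",
    "Answer": "Gradually.",
}
print(format_example(row))
```

With the real data, the same function could be applied row by row after loading the CSV (e.g. via `datasets.load_dataset("chloecchng/biomedical_cpgQA")`, which requires network access).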
kiennguyen1703/DiroxCompany | ---
license: apache-2.0
language:
- en
--- |
magixxixx/shopee-product-reviews-on-computer-category | ---
task_categories:
- text-classification
language:
- tl
- en
tags:
- shopee
- reviews
pretty_name: Shopee reviews on computer category, 20k positives and 20k negatives
size_categories:
- 10K<n<100K
--- |
CAiRE/prosocial-dialog-eng_Latn-bloomz-rlhf | ---
dataset_info:
features:
- name: context
dtype: string
- name: response
dtype: string
- name: rots
sequence: string
- name: safety_label
dtype: string
- name: safety_annotations
sequence: string
- name: safety_annotation_reasons
sequence: string
- name: source
dtype: string
- name: etc
dtype: string
- name: dialogue_id
dtype: int64
- name: response_id
dtype: int64
- name: episode_done
dtype: bool
- name: agent_response
dtype: string
splits:
- name: train
num_bytes: 144916538
num_examples: 120236
- name: validation
num_bytes: 29794902
num_examples: 20416
- name: test
num_bytes: 36311173
num_examples: 25029
download_size: 107425838
dataset_size: 211022613
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
kaleemWaheed/twitter_dataset_1712994766 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 23267
num_examples: 51
download_size: 11096
dataset_size: 23267
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TinyPixel/lima-u | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1804087
num_examples: 780
download_size: 1044055
dataset_size: 1804087
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "lima-u"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
haoranxu/WMT22-Test | ---
dataset_info:
- config_name: cs-en
features:
- name: cs-en
struct:
- name: cs
dtype: string
- name: en
dtype: string
splits:
- name: test
num_bytes: 325040
num_examples: 1448
download_size: 224193
dataset_size: 325040
- config_name: de-en
features:
- name: de-en
struct:
- name: de
dtype: string
- name: en
dtype: string
splits:
- name: test
num_bytes: 403424
num_examples: 1984
download_size: 267107
dataset_size: 403424
- config_name: en-cs
features:
- name: en-cs
struct:
- name: cs
dtype: string
- name: en
dtype: string
splits:
- name: test
num_bytes: 422875
num_examples: 2037
download_size: 281086
dataset_size: 422875
- config_name: en-de
features:
- name: en-de
struct:
- name: de
dtype: string
- name: en
dtype: string
splits:
- name: test
num_bytes: 442576
num_examples: 2037
download_size: 280415
dataset_size: 442576
- config_name: en-is
features:
- name: en-is
struct:
- name: en
dtype: string
- name: is
dtype: string
splits:
- name: test
num_bytes: 310807
num_examples: 1000
download_size: 197437
dataset_size: 310807
- config_name: en-ru
features:
- name: en-ru
struct:
- name: en
dtype: string
- name: ru
dtype: string
splits:
- name: test
num_bytes: 598414
num_examples: 2037
download_size: 333784
dataset_size: 598414
- config_name: en-zh
features:
- name: en-zh
struct:
- name: en
dtype: string
- name: zh
dtype: string
splits:
- name: test
num_bytes: 383751
num_examples: 2037
download_size: 257805
dataset_size: 383751
- config_name: is-en
features:
- name: is-en
struct:
- name: en
dtype: string
- name: is
dtype: string
splits:
- name: test
num_bytes: 248029
num_examples: 1000
download_size: 152885
dataset_size: 248029
- config_name: ru-en
features:
- name: ru-en
struct:
- name: en
dtype: string
- name: ru
dtype: string
splits:
- name: test
num_bytes: 579656
num_examples: 2016
download_size: 340830
dataset_size: 579656
- config_name: zh-en
features:
- name: zh-en
struct:
- name: en
dtype: string
- name: zh
dtype: string
splits:
- name: test
num_bytes: 526074
num_examples: 1875
download_size: 333078
dataset_size: 526074
configs:
- config_name: cs-en
data_files:
- split: test
path: cs-en/test-*
- config_name: de-en
data_files:
- split: test
path: de-en/test-*
- config_name: en-cs
data_files:
- split: test
path: en-cs/test-*
- config_name: en-de
data_files:
- split: test
path: en-de/test-*
- config_name: en-is
data_files:
- split: test
path: en-is/test-*
- config_name: en-ru
data_files:
- split: test
path: en-ru/test-*
- config_name: en-zh
data_files:
- split: test
path: en-zh/test-*
- config_name: is-en
data_files:
- split: test
path: is-en/test-*
- config_name: ru-en
data_files:
- split: test
path: ru-en/test-*
- config_name: zh-en
data_files:
- split: test
path: zh-en/test-*
---
# Dataset Card for "WMT22-Test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
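The metadata above defines one config per translation direction, named `"{src}-{tgt}"`. A small helper can validate a language pair against the ten listed configs before loading; the helper itself is a hypothetical convenience, and the commented `load_dataset` call is a hedged sketch since it requires network access.

```python
# The ten config names listed in the YAML metadata above.
WMT22_PAIRS = {
    "cs-en", "de-en", "en-cs", "en-de", "en-is",
    "en-ru", "en-zh", "is-en", "ru-en", "zh-en",
}


def wmt22_config(src: str, tgt: str) -> str:
    """Build and validate a WMT22-Test config name such as 'zh-en'."""
    name = f"{src}-{tgt}"
    if name not in WMT22_PAIRS:
        raise ValueError(f"no WMT22-Test config named {name!r}")
    return name


# Loading the test split would then look like (requires network):
# from datasets import load_dataset
# ds = load_dataset("haoranxu/WMT22-Test", wmt22_config("zh", "en"), split="test")
print(wmt22_config("zh", "en"))  # → zh-en
```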
mb7419/legal-advice-reddit | ---
dataset_info:
features:
- name: ques_title
dtype: string
- name: ques_text
dtype: string
- name: ques_created
dtype: string
- name: ques_score
dtype: int64
- name: ans_text
dtype: string
- name: ans_created
dtype: string
- name: ans_score
dtype: float64
- name: dominant_topic_name
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 228451850
num_examples: 115359
- name: validation
num_bytes: 49046245
num_examples: 24720
- name: test
num_bytes: 49058903
num_examples: 24720
download_size: 197856704
dataset_size: 326556998
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
shrinath-suresh/so_5k_with_short_answer_cleaned | ---
license: apache-2.0
---
|
Rami/Diabetic_Retinopathy_Preprocessed_Dataset_256x256 | ---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- image-classification
tags:
- medical
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 354568127.0
num_examples: 2750
download_size: 0
dataset_size: 354568127.0
---
This dataset comes from this [Kaggle Dataset](https://www.kaggle.com/datasets/sachinkumar413/diabetic-retinopathy-dataset/)
by the user [Sachin Kumar](https://www.kaggle.com/sachinkumar413).
- The goal of the dataset is to let the Varun AIM Projects participants download it and start working with it on their local computers through the Hugging Face libraries, which is the workflow I strongly recommend. |
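Since the card's `label` column is a plain string, a first sanity check after downloading is the class balance. The helper below is a generic sketch with invented example labels; the commented `load_dataset` call (requires network) shows how the real column would be fed in.

```python
from collections import Counter


def label_distribution(labels):
    """Return the fraction of images under each string label."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}


# With the real dataset (requires network), the label column could be passed in:
# from datasets import load_dataset
# ds = load_dataset("Rami/Diabetic_Retinopathy_Preprocessed_Dataset_256x256", split="train")
# print(label_distribution(ds["label"]))

# Invented labels, only to show the output shape:
print(label_distribution(["Mild", "Severe", "Mild", "No_DR"]))
```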
DBQ/Mr.Porter.Product.prices.Hungary | ---
annotations_creators:
- other
language_creators:
- other
language:
- en
license:
- unknown
multilinguality:
- monolingual
source_datasets:
- original
task_categories:
- text-classification
- image-classification
- feature-extraction
- image-segmentation
- image-to-image
- image-to-text
- object-detection
- summarization
- zero-shot-image-classification
pretty_name: Hungary - Mr Porter - Product-level price list
tags:
- webscraping
- ecommerce
- Mr Porter
- fashion
- fashion product
- image
- fashion image
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: website_name
dtype: string
- name: competence_date
dtype: string
- name: country_code
dtype: string
- name: currency_code
dtype: string
- name: brand
dtype: string
- name: category1_code
dtype: string
- name: category2_code
dtype: string
- name: category3_code
dtype: string
- name: product_code
dtype: int64
- name: title
dtype: string
- name: itemurl
dtype: string
- name: imageurl
dtype: string
- name: full_price
dtype: float64
- name: price
dtype: float64
- name: full_price_eur
dtype: float64
- name: price_eur
dtype: float64
- name: flg_discount
dtype: int64
splits:
- name: train
num_bytes: 9097423
num_examples: 27686
download_size: 2070861
dataset_size: 9097423
---
# Mr Porter web scraped data
## About the website
Mr Porter operates within the **online luxury fashion retail industry** in the **EMEA** region, particularly focusing on the arena of men's fashion in **Hungary**. This industry has been seeing notable growth owing to increasing internet penetration and rising consumer interest in high-end fashion. **Ecommerce** has now become a significant component of the luxury fashion business model. The dataset studied particularly consists of **product-list page (PLP) data** for **Mr Porter in Hungary**, which is a valuable source of insight into consumer preferences for luxury men's fashion and the overall performance of the industry in this market.
## Link to **dataset**
[Hungary - Mr Porter - Product-level price list dataset](https://www.databoutique.com/buy-data-page/Mr%20Porter%20Product-prices%20Hungary/r/recevRWtytAuVgwUz)
|