| datasetId | card |
|---|---|
moooji/controlnet_test4 | ---
dataset_info:
features:
- name: conditioning_image
dtype: image
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 450892980.0
num_examples: 10000
download_size: 0
dataset_size: 450892980.0
---
# Dataset Card for "controlnet_test4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RikoteMaster/dataset_dair_ai_4_llama2_v2 | ---
dataset_info:
features:
- name: Text_processed
dtype: string
- name: Emotion
dtype: string
- name: text
dtype: string
- name: Augmented
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 7397641
num_examples: 17640
download_size: 2690781
dataset_size: 7397641
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dataset_dair_ai_4_llama2_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
fashxp/cars-manufacturers | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': ac cars
'1': alfa romeo
'2': amc
'3': aston martin
'4': audi
'5': austin healey
'6': bmw
'7': buick
'8': cadillac
'9': chevrolet
'10': chrysler
'11': citroen
'12': datsun
'13': dodge
'14': ferrari
'15': fiat
'16': ford
'17': jaguar
'18': lamborghini
'19': lincoln
'20': mazda
'21': mercedes
'22': mercury
'23': mga
'24': morris
'25': oldsmobile
'26': opel
'27': peugeot
'28': plymouth
'29': pontiac
'30': porsche
'31': renault
'32': saab
'33': toyota
'34': trabant
'35': triumph
'36': volkswagen
'37': volvo
splits:
- name: train
num_bytes: 246560655.3800738
num_examples: 433
- name: test
num_bytes: 91605127.6199262
num_examples: 109
download_size: 337714704
dataset_size: 338165783.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
jp1924/MeetingSpeech | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: id
dtype: string
- name: sentence
dtype: string
- name: original_form
dtype: string
- name: start
dtype: float32
- name: end
dtype: float32
- name: term
dtype: string
- name: environment
dtype: string
- name: isIdiom
dtype: bool
- name: hangeulToEnglish
list:
- name: id
dtype: int16
- name: hangeul
dtype: string
- name: english
dtype: string
- name: begin
dtype: int16
- name: end
dtype: int16
- name: hangeulToNumber
list:
- name: id
dtype: int16
- name: hangeul
dtype: string
- name: number
dtype: string
- name: begin
dtype: int16
- name: end
dtype: int16
- name: speaker
struct:
- name: id
dtype: string
- name: name
dtype: string
- name: age
dtype: string
- name: occupation
dtype: string
- name: role
dtype: string
- name: sex
dtype: string
- name: metadata
struct:
- name: title
dtype: string
- name: creator
dtype: string
- name: distributor
dtype: string
- name: year
dtype: int16
- name: category
dtype: string
- name: sampling
dtype: string
- name: date
dtype: string
- name: topic
dtype: string
- name: media
dtype: string
- name: communication
dtype: string
- name: type
dtype: string
- name: domain
dtype: string
- name: speaker_num
dtype: int16
- name: organization
dtype: string
- name: annotation_level
dtype: string
splits:
- name: train
num_bytes: 649259099466.0
num_examples: 3446200
- name: validation
num_bytes: 75950798309.0
num_examples: 374680
download_size: 715527121692
dataset_size: 725209897775.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
icelab/ntrs_meta | ---
language:
- en
multilinguality:
- monolingual
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- other
pretty_name: NTRS
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for NASA technical report server metadata
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
**Homepage: https://ntrs.nasa.gov/**
### Dataset Summary
The NTRS collects scientific and technical information funded or created by NASA and provides not only metadata but also access to abstracts and full texts.
The dataset contains all abstracts, titles, and associated metadata indexed on the NTRS.
The most recent bulk download can be acquired directly from the NTRS at:
https://sti.nasa.gov/harvesting-data-from-ntrs/
This repository does not claim any ownership of the provided data; it is only intended to provide an easily accessible gateway to the data through the Hugging Face API.
The original author and source should always be credited.
## Dataset Structure
### Data Instances
The dataset contains over 508,000 objects (abstracts) and associated metadata from NASA-funded projects, spanning the period from 1917 to the date of the last bulk download (18.06.2022).
It is therefore a rich data source for language modeling in the domain of spacecraft design and space science.
### Data Fields
```json
{
  "copyright": {"licenseType": "NO", "determinationType": "GOV_PUBLIC_USE_PERMITTED", "thirdPartyContentCondition": "NOT_SET", ...},
  "subjectCategories": ["Space Transportation and Safety"],
  "exportControl": {"isExportControl": "NO", "ear": "NO", "itar": "NO", ...},
  "created": "2022-01-28T15:19:38.8948330+00:00",
  "distributionDate": "2019-07-12T00:00:00.0000000+00:00",
  "otherReportNumbers": ["NACA-AR-1"],
  "center": {"code": "CDMS", "name": "Legacy CDMS", "id": "092d6e0881874968859b972d39a888dc"},
  "onlyAbstract": false,
  "sensitiveInformation": 2,
  "abstract": "Report includes the National Advisory Committe...",
  "title": "Annual Report of the National Advisory Committ...",
  "stiType": "CONTRACTOR_OR_GRANTEE_REPORT",
  "distribution": "PUBLIC",
  "submittedDate": "2013-09-06T18:26:00.0000000+00:00",
  "isLessonsLearned": 0.0,
  "disseminated": "DOCUMENT_AND_METADATA",
  "stiTypeDetails": "Contractor or Grantee Report",
  "technicalReviewType": "TECHNICAL_REVIEW_TYPE_NONE",
  "modified": "2013-08-29 00:00:00.000000",
  "id": 19930091025,
  "publications": [{"submissionId": 19930091025, "publicationDate": "1916-01-01T00:00:00.0000000+00:00", "issn": "https://doi.org/10.1109/BigData52589.2021.9671853", ...}, ...],
  "status": "CURATED",
  "authorAffiliations": [{"sequence": 0, "meta": {"author": {"name": "Author_name_1", "orcidId": "ID"}, "organization": {"name": "NASA", ...}}, "id": "ID"}, {"sequence": 1, ...}],
  "keywords": ["Web scraping", "data mining", "epidemiology"],
  "meetings": [{"country": "US", "name": "2021 IEEE", ...}, ...],
  "fundingNumbers": [{"number": "920121", "type": "CONTRACT_GRANT"}, ...],
  "redactedDate": "2022-04-20T14:36:15.0925240",
  "sourceIdentifiers": []
}
```
## Dataset Creation
### Curation Rationale
The last bulk download was performed on 18.06.2022. Abstracts that occurred multiple times were removed from the dataset.
## Considerations for Using the Data
The fields most likely to be of interest are:
`"abstract"`, `"subjectCategories"`, `"keywords"`, `"center"`
## Additional Information
### Licensing Information
"Generally, United States government works (works prepared by officers and employees of the U.S. Government as part of their official duties) are not protected by copyright in the U.S. (17 U.S.C. §105) and may be used without obtaining permission from NASA. However, U.S. government works may contain privately created, copyrighted works (e.g., quote, photograph, chart, drawing, etc.) used under license or with permission of the copyright owner. Incorporation in a U.S. government work does not place the private work in the public domain.
Moreover, not all materials on or available through download from this Web site are U.S. government works. Some materials available from this Web site may be protected by copyrights owned by private individuals or organizations and may be subject to restrictions on use. For example, contractors and grantees are not considered Government employees; generally, they hold copyright to works they produce for the Government. Other materials may be the result of joint authorship due to collaboration between a Government employee and a private individual wherein the private individual will hold a copyright to the work jointly with U.S. Government. The Government is granted a worldwide license to use, modify, reproduce, release, perform, display, or disclose these works by or on behalf of the Government.
While NASA may publicly release copyrighted works in which it has government purpose licenses or specific permission to release, such licenses or permission do not necessarily transfer to others. Thus, such works are still protected by copyright, and recipients of the works must comply with the copyright law (Title 17 United States Code). Such copyrighted works may not be modified, reproduced, or redistributed without permission of the copyright owner."
Taken from https://sti.nasa.gov/disclaimers/, please visit for more information.
### Contributions
For any inquiries about this dataset, please contact [@pauldrm](https://github.com/pauldrm)
|
CVasNLPExperiments/DTD_parition1_test_google_flan_t5_xxl_mode_T_SPECIFIC_A_ns_30 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: true_label
dtype: string
- name: prediction
dtype: string
splits:
- name: fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices
num_bytes: 14213
num_examples: 30
download_size: 7529
dataset_size: 14213
configs:
- config_name: default
data_files:
- split: fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices
path: data/fewshot_0__Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full_clip_tags_LAION_ViT_H_14_2B_simple_specific_rices-*
---
|
lokesh2002/construction_sample_dataset2 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4214015.0
num_examples: 10
download_size: 4162284
dataset_size: 4214015.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "construction_sample_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AyoubChLin/Product_Matching | ---
license: apache-2.0
---
|
maximdolphin/multi-request-identifier | ---
license: mit
---
|
Asad321/MKBHD-jsza7d5q-scraped-data-Final-Evaluation-Demo | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 870
num_examples: 2
download_size: 4462
dataset_size: 870
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "MKBHD-jsza7d5q-scraped-data-Final-Evaluation-Demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down | ---
pretty_name: Evaluation run of huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down](https://huggingface.co/huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-14T17:36:45.221009](https://huggingface.co/datasets/open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down/blob/main/results_2024-01-14T17-36-45.221009.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6355178040599482,\n\
\ \"acc_stderr\": 0.03241610229663876,\n \"acc_norm\": 0.641571442422577,\n\
\ \"acc_norm_stderr\": 0.033065020971592085,\n \"mc1\": 0.3047735618115055,\n\
\ \"mc1_stderr\": 0.016114124156882452,\n \"mc2\": 0.45435317672164416,\n\
\ \"mc2_stderr\": 0.014528686611193308\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.5665529010238908,\n \"acc_stderr\": 0.014481376224558902,\n\
\ \"acc_norm\": 0.6126279863481229,\n \"acc_norm_stderr\": 0.014235872487909872\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6271659032065325,\n\
\ \"acc_stderr\": 0.004825702533920412,\n \"acc_norm\": 0.8319059948217487,\n\
\ \"acc_norm_stderr\": 0.0037318549570309373\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \
\ \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n\
\ \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n\
\ \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n\
\ \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\"\
: 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"\
acc\": 0.6792452830188679,\n \"acc_stderr\": 0.02872750295788027,\n \
\ \"acc_norm\": 0.6792452830188679,\n \"acc_norm_stderr\": 0.02872750295788027\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7013888888888888,\n\
\ \"acc_stderr\": 0.03827052357950756,\n \"acc_norm\": 0.7013888888888888,\n\
\ \"acc_norm_stderr\": 0.03827052357950756\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\"\
: 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\"\
: {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \
\ \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n \
\ },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.41,\n\
\ \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \
\ \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_medicine|5\"\
: {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n\
\ \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n\
\ },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n\
\ \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n\
\ \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\"\
: {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \
\ \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n\
\ \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.574468085106383,\n\
\ \"acc_stderr\": 0.03232146916224468,\n \"acc_norm\": 0.574468085106383,\n\
\ \"acc_norm_stderr\": 0.03232146916224468\n },\n \"harness|hendrycksTest-econometrics|5\"\
: {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.04692008381368909,\n\
\ \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.04692008381368909\n\
\ },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\"\
: 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"\
acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.43386243386243384,\n \"acc_stderr\": 0.025525034382474884,\n \"\
acc_norm\": 0.43386243386243384,\n \"acc_norm_stderr\": 0.025525034382474884\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \
\ \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\"\
: 0.7451612903225806,\n \"acc_stderr\": 0.024790118459332208,\n \"\
acc_norm\": 0.7451612903225806,\n \"acc_norm_stderr\": 0.024790118459332208\n\
\ },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\"\
: 0.5221674876847291,\n \"acc_stderr\": 0.035145285621750094,\n \"\
acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.035145285621750094\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.65,\n \"acc_stderr\": 0.04793724854411019,\n \"acc_norm\"\
: 0.65,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n\
\ \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.7676767676767676,\n \"acc_stderr\": 0.03008862949021749,\n \"\
acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.03008862949021749\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.8601036269430051,\n \"acc_stderr\": 0.025033870583015184,\n\
\ \"acc_norm\": 0.8601036269430051,\n \"acc_norm_stderr\": 0.025033870583015184\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6282051282051282,\n \"acc_stderr\": 0.024503472557110936,\n\
\ \"acc_norm\": 0.6282051282051282,\n \"acc_norm_stderr\": 0.024503472557110936\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465076,\n \
\ \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465076\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.030778057422931673,\n\
\ \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.030778057422931673\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"\
acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8348623853211009,\n \"acc_stderr\": 0.015919557829976044,\n \"\
acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.015919557829976044\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"\
acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.7843137254901961,\n \"acc_stderr\": 0.02886743144984932,\n \"\
acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.02886743144984932\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601453,\n \
\ \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601453\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n\
\ \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n\
\ \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n\
\ \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"\
acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n\
\ \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n\
\ \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n\
\ \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n\
\ \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n\
\ \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.036756688322331886,\n\
\ \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.036756688322331886\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n\
\ \"acc_stderr\": 0.023086635086841407,\n \"acc_norm\": 0.8547008547008547,\n\
\ \"acc_norm_stderr\": 0.023086635086841407\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \
\ \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8186462324393359,\n\
\ \"acc_stderr\": 0.013778693778464085,\n \"acc_norm\": 0.8186462324393359,\n\
\ \"acc_norm_stderr\": 0.013778693778464085\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7341040462427746,\n \"acc_stderr\": 0.02378620325550829,\n\
\ \"acc_norm\": 0.7341040462427746,\n \"acc_norm_stderr\": 0.02378620325550829\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38212290502793295,\n\
\ \"acc_stderr\": 0.016251139711570762,\n \"acc_norm\": 0.38212290502793295,\n\
\ \"acc_norm_stderr\": 0.016251139711570762\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.761437908496732,\n \"acc_stderr\": 0.024404394928087873,\n\
\ \"acc_norm\": 0.761437908496732,\n \"acc_norm_stderr\": 0.024404394928087873\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n\
\ \"acc_stderr\": 0.025583062489984813,\n \"acc_norm\": 0.7170418006430869,\n\
\ \"acc_norm_stderr\": 0.025583062489984813\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7438271604938271,\n \"acc_stderr\": 0.024288533637726095,\n\
\ \"acc_norm\": 0.7438271604938271,\n \"acc_norm_stderr\": 0.024288533637726095\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \
\ \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46870925684485004,\n\
\ \"acc_stderr\": 0.012745204626083143,\n \"acc_norm\": 0.46870925684485004,\n\
\ \"acc_norm_stderr\": 0.012745204626083143\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.02833295951403121,\n\
\ \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.02833295951403121\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6633986928104575,\n \"acc_stderr\": 0.019117213911495155,\n \
\ \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.019117213911495155\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n\
\ \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n\
\ \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.689795918367347,\n \"acc_stderr\": 0.029613459872484378,\n\
\ \"acc_norm\": 0.689795918367347,\n \"acc_norm_stderr\": 0.029613459872484378\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n\
\ \"acc_stderr\": 0.026193923544454132,\n \"acc_norm\": 0.835820895522388,\n\
\ \"acc_norm_stderr\": 0.026193923544454132\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710905,\n \
\ \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710905\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n\
\ \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n\
\ \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.02954774168764004,\n\
\ \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.02954774168764004\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3047735618115055,\n\
\ \"mc1_stderr\": 0.016114124156882452,\n \"mc2\": 0.45435317672164416,\n\
\ \"mc2_stderr\": 0.014528686611193308\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.011764149054698332\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3912054586808188,\n \
\ \"acc_stderr\": 0.0134425024027943\n }\n}\n```"
repo_url: https://huggingface.co/huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|arc:challenge|25_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|gsm8k|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hellaswag|10_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-14T17-36-45.221009.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-14T17-36-45.221009.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- '**/details_harness|winogrande|5_2024-01-14T17-36-45.221009.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-14T17-36-45.221009.parquet'
- config_name: results
data_files:
- split: 2024_01_14T17_36_45.221009
path:
- results_2024-01-14T17-36-45.221009.parquet
- split: latest
path:
- results_2024-01-14T17-36-45.221009.parquet
---
# Dataset Card for Evaluation run of huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down](https://huggingface.co/huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down",
"harness_winogrande_5",
split="train")
```
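The per-task configurations listed above follow a regular naming pattern (`harness_hendrycksTest_<subject>_5` for the 5-shot MMLU subjects), so the config name for any subject can be built programmatically. The helper below is a small sketch, not part of the `datasets` library:

```python
def mmlu_config(subject: str, n_shot: int = 5) -> str:
    """Build a config name mirroring the 'harness_hendrycksTest_<subject>_<n_shot>'
    naming used by this repository's per-task configurations."""
    return f"harness_hendrycksTest_{subject}_{n_shot}"


# The result can be passed as the config argument of load_dataset above,
# e.g. mmlu_config("anatomy") with split="latest" for the most recent run.
```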
## Latest results
These are the [latest results from run 2024-01-14T17:36:45.221009](https://huggingface.co/datasets/open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down/blob/main/results_2024-01-14T17-36-45.221009.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6355178040599482,
"acc_stderr": 0.03241610229663876,
"acc_norm": 0.641571442422577,
"acc_norm_stderr": 0.033065020971592085,
"mc1": 0.3047735618115055,
"mc1_stderr": 0.016114124156882452,
"mc2": 0.45435317672164416,
"mc2_stderr": 0.014528686611193308
},
"harness|arc:challenge|25": {
"acc": 0.5665529010238908,
"acc_stderr": 0.014481376224558902,
"acc_norm": 0.6126279863481229,
"acc_norm_stderr": 0.014235872487909872
},
"harness|hellaswag|10": {
"acc": 0.6271659032065325,
"acc_stderr": 0.004825702533920412,
"acc_norm": 0.8319059948217487,
"acc_norm_stderr": 0.0037318549570309373
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.02872750295788027,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.02872750295788027
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7013888888888888,
"acc_stderr": 0.03827052357950756,
"acc_norm": 0.7013888888888888,
"acc_norm_stderr": 0.03827052357950756
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.574468085106383,
"acc_stderr": 0.03232146916224468,
"acc_norm": 0.574468085106383,
"acc_norm_stderr": 0.03232146916224468
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.04692008381368909,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.04692008381368909
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.43386243386243384,
"acc_stderr": 0.025525034382474884,
"acc_norm": 0.43386243386243384,
"acc_norm_stderr": 0.025525034382474884
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7451612903225806,
"acc_stderr": 0.024790118459332208,
"acc_norm": 0.7451612903225806,
"acc_norm_stderr": 0.024790118459332208
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.035145285621750094,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.035145285621750094
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.04793724854411019,
"acc_norm": 0.65,
"acc_norm_stderr": 0.04793724854411019
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.03008862949021749,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.03008862949021749
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8601036269430051,
"acc_stderr": 0.025033870583015184,
"acc_norm": 0.8601036269430051,
"acc_norm_stderr": 0.025033870583015184
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6282051282051282,
"acc_stderr": 0.024503472557110936,
"acc_norm": 0.6282051282051282,
"acc_norm_stderr": 0.024503472557110936
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465076,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465076
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6596638655462185,
"acc_stderr": 0.030778057422931673,
"acc_norm": 0.6596638655462185,
"acc_norm_stderr": 0.030778057422931673
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8348623853211009,
"acc_stderr": 0.015919557829976044,
"acc_norm": 0.8348623853211009,
"acc_norm_stderr": 0.015919557829976044
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.02886743144984932,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.02886743144984932
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601453,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601453
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.023086635086841407,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.023086635086841407
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8186462324393359,
"acc_stderr": 0.013778693778464085,
"acc_norm": 0.8186462324393359,
"acc_norm_stderr": 0.013778693778464085
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7341040462427746,
"acc_stderr": 0.02378620325550829,
"acc_norm": 0.7341040462427746,
"acc_norm_stderr": 0.02378620325550829
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.38212290502793295,
"acc_stderr": 0.016251139711570762,
"acc_norm": 0.38212290502793295,
"acc_norm_stderr": 0.016251139711570762
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.761437908496732,
"acc_stderr": 0.024404394928087873,
"acc_norm": 0.761437908496732,
"acc_norm_stderr": 0.024404394928087873
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.025583062489984813,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.025583062489984813
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7438271604938271,
"acc_stderr": 0.024288533637726095,
"acc_norm": 0.7438271604938271,
"acc_norm_stderr": 0.024288533637726095
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422473,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422473
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46870925684485004,
"acc_stderr": 0.012745204626083143,
"acc_norm": 0.46870925684485004,
"acc_norm_stderr": 0.012745204626083143
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.019117213911495155,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.019117213911495155
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.689795918367347,
"acc_stderr": 0.029613459872484378,
"acc_norm": 0.689795918367347,
"acc_norm_stderr": 0.029613459872484378
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454132,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454132
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710905,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710905
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.02954774168764004,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.02954774168764004
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3047735618115055,
"mc1_stderr": 0.016114124156882452,
"mc2": 0.45435317672164416,
"mc2_stderr": 0.014528686611193308
},
"harness|winogrande|5": {
"acc": 0.7734806629834254,
"acc_stderr": 0.011764149054698332
},
"harness|gsm8k|5": {
"acc": 0.3912054586808188,
"acc_stderr": 0.0134425024027943
}
}
```
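Each key in the results JSON above encodes the harness name, the task, and the few-shot count, separated by `|` (e.g. `harness|hendrycksTest-global_facts|5`). A small helper to split such keys — a sketch for convenience, not part of the original evaluation tooling:

```python
def parse_task_key(key: str) -> tuple[str, str, int]:
    """Split a key like 'harness|hendrycksTest-global_facts|5' into
    (harness, task, num_fewshot)."""
    harness, task, shots = key.split("|")
    return harness, task, int(shots)

print(parse_task_key("harness|hendrycksTest-global_facts|5"))
# ('harness', 'hendrycksTest-global_facts', 5)
```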
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
chau/ink_test01 | ---
license: other
---
|
scwoods/guanaco-llama2-200 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 338808
num_examples: 200
download_size: 201257
dataset_size: 338808
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "guanaco-llama2-200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
luizfsjunior/idades | ---
license: unknown
---
|
CreativeLang/ColBERT_Humor_Detection | ---
license: cc-by-2.0
---
# ColBERT_Humor
## Dataset Description
- **Paper:** [Colbert: Using bert sentence embedding for humor detection](https://arxiv.org/abs/2004.12765)
## Dataset Summary
ColBERT Humor contains 200,000 labeled short texts, equally distributed between humorous and non-humorous content. The dataset was created to overcome the limitations of prior humor detection datasets, which were characterized by inconsistencies in text length, word count, and formality, making them easy to predict with simple models without truly understanding the nuances of humor. The two sources for this dataset are the News Category dataset, featuring 200k news headlines from the Huffington Post (2012-2018), and a collection of 231,657 Reddit jokes. The texts have been rigorously preprocessed to ensure syntactic similarity, requiring models to delve into the linguistic intricacies to distinguish humor, effectively providing a more complex and substantial platform for humor detection research.
For the details of this dataset, we refer you to the original [paper](https://arxiv.org/abs/2004.12765).
Metadata in Creative Language Toolkit ([CLTK](https://github.com/liyucheng09/cltk))
- CL Type: Humor
- Task Type: detection
- Size: 200k
- Created time: 2020
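The summary above states an equal split between humorous and non-humorous texts. A minimal sketch of checking that balance on loaded records — the field names (`text`, `humor`) are assumptions for illustration, not taken from the paper:

```python
# Sketch: verify the 50/50 humorous / non-humorous balance described above.
# Field names ("text", "humor") are assumptions for illustration.
records = [
    {"text": "Why don't scientists trust atoms? They make up everything.", "humor": True},
    {"text": "The central bank raised interest rates on Thursday.", "humor": False},
]
humorous = sum(1 for r in records if r["humor"])
print(f"{humorous} humorous / {len(records) - humorous} non-humorous")
```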
### Citation Information
If you find this dataset helpful, please cite:
```
@article{annamoradnejad2020colbert,
title={Colbert: Using bert sentence embedding for humor detection},
author={Annamoradnejad, Issa and Zoghi, Gohar},
journal={arXiv preprint arXiv:2004.12765},
year={2020}
}
```
### Contributions
If you have any queries, please open an issue or send an [email](mailto:yucheng.li@surrey.ac.uk). |
causal-lm/instructions | ---
language: en
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 24084342913.39447
num_examples: 19176870
- name: validation
num_bytes: 2830664216.3492484
num_examples: 2317180
download_size: 14194738316
dataset_size: 26915007129.743717
license: apache-2.0
task_categories:
- text-generation
size_categories:
- 10M<n<100M
---
# Merged Instructions Dataset
A merged dataset of instruction-following examples, each consisting of an instruction, an optional input, and an output response. |
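The metadata above lists three fields per example: `instruction`, `input`, and `output`. One common way to join them into a single training string — the template below is a sketch of one widely used convention, not necessarily the one used to build this dataset:

```python
def format_example(instruction: str, inp: str, output: str) -> str:
    """Join the card's three fields into one prompt string.
    The section headers are one common convention, used here for illustration."""
    if inp:
        return (f"### Instruction:\n{instruction}\n\n"
                f"### Input:\n{inp}\n\n"
                f"### Response:\n{output}")
    # Examples without an input simply omit the Input section.
    return f"### Instruction:\n{instruction}\n\n### Response:\n{output}"

print(format_example("Translate to French.", "Hello", "Bonjour"))
```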
Straive-Kripa/sp500_sga | ---
license: apache-2.0
---
https://www.kaggle.com/datasets/pierrelouisdanieau/financial-data-sp500-companies |
bgspaditya/mal-minpro-3 | ---
license: mit
dataset_info:
features:
- name: url
dtype: string
- name: type
dtype: string
- name: type_code
dtype: int64
splits:
- name: train
num_bytes: 39365781.85384319
num_examples: 473593
- name: val
num_bytes: 4920795.46308226
num_examples: 59200
- name: test
num_bytes: 9841424.68307455
num_examples: 118398
download_size: 32733388
dataset_size: 54128002.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
nicholasbien/custom-txt-tokenized-gpt2 | ---
dataset_info:
features:
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 13811767
num_examples: 2042
- name: test
num_bytes: 3399576
num_examples: 511
download_size: 2514434
dataset_size: 17211343
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
open-llm-leaderboard/details_01-ai__Yi-6B | ---
pretty_name: Evaluation run of 01-ai/Yi-6B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 3 configuration, each one coresponding to one of the\
\ evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_01-ai__Yi-6B_public\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2023-11-08T12:58:11.094136](https://huggingface.co/datasets/open-llm-leaderboard/details_01-ai__Yi-6B_public/blob/main/results_2023-11-08T12-58-11.094136.json)(note\
\ that their might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4384437919463087,\n\
\ \"em_stderr\": 0.005081515214965134,\n \"f1\": 0.47321203859060423,\n\
\ \"f1_stderr\": 0.004951302124232466,\n \"acc\": 0.43228738137822953,\n\
\ \"acc_stderr\": 0.010759329857359324\n },\n \"harness|drop|3\": {\n\
\ \"em\": 0.4384437919463087,\n \"em_stderr\": 0.005081515214965134,\n\
\ \"f1\": 0.47321203859060423,\n \"f1_stderr\": 0.004951302124232466\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12661106899166036,\n \
\ \"acc_stderr\": 0.009159715283081087\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7379636937647988,\n \"acc_stderr\": 0.012358944431637561\n\
\ }\n}\n```"
repo_url: https://huggingface.co/01-ai/Yi-6B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_drop_3
data_files:
- split: 2023_11_08T12_58_11.094136
path:
- '**/details_harness|drop|3_2023-11-08T12-58-11.094136.parquet'
- split: latest
path:
- '**/details_harness|drop|3_2023-11-08T12-58-11.094136.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_11_08T12_58_11.094136
path:
- '**/details_harness|gsm8k|5_2023-11-08T12-58-11.094136.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-11-08T12-58-11.094136.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_11_08T12_58_11.094136
path:
- '**/details_harness|winogrande|5_2023-11-08T12-58-11.094136.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-11-08T12-58-11.094136.parquet'
- config_name: results
data_files:
- split: 2023_11_08T12_58_11.094136
path:
- results_2023-11-08T12-58-11.094136.parquet
- split: latest
path:
- results_2023-11-08T12-58-11.094136.parquet
---
# Dataset Card for Evaluation run of 01-ai/Yi-6B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/01-ai/Yi-6B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_01-ai__Yi-6B_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-08T12:58:11.094136](https://huggingface.co/datasets/open-llm-leaderboard/details_01-ai__Yi-6B_public/blob/main/results_2023-11-08T12-58-11.094136.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4384437919463087,
"em_stderr": 0.005081515214965134,
"f1": 0.47321203859060423,
"f1_stderr": 0.004951302124232466,
"acc": 0.43228738137822953,
"acc_stderr": 0.010759329857359324
},
"harness|drop|3": {
"em": 0.4384437919463087,
"em_stderr": 0.005081515214965134,
"f1": 0.47321203859060423,
"f1_stderr": 0.004951302124232466
},
"harness|gsm8k|5": {
"acc": 0.12661106899166036,
"acc_stderr": 0.009159715283081087
},
"harness|winogrande|5": {
"acc": 0.7379636937647988,
"acc_stderr": 0.012358944431637561
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
arbml/AQAD | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 23343014
num_examples: 17911
download_size: 3581662
dataset_size: 23343014
---
# Dataset Card for "AQAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Muvaraimodel/Textsummarization | ---
license: other
---
|
davidadamczyk/election | ---
dataset_info:
features:
- name: text
dtype: string
- name: text_label
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 65745.4
num_examples: 350
- name: test
num_bytes: 28176.6
num_examples: 150
download_size: 50277
dataset_size: 93922.0
---
# Dataset Card for "election"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
matekadlicsko/hungarian-news-translations | ---
language:
- hu
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- translation
dataset_info:
features:
- name: en
dtype: string
- name: hu
dtype: string
splits:
- name: train
num_bytes: 6957683.788345884
num_examples: 14840
- name: test
num_bytes: 2319384.2116541164
num_examples: 4947
download_size: 6204740
dataset_size: 9277068.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ELRAFA/RAFA123 | ---
license: openrail
---
|
leeseeun/tokenzied_512_news_2gb_data | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 2232750420
num_examples: 1088085
download_size: 0
dataset_size: 2232750420
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "tokenzied_512_news_2gb_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
OdiaGenAI/hindi_alpaca_dolly_67k_formatted | ---
task_categories:
- question-answering
language:
- hi
size_categories:
- 10K<n<100K
--- |
CyberHarem/suzuka_gozen_summevaca_fgo | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of suzuka_gozen_summevaca/鈴鹿御前〔サマバケ〕/铃鹿御前〔暑假〕 (Fate/Grand Order)
This is the dataset of suzuka_gozen_summevaca/鈴鹿御前〔サマバケ〕/铃鹿御前〔暑假〕 (Fate/Grand Order), containing 334 images and their tags.
The core tags of this character are `animal_ears, fox_ears, long_hair, breasts, animal_ear_fluff, yellow_eyes, fox_girl, fox_tail, large_breasts, blonde_hair, tail, dark_skin, dark-skinned_female`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 334 | 529.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzuka_gozen_summevaca_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 334 | 458.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzuka_gozen_summevaca_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 857 | 931.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/suzuka_gozen_summevaca_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/suzuka_gozen_summevaca_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, bare_shoulders, bracelet, cleavage, eyewear_on_head, gradient_hair, katana, leopard_print, looking_at_viewer, pink_bikini, pink_hair, solo, sunglasses, thighs, collarbone, necklace, smile, belly_chain, navel, one_eye_closed, sidelocks, thighlet, gyaru, blush, open_mouth, sheath |
| 1 | 6 |  |  |  |  |  | 1girl, bare_shoulders, belly_chain, bracelet, cleavage, collarbone, eyewear_on_head, gradient_hair, leopard_print, looking_at_viewer, navel, necklace, pink_bikini, pink_hair, solo, sunglasses, tan, thighs, grin, gyaru, sidelocks, thighlet |
| 2 | 8 |  |  |  |  |  | 1girl, bare_shoulders, blue_sky, bracelet, cleavage, collarbone, day, gradient_hair, leopard_print, pink_bikini, pink_hair, solo, tan, thighs, looking_at_viewer, necklace, ocean, thighlet, beach, belly_chain, navel, outdoors, blush, eyewear_on_head, open_mouth, sunglasses, sitting, grin, gyaru |
| 3 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, navel, nipples, open_mouth, penis, sex, solo_focus, thighs, vaginal, completely_nude, gradient_hair, looking_at_viewer, mosaic_censoring, pink_hair, pov, smile, spread_legs, sweat, collarbone, cowgirl_position, girl_on_top, necklace, tan, cum_in_pussy, eyewear_on_head, gyaru, missionary, on_back, pillow, sunglasses |
| 4 | 10 |  |  |  |  |  | 1girl, bare_shoulders, choker, cleavage, detached_sleeves, energy_wings, fur_trim, gyaru, looking_at_viewer, santa_costume, santa_hat, smile, solo, thighs, wide_sleeves, neck_bell, tan, blush, collarbone, navel, open_mouth, skirt |
| 5 | 6 |  |  |  |  |  | 1girl, bare_shoulders, choker, cleavage, collarbone, detached_sleeves, energy_wings, fur_trim, grin, gyaru, looking_at_viewer, neck_bell, santa_costume, solo, tan, wide_sleeves, navel, santa_hat, one_eye_closed, blush, skirt |
| 6 | 5 |  |  |  |  |  | 1boy, 1girl, bare_shoulders, blush, choker, cleavage, cum_on_breasts, fur_trim, gyaru, heart, hetero, neck_bell, paizuri, santa_costume, santa_hat, smile, solo_focus, tan, collarbone, ejaculation, energy_wings, penis, breasts_squeezed_together, detached_sleeves, looking_at_viewer, one_eye_closed, open_mouth, projectile_cum, tongue_out, wide_sleeves, facial, mosaic_censoring |
| 7 | 23 |  |  |  |  |  | 1girl, red_skirt, hakama_skirt, short_sleeves, solo, looking_at_viewer, katana, holding_sword, open_mouth, blush, shirt, wrist_scrunchie, bag, :d, socks |
| 8 | 9 |  |  |  |  |  | 1girl, smile, solo, katana, looking_at_viewer, tate_eboshi, wide_sleeves, open_mouth, hakama_short_skirt, red_skirt, ribbon-trimmed_sleeves, holding_weapon, bag, orange_hair |
| 9 | 9 |  |  |  |  |  | 1girl, elbow_gloves, highleg_leotard, looking_at_viewer, pink_leotard, race_queen, smile, solo, thighs, bare_shoulders, eyewear_on_head, sunglasses, tan, gradient_hair, pink_gloves, pink_hair, blush, thighhighs, cleavage, covered_navel, high_heels, sidelocks |
| 10 | 9 |  |  |  |  |  | 1boy, 1girl, blush, hetero, smile, solo_focus, breasts_squeezed_together, paizuri, pov, collarbone, looking_at_viewer, open_shirt, short_sleeves, white_shirt, cum_on_breasts, ejaculation, nipples, penis, sweat, orange_hair, simple_background, heart, skirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | bracelet | cleavage | eyewear_on_head | gradient_hair | katana | leopard_print | looking_at_viewer | pink_bikini | pink_hair | solo | sunglasses | thighs | collarbone | necklace | smile | belly_chain | navel | one_eye_closed | sidelocks | thighlet | gyaru | blush | open_mouth | sheath | tan | grin | blue_sky | day | ocean | beach | outdoors | sitting | 1boy | hetero | nipples | penis | sex | solo_focus | vaginal | completely_nude | mosaic_censoring | pov | spread_legs | sweat | cowgirl_position | girl_on_top | cum_in_pussy | missionary | on_back | pillow | choker | detached_sleeves | energy_wings | fur_trim | santa_costume | santa_hat | wide_sleeves | neck_bell | skirt | cum_on_breasts | heart | paizuri | ejaculation | breasts_squeezed_together | projectile_cum | tongue_out | facial | red_skirt | hakama_skirt | short_sleeves | holding_sword | shirt | wrist_scrunchie | bag | :d | socks | tate_eboshi | hakama_short_skirt | ribbon-trimmed_sleeves | holding_weapon | orange_hair | elbow_gloves | highleg_leotard | pink_leotard | race_queen | pink_gloves | thighhighs | covered_navel | high_heels | open_shirt | white_shirt | simple_background |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-----------------|:-----------|:-----------|:------------------|:----------------|:---------|:----------------|:--------------------|:--------------|:------------|:-------|:-------------|:---------|:-------------|:-----------|:--------|:--------------|:--------|:-----------------|:------------|:-----------|:--------|:--------|:-------------|:---------|:------|:-------|:-----------|:------|:--------|:--------|:-----------|:----------|:-------|:---------|:----------|:--------|:------|:-------------|:----------|:------------------|:-------------------|:------|:--------------|:--------|:-------------------|:--------------|:---------------|:-------------|:----------|:---------|:---------|:-------------------|:---------------|:-----------|:----------------|:------------|:---------------|:------------|:--------|:-----------------|:--------|:----------|:--------------|:----------------------------|:-----------------|:-------------|:---------|:------------|:---------------|:----------------|:----------------|:--------|:------------------|:------|:-----|:--------|:--------------|:---------------------|:-------------------------|:-----------------|:--------------|:---------------|:------------------|:---------------|:-------------|:--------------|:-------------|:----------------|:-------------|:-------------|:--------------|:--------------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | | X | X | | X | X | X | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 8 |  |  |  |  |  | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | | X | X | | | X | X | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | | X | X | | | X | | X | | X | X | X | X | X | | X | | | | X | X | X | | X | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 10 |  |  |  |  |  | X | X | | X | | | | | X | | | X | | X | X | | X | | X | | | | X | X | X | | X | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | X | | X | | | | | X | | | X | | | X | | | | X | X | | | X | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 5 |  |  |  |  |  | X | X | | X | | | | | X | | | | | | X | | X | | | X | | | X | X | X | | X | | | | | | | | X | X | | X | | X | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 23 |  |  |  |  |  | X | | | | | | X | | X | | | X | | | | | | | | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 8 | 9 |  |  |  |  |  | X | | | | | | X | | X | | | X | | | | | X | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | X | | | X | X | X | X | X | | | | | | | | | | | |
| 9 | 9 |  |  |  |  |  | X | X | | X | X | X | | | X | | X | X | X | X | | | X | | | | X | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | | | |
| 10 | 9 |  |  |  |  |  | X | | | | | | | | X | | | | | | X | | X | | | | | | | X | | | | | | | | | | | X | X | X | X | | X | | | | X | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | X | | | | | | | | | | | X | | | | | | | | | X | X | X |
|
sachintripathi04/gen_data | ---
license: apache-2.0
dataset_info:
features:
- name: context
dtype: string
- name: statements
list:
- name: source
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 3057
num_examples: 5
download_size: 6161
dataset_size: 3057
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
d0rj/geo-reviews-dataset-2023 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: address
dtype: string
- name: name_ru
dtype: string
- name: rating
dtype: int64
- name: rubrics
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 369301294
num_examples: 500000
download_size: 170880716
dataset_size: 369301294
license: mit
task_categories:
- text-classification
- sentence-similarity
- token-classification
- text2text-generation
language: ru
multilinguality:
- monolingual
tags:
- reviews
- yandex
pretty_name: Geo Reviews Dataset 2023
size_categories:
- 100K<n<1M
source_datasets:
- original
---
# Geo Reviews Dataset 2023
Yandex is making available the largest Russian-language dataset of reviews about organizations published on Yandex Maps.
Use it for academic and research purposes, and share your results with us in Issues.
## Dataset Description
- **Repository:** https://github.com/yandex/geo-reviews-dataset-2023
- **Paper:** [Yandex Maps opens the largest Russian-language dataset of reviews about organizations](https://habr.com/ru/companies/yandex/articles/763832/)
- **Point of Contact:** [opensource@yandex-team.ru](mailto:opensource@yandex-team.ru)
## Description
- 500,000 unique reviews
- Only reviews about organizations in Russia
- Available on Yandex Maps
- Published from January to July 2023
- The dataset does not contain short one-word reviews
- Reviews have been cleared of personal data (phone numbers, email addresses)
## Dataset Fields
The dataset contains the following attributes:
- Organization address (`address`)
- Organization name (`name_ru`)
- List of categories to which the organization belongs (`rubrics`)
- User rating from 0 to 5 (`rating`)
- Review text (`text`)
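Once loaded (for example via `datasets.load_dataset("d0rj/geo-reviews-dataset-2023")`), each record carries exactly these five fields. As a minimal sketch of working with them, the snippet below computes the average rating per rubric over a few hypothetical sample rows that mirror the schema (the rows themselves are invented for illustration, not taken from the dataset):

```python
from collections import defaultdict

# Hypothetical records mirroring the dataset's five fields.
sample = [
    {"address": "Москва, ул. Ленина, 1", "name_ru": "Кафе Ромашка",
     "rating": 5, "rubrics": "Кафе", "text": "Отличное место!"},
    {"address": "Санкт-Петербург, Невский пр., 10", "name_ru": "Музей",
     "rating": 3, "rubrics": "Музей", "text": "Неплохо, но дорого."},
    {"address": "Казань, ул. Баумана, 2", "name_ru": "Кафе Восток",
     "rating": 4, "rubrics": "Кафе", "text": "Вкусно."},
]

def mean_rating_by_rubric(records):
    """Average the user rating within each rubric string."""
    totals = defaultdict(lambda: [0, 0])  # rubric -> [sum, count]
    for r in records:
        t = totals[r["rubrics"]]
        t[0] += r["rating"]
        t[1] += 1
    return {rub: s / n for rub, (s, n) in totals.items()}

print(mean_rating_by_rubric(sample))  # → {'Кафе': 4.5, 'Музей': 3.0}
```

The same aggregation applies unchanged to the full 500,000-row `train` split, since the field names match.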
## License
Distributed under MIT license.
## Contacts
For any inquiries or questions regarding the dataset, please contact us at [opensource@yandex-team.ru](mailto:opensource@yandex-team.ru). |
open-llm-leaderboard/details_Biomimicry-AI__ANIMA-Nectar-v2 | ---
pretty_name: Evaluation run of Biomimicry-AI/ANIMA-Nectar-v2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Biomimicry-AI/ANIMA-Nectar-v2](https://huggingface.co/Biomimicry-AI/ANIMA-Nectar-v2)\
\ on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
  \nThe dataset is composed of 63 configurations, each one corresponding to one of\
  \ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
  \ timestamp of the run. The \"train\" split is always pointing to the latest results.\n\
  \nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Biomimicry-AI__ANIMA-Nectar-v2\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
  These are the [latest results from run 2023-12-08T01:10:29.378715](https://huggingface.co/datasets/open-llm-leaderboard/details_Biomimicry-AI__ANIMA-Nectar-v2/blob/main/results_2023-12-08T01-10-29.378715.json) (note\
  \ that there might be results for other tasks in the repos if successive evals didn't\
  \ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5370126868865708,\n\
\ \"acc_stderr\": 0.03419039205847925,\n \"acc_norm\": 0.5457430578340143,\n\
\ \"acc_norm_stderr\": 0.03502116674648576,\n \"mc1\": 0.3427172582619339,\n\
\ \"mc1_stderr\": 0.016614949385347036,\n \"mc2\": 0.49035413533539823,\n\
\ \"mc2_stderr\": 0.014716498676044533\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.4974402730375427,\n \"acc_stderr\": 0.014611199329843784,\n\
\ \"acc_norm\": 0.5324232081911263,\n \"acc_norm_stderr\": 0.014580637569995416\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5637323242381995,\n\
\ \"acc_stderr\": 0.004949080334816018,\n \"acc_norm\": 0.7662816172077276,\n\
\ \"acc_norm_stderr\": 0.004223302177263008\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \
\ \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n\
\ \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.45925925925925926,\n\
\ \"acc_stderr\": 0.04304979692464243,\n \"acc_norm\": 0.45925925925925926,\n\
\ \"acc_norm_stderr\": 0.04304979692464243\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.5460526315789473,\n \"acc_stderr\": 0.04051646342874142,\n\
\ \"acc_norm\": 0.5460526315789473,\n \"acc_norm_stderr\": 0.04051646342874142\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n\
\ \"acc_stderr\": 0.04999999999999999,\n \"acc_norm\": 0.55,\n \
\ \"acc_norm_stderr\": 0.04999999999999999\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6037735849056604,\n \"acc_stderr\": 0.030102793781791197,\n\
\ \"acc_norm\": 0.6037735849056604,\n \"acc_norm_stderr\": 0.030102793781791197\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5347222222222222,\n\
\ \"acc_stderr\": 0.041711158581816184,\n \"acc_norm\": 0.5347222222222222,\n\
\ \"acc_norm_stderr\": 0.041711158581816184\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \
\ \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n\
\ \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5375722543352601,\n\
\ \"acc_stderr\": 0.0380168510452446,\n \"acc_norm\": 0.5375722543352601,\n\
\ \"acc_norm_stderr\": 0.0380168510452446\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04690650298201943,\n\
\ \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04690650298201943\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n\
\ \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.4808510638297872,\n \"acc_stderr\": 0.03266204299064678,\n\
\ \"acc_norm\": 0.4808510638297872,\n \"acc_norm_stderr\": 0.03266204299064678\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.35964912280701755,\n\
\ \"acc_stderr\": 0.04514496132873633,\n \"acc_norm\": 0.35964912280701755,\n\
\ \"acc_norm_stderr\": 0.04514496132873633\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.04164188720169375,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.04164188720169375\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.3915343915343915,\n \"acc_stderr\": 0.025138091388851102,\n \"\
acc_norm\": 0.3915343915343915,\n \"acc_norm_stderr\": 0.025138091388851102\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3333333333333333,\n\
\ \"acc_stderr\": 0.04216370213557835,\n \"acc_norm\": 0.3333333333333333,\n\
\ \"acc_norm_stderr\": 0.04216370213557835\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \
\ \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6387096774193548,\n\
\ \"acc_stderr\": 0.027327548447957543,\n \"acc_norm\": 0.6387096774193548,\n\
\ \"acc_norm_stderr\": 0.027327548447957543\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.43842364532019706,\n \"acc_stderr\": 0.034912078574865175,\n\
\ \"acc_norm\": 0.43842364532019706,\n \"acc_norm_stderr\": 0.034912078574865175\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\"\
: 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.6303030303030303,\n \"acc_stderr\": 0.03769430314512567,\n\
\ \"acc_norm\": 0.6303030303030303,\n \"acc_norm_stderr\": 0.03769430314512567\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.696969696969697,\n \"acc_stderr\": 0.032742879140268674,\n \"\
acc_norm\": 0.696969696969697,\n \"acc_norm_stderr\": 0.032742879140268674\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.7305699481865285,\n \"acc_stderr\": 0.03201867122877794,\n\
\ \"acc_norm\": 0.7305699481865285,\n \"acc_norm_stderr\": 0.03201867122877794\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.4564102564102564,\n \"acc_stderr\": 0.02525448542479961,\n \
\ \"acc_norm\": 0.4564102564102564,\n \"acc_norm_stderr\": 0.02525448542479961\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.2814814814814815,\n \"acc_stderr\": 0.02742001935094527,\n \
\ \"acc_norm\": 0.2814814814814815,\n \"acc_norm_stderr\": 0.02742001935094527\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.49159663865546216,\n \"acc_stderr\": 0.03247390276569669,\n\
\ \"acc_norm\": 0.49159663865546216,\n \"acc_norm_stderr\": 0.03247390276569669\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.37748344370860926,\n \"acc_stderr\": 0.0395802723112157,\n \"\
acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.0395802723112157\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.7302752293577982,\n \"acc_stderr\": 0.019028486711115438,\n \"\
acc_norm\": 0.7302752293577982,\n \"acc_norm_stderr\": 0.019028486711115438\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.35648148148148145,\n \"acc_stderr\": 0.032664783315272714,\n \"\
acc_norm\": 0.35648148148148145,\n \"acc_norm_stderr\": 0.032664783315272714\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.6470588235294118,\n \"acc_stderr\": 0.03354092437591518,\n \"\
acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.03354092437591518\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.679324894514768,\n \"acc_stderr\": 0.03038193194999041,\n \
\ \"acc_norm\": 0.679324894514768,\n \"acc_norm_stderr\": 0.03038193194999041\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6143497757847534,\n\
\ \"acc_stderr\": 0.03266842214289201,\n \"acc_norm\": 0.6143497757847534,\n\
\ \"acc_norm_stderr\": 0.03266842214289201\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.648854961832061,\n \"acc_stderr\": 0.04186445163013751,\n\
\ \"acc_norm\": 0.648854961832061,\n \"acc_norm_stderr\": 0.04186445163013751\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.6528925619834711,\n \"acc_stderr\": 0.043457245702925335,\n \"\
acc_norm\": 0.6528925619834711,\n \"acc_norm_stderr\": 0.043457245702925335\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6203703703703703,\n\
\ \"acc_stderr\": 0.04691521224077742,\n \"acc_norm\": 0.6203703703703703,\n\
\ \"acc_norm_stderr\": 0.04691521224077742\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.6625766871165644,\n \"acc_stderr\": 0.03714908409935574,\n\
\ \"acc_norm\": 0.6625766871165644,\n \"acc_norm_stderr\": 0.03714908409935574\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n\
\ \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \
\ \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.6796116504854369,\n \"acc_stderr\": 0.04620284082280041,\n\
\ \"acc_norm\": 0.6796116504854369,\n \"acc_norm_stderr\": 0.04620284082280041\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8162393162393162,\n\
\ \"acc_stderr\": 0.025372139671722933,\n \"acc_norm\": 0.8162393162393162,\n\
\ \"acc_norm_stderr\": 0.025372139671722933\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.722860791826309,\n\
\ \"acc_stderr\": 0.016005636294122425,\n \"acc_norm\": 0.722860791826309,\n\
\ \"acc_norm_stderr\": 0.016005636294122425\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.5549132947976878,\n \"acc_stderr\": 0.026756255129663762,\n\
\ \"acc_norm\": 0.5549132947976878,\n \"acc_norm_stderr\": 0.026756255129663762\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.35977653631284917,\n\
\ \"acc_stderr\": 0.016051419760310267,\n \"acc_norm\": 0.35977653631284917,\n\
\ \"acc_norm_stderr\": 0.016051419760310267\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.5392156862745098,\n \"acc_stderr\": 0.028541722692618874,\n\
\ \"acc_norm\": 0.5392156862745098,\n \"acc_norm_stderr\": 0.028541722692618874\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6012861736334405,\n\
\ \"acc_stderr\": 0.027809322585774503,\n \"acc_norm\": 0.6012861736334405,\n\
\ \"acc_norm_stderr\": 0.027809322585774503\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.6141975308641975,\n \"acc_stderr\": 0.027085401226132146,\n\
\ \"acc_norm\": 0.6141975308641975,\n \"acc_norm_stderr\": 0.027085401226132146\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.35106382978723405,\n \"acc_stderr\": 0.028473501272963768,\n \
\ \"acc_norm\": 0.35106382978723405,\n \"acc_norm_stderr\": 0.028473501272963768\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3767926988265971,\n\
\ \"acc_stderr\": 0.012376459593894402,\n \"acc_norm\": 0.3767926988265971,\n\
\ \"acc_norm_stderr\": 0.012376459593894402\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.49264705882352944,\n \"acc_stderr\": 0.030369552523902173,\n\
\ \"acc_norm\": 0.49264705882352944,\n \"acc_norm_stderr\": 0.030369552523902173\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.5016339869281046,\n \"acc_stderr\": 0.020227726838150117,\n \
\ \"acc_norm\": 0.5016339869281046,\n \"acc_norm_stderr\": 0.020227726838150117\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6272727272727273,\n\
\ \"acc_stderr\": 0.04631381319425464,\n \"acc_norm\": 0.6272727272727273,\n\
\ \"acc_norm_stderr\": 0.04631381319425464\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.6244897959183674,\n \"acc_stderr\": 0.03100120903989484,\n\
\ \"acc_norm\": 0.6244897959183674,\n \"acc_norm_stderr\": 0.03100120903989484\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6915422885572139,\n\
\ \"acc_stderr\": 0.03265819588512698,\n \"acc_norm\": 0.6915422885572139,\n\
\ \"acc_norm_stderr\": 0.03265819588512698\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \
\ \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932263\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.42168674698795183,\n\
\ \"acc_stderr\": 0.03844453181770917,\n \"acc_norm\": 0.42168674698795183,\n\
\ \"acc_norm_stderr\": 0.03844453181770917\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.7134502923976608,\n \"acc_stderr\": 0.03467826685703826,\n\
\ \"acc_norm\": 0.7134502923976608,\n \"acc_norm_stderr\": 0.03467826685703826\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3427172582619339,\n\
\ \"mc1_stderr\": 0.016614949385347036,\n \"mc2\": 0.49035413533539823,\n\
\ \"mc2_stderr\": 0.014716498676044533\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7411207576953434,\n \"acc_stderr\": 0.012310515810993376\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.05534495830174375,\n \
\ \"acc_stderr\": 0.006298221796179562\n }\n}\n```"
repo_url: https://huggingface.co/Biomimicry-AI/ANIMA-Nectar-v2
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|arc:challenge|25_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|gsm8k|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hellaswag|10_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-management|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T01-10-29.378715.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2023-12-08T01-10-29.378715.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- '**/details_harness|winogrande|5_2023-12-08T01-10-29.378715.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2023-12-08T01-10-29.378715.parquet'
- config_name: results
data_files:
- split: 2023_12_08T01_10_29.378715
path:
- results_2023-12-08T01-10-29.378715.parquet
- split: latest
path:
- results_2023-12-08T01-10-29.378715.parquet
---
# Dataset Card for Evaluation run of Biomimicry-AI/ANIMA-Nectar-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Biomimicry-AI/ANIMA-Nectar-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** clementine@hf.co
### Dataset Summary
Dataset automatically created during the evaluation run of model [Biomimicry-AI/ANIMA-Nectar-v2](https://huggingface.co/Biomimicry-AI/ANIMA-Nectar-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Biomimicry-AI__ANIMA-Nectar-v2",
"harness_winogrande_5",
split="train")
```
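The config names in this repo follow a regular pattern: `harness_`, then the harness task name with `-` and `:` replaced by `_`, then the few-shot count. A small unofficial helper (the function name is ours, not part of any library) to derive them:

```python
# Unofficial sketch: derive a config name for this repo from a harness
# task name and its few-shot count, matching the pattern seen above.
def config_name(task: str, shots: int) -> str:
    return f"harness_{task.replace('-', '_').replace(':', '_')}_{shots}"

print(config_name("hendrycksTest-world_religions", 5))
# harness_hendrycksTest_world_religions_5
print(config_name("truthfulqa:mc", 0))
# harness_truthfulqa_mc_0
```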
## Latest results
These are the [latest results from run 2023-12-08T01:10:29.378715](https://huggingface.co/datasets/open-llm-leaderboard/details_Biomimicry-AI__ANIMA-Nectar-v2/blob/main/results_2023-12-08T01-10-29.378715.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.5370126868865708,
"acc_stderr": 0.03419039205847925,
"acc_norm": 0.5457430578340143,
"acc_norm_stderr": 0.03502116674648576,
"mc1": 0.3427172582619339,
"mc1_stderr": 0.016614949385347036,
"mc2": 0.49035413533539823,
"mc2_stderr": 0.014716498676044533
},
"harness|arc:challenge|25": {
"acc": 0.4974402730375427,
"acc_stderr": 0.014611199329843784,
"acc_norm": 0.5324232081911263,
"acc_norm_stderr": 0.014580637569995416
},
"harness|hellaswag|10": {
"acc": 0.5637323242381995,
"acc_stderr": 0.004949080334816018,
"acc_norm": 0.7662816172077276,
"acc_norm_stderr": 0.004223302177263008
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.45925925925925926,
"acc_stderr": 0.04304979692464243,
"acc_norm": 0.45925925925925926,
"acc_norm_stderr": 0.04304979692464243
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5460526315789473,
"acc_stderr": 0.04051646342874142,
"acc_norm": 0.5460526315789473,
"acc_norm_stderr": 0.04051646342874142
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.04999999999999999,
"acc_norm": 0.55,
"acc_norm_stderr": 0.04999999999999999
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6037735849056604,
"acc_stderr": 0.030102793781791197,
"acc_norm": 0.6037735849056604,
"acc_norm_stderr": 0.030102793781791197
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5347222222222222,
"acc_stderr": 0.041711158581816184,
"acc_norm": 0.5347222222222222,
"acc_norm_stderr": 0.041711158581816184
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5375722543352601,
"acc_stderr": 0.0380168510452446,
"acc_norm": 0.5375722543352601,
"acc_norm_stderr": 0.0380168510452446
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04690650298201943,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04690650298201943
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4808510638297872,
"acc_stderr": 0.03266204299064678,
"acc_norm": 0.4808510638297872,
"acc_norm_stderr": 0.03266204299064678
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.35964912280701755,
"acc_stderr": 0.04514496132873633,
"acc_norm": 0.35964912280701755,
"acc_norm_stderr": 0.04514496132873633
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.04164188720169375,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.04164188720169375
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3915343915343915,
"acc_stderr": 0.025138091388851102,
"acc_norm": 0.3915343915343915,
"acc_norm_stderr": 0.025138091388851102
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04216370213557835,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04216370213557835
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6387096774193548,
"acc_stderr": 0.027327548447957543,
"acc_norm": 0.6387096774193548,
"acc_norm_stderr": 0.027327548447957543
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.43842364532019706,
"acc_stderr": 0.034912078574865175,
"acc_norm": 0.43842364532019706,
"acc_norm_stderr": 0.034912078574865175
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6303030303030303,
"acc_stderr": 0.03769430314512567,
"acc_norm": 0.6303030303030303,
"acc_norm_stderr": 0.03769430314512567
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.696969696969697,
"acc_stderr": 0.032742879140268674,
"acc_norm": 0.696969696969697,
"acc_norm_stderr": 0.032742879140268674
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7305699481865285,
"acc_stderr": 0.03201867122877794,
"acc_norm": 0.7305699481865285,
"acc_norm_stderr": 0.03201867122877794
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4564102564102564,
"acc_stderr": 0.02525448542479961,
"acc_norm": 0.4564102564102564,
"acc_norm_stderr": 0.02525448542479961
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2814814814814815,
"acc_stderr": 0.02742001935094527,
"acc_norm": 0.2814814814814815,
"acc_norm_stderr": 0.02742001935094527
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.49159663865546216,
"acc_stderr": 0.03247390276569669,
"acc_norm": 0.49159663865546216,
"acc_norm_stderr": 0.03247390276569669
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.0395802723112157,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.0395802723112157
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7302752293577982,
"acc_stderr": 0.019028486711115438,
"acc_norm": 0.7302752293577982,
"acc_norm_stderr": 0.019028486711115438
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.35648148148148145,
"acc_stderr": 0.032664783315272714,
"acc_norm": 0.35648148148148145,
"acc_norm_stderr": 0.032664783315272714
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.03354092437591518,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.03354092437591518
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.679324894514768,
"acc_stderr": 0.03038193194999041,
"acc_norm": 0.679324894514768,
"acc_norm_stderr": 0.03038193194999041
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6143497757847534,
"acc_stderr": 0.03266842214289201,
"acc_norm": 0.6143497757847534,
"acc_norm_stderr": 0.03266842214289201
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.648854961832061,
"acc_stderr": 0.04186445163013751,
"acc_norm": 0.648854961832061,
"acc_norm_stderr": 0.04186445163013751
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6528925619834711,
"acc_stderr": 0.043457245702925335,
"acc_norm": 0.6528925619834711,
"acc_norm_stderr": 0.043457245702925335
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6203703703703703,
"acc_stderr": 0.04691521224077742,
"acc_norm": 0.6203703703703703,
"acc_norm_stderr": 0.04691521224077742
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6625766871165644,
"acc_stderr": 0.03714908409935574,
"acc_norm": 0.6625766871165644,
"acc_norm_stderr": 0.03714908409935574
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.6796116504854369,
"acc_stderr": 0.04620284082280041,
"acc_norm": 0.6796116504854369,
"acc_norm_stderr": 0.04620284082280041
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8162393162393162,
"acc_stderr": 0.025372139671722933,
"acc_norm": 0.8162393162393162,
"acc_norm_stderr": 0.025372139671722933
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.722860791826309,
"acc_stderr": 0.016005636294122425,
"acc_norm": 0.722860791826309,
"acc_norm_stderr": 0.016005636294122425
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5549132947976878,
"acc_stderr": 0.026756255129663762,
"acc_norm": 0.5549132947976878,
"acc_norm_stderr": 0.026756255129663762
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.35977653631284917,
"acc_stderr": 0.016051419760310267,
"acc_norm": 0.35977653631284917,
"acc_norm_stderr": 0.016051419760310267
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5392156862745098,
"acc_stderr": 0.028541722692618874,
"acc_norm": 0.5392156862745098,
"acc_norm_stderr": 0.028541722692618874
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6012861736334405,
"acc_stderr": 0.027809322585774503,
"acc_norm": 0.6012861736334405,
"acc_norm_stderr": 0.027809322585774503
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6141975308641975,
"acc_stderr": 0.027085401226132146,
"acc_norm": 0.6141975308641975,
"acc_norm_stderr": 0.027085401226132146
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.35106382978723405,
"acc_stderr": 0.028473501272963768,
"acc_norm": 0.35106382978723405,
"acc_norm_stderr": 0.028473501272963768
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3767926988265971,
"acc_stderr": 0.012376459593894402,
"acc_norm": 0.3767926988265971,
"acc_norm_stderr": 0.012376459593894402
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.49264705882352944,
"acc_stderr": 0.030369552523902173,
"acc_norm": 0.49264705882352944,
"acc_norm_stderr": 0.030369552523902173
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5016339869281046,
"acc_stderr": 0.020227726838150117,
"acc_norm": 0.5016339869281046,
"acc_norm_stderr": 0.020227726838150117
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6272727272727273,
"acc_stderr": 0.04631381319425464,
"acc_norm": 0.6272727272727273,
"acc_norm_stderr": 0.04631381319425464
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6244897959183674,
"acc_stderr": 0.03100120903989484,
"acc_norm": 0.6244897959183674,
"acc_norm_stderr": 0.03100120903989484
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6915422885572139,
"acc_stderr": 0.03265819588512698,
"acc_norm": 0.6915422885572139,
"acc_norm_stderr": 0.03265819588512698
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-virology|5": {
"acc": 0.42168674698795183,
"acc_stderr": 0.03844453181770917,
"acc_norm": 0.42168674698795183,
"acc_norm_stderr": 0.03844453181770917
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7134502923976608,
"acc_stderr": 0.03467826685703826,
"acc_norm": 0.7134502923976608,
"acc_norm_stderr": 0.03467826685703826
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3427172582619339,
"mc1_stderr": 0.016614949385347036,
"mc2": 0.49035413533539823,
"mc2_stderr": 0.014716498676044533
},
"harness|winogrande|5": {
"acc": 0.7411207576953434,
"acc_stderr": 0.012310515810993376
},
"harness|gsm8k|5": {
"acc": 0.05534495830174375,
"acc_stderr": 0.006298221796179562
}
}
```
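As a quick illustration (not part of the official harness code), the per-task `acc` values in a results dict shaped like the JSON above can be averaged by filtering on the task prefix — shown here over a shortened sample:

```python
# Unofficial sketch: average the MMLU (hendrycksTest) accuracies from a
# results dict shaped like the JSON above, using a shortened sample.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.4},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5},
    "harness|winogrande|5": {"acc": 0.74},  # not an MMLU task, filtered out
}

mmlu_accs = [
    v["acc"] for k, v in results.items()
    if k.startswith("harness|hendrycksTest-")
]
mmlu_avg = sum(mmlu_accs) / len(mmlu_accs)
print(round(mmlu_avg, 2))  # 0.45
```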
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
htdung167/vin100h-preprocessed | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: original_sentence
dtype: string
- name: preprocessed_sentence
dtype: string
splits:
- name: train
num_bytes: 11482140431.422
num_examples: 56427
download_size: 11661321581
dataset_size: 11482140431.422
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
multidefmod/dore | ---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- pt
---
[<img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" />](http://creativecommons.org/licenses/by-nc-sa/4.0/)
***You must agree to the [license](https://huggingface.co/datasets/multidefmod/dore/blob/main/DORE_license.txt) and terms of use before using the dataset in this repo.***
# DORE: Definition MOdelling in PoRtuguEse
This repository introduces **DORE**, a comprehensive corpus of over 100,000 definitions from Portuguese dictionaries. Alongside **DORE**, we introduce the models used to perform Definition Modelling (DM) in Portuguese. The release of **DORE** aims to fill the gap in resources for Automatic Definition Generation, or DM, in Portuguese. **DORE** is the first dataset released for Portuguese DM.
## Data Collection
For **version 1.0**, we collected (lemma, definition) pairs from two Portuguese e-dictionaries. See the following table for more details.
| Source | Amount |
|-------------------|----------|
| Wiktionary *( <https://pt.wiktionary.org/wiki/Wikcion%C3%A1rio:P%C3%A1gina_principal> )* | 19,038 |
| Dicio *( <https://www.dicio.com.br/> )* | 83,981 |
| **Total** | **103,019** |
An excerpt from one of the .json files is shown below.
```json
[{"id": "pt.024", "lemma": "trouxa", "gloss": "pessoa que se deixa enganar com facilidade; quem é facilmente enganado ou iludido: o trouxa ainda acredita em tudo que ele fala."},
{"id": "pt.025", "lemma": "boxeador", "gloss": "pugilista; lutador de boxe; pessoa que, profissionalmente ou não, pratica boxe ou pugilismo."}]
```
## Data
**DORE** is available in [HuggingFace](https://huggingface.co/datasets/multidefmod/dore) and can be downloaded using the following code.
```python
from datasets import load_dataset
dore = load_dataset('multidefmod/dore')
```
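Once loaded, each record exposes the `id`, `lemma`, and `gloss` fields shown in the JSON sample above. As a minimal sketch of how a record could be turned into a (source, target) pair for definition modelling (the Portuguese prompt template below is illustrative, not the one used by the DORE authors):

```python
def to_dm_pair(record):
    """Turn one DORE record into a (source, target) pair for definition
    modelling. The 'lemma' and 'gloss' keys match the JSON sample above;
    the prompt template is an illustrative choice, not the authors' own."""
    source = f"Defina a palavra: {record['lemma']}"
    target = record["gloss"]
    return source, target

pair = to_dm_pair({
    "id": "pt.024",
    "lemma": "trouxa",
    "gloss": "pessoa que se deixa enganar com facilidade",
})
```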
## Citation
If you are using the dataset or the models, please cite the following paper.
~~~
@inproceedings{dore2024,
author={Furtado, Anna B Dimas and Ranasinghe, Tharindu and Blain, Fréderic and Mitkov, Ruslan},
title={{DORE: A Dataset For Portuguese Definition Generation}},
booktitle={The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
year={2024},
month={May},
}
~~~ |
hackathon-pln-es/readability-es-caes | ---
annotations_creators:
- other
language_creators:
- other
language:
- es
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: readability-es-caes
tags:
- readability
---
# Dataset Card for [readability-es-caes]
## Dataset Description
### Dataset Summary
This dataset is a compilation of short articles from websites dedicated to learning Spanish as a second language. These articles have been compiled from the following sources:
- [CAES corpus](http://galvan.usc.es/caes/) (Martínez et al., 2019): the "Corpus de Aprendices del Español" is a collection of texts produced by Spanish L2 learners from Spanish learning centers and universities. These texts are produced by students of all levels (A1 to C1), with different backgrounds (11 native languages) and levels of experience.
### Languages
Spanish
## Dataset Structure
Texts are tokenized to create a paragraph-based dataset.
### Data Fields
The dataset is formatted as JSON Lines and includes the following fields:
- **Category:** when available, this includes the level of this text according to the Common European Framework of Reference for Languages (CEFR).
- **Level:** standardized readability level: simple or complex.
- **Level-3:** standardized readability level: basic, intermediate or advanced.
- **Text:** original text formatted into sentences.
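Because the dataset is distributed as JSON Lines, each line is an independent record. A minimal parsing sketch, assuming the fields are stored under lowercase keys (`category`, `level`, `level-3`, `text`); the actual key names should be verified against the files:

```python
import json

def parse_jsonl(text):
    """Parse JSON-lines content into a list of dicts, skipping blank lines.
    The field names used below are assumptions based on the card,
    not verified against the released files."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

sample = '{"category": "A1", "level": "simple", "level-3": "basic", "text": "Hola. Me llamo Ana."}'
records = parse_jsonl(sample)
labels = [r["level"] for r in records]  # binary readability: "simple" or "complex"
```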
## Additional Information
### Licensing Information
https://creativecommons.org/licenses/by-nc-sa/4.0/
### Citation Information
Please cite this page to give credit to the authors :)
### Team
- [Laura Vásquez-Rodríguez](https://lmvasque.github.io/)
- [Pedro Cuenca](https://twitter.com/pcuenq)
- [Sergio Morales](https://www.fireblend.com/)
- [Fernando Alva-Manchego](https://feralvam.github.io/)
|
kpriyanshu256/MultiTabQA-multitable_pretraining-train-v2-30000 | ---
dataset_info:
features:
- name: tables
sequence: string
- name: table_names
sequence: string
- name: query
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: target
dtype: string
- name: source_latex
dtype: string
- name: target_latex
dtype: string
- name: source_html
dtype: string
- name: target_html
dtype: string
- name: source_markdown
dtype: string
- name: target_markdown
dtype: string
splits:
- name: train
num_bytes: 6927335221
num_examples: 1000
download_size: 1391575372
dataset_size: 6927335221
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ssbuild/alpaca_firefly | ---
license: agpl-3.0
---
|
bnv20/llama2dataset1 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 415874
num_examples: 1288
download_size: 129036
dataset_size: 415874
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Mizukico/SD-colab | ---
license: openrail
---
|
iamnamas/2letter-condgentext2image | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 4861791.8
num_examples: 9600
download_size: 4927847
dataset_size: 4861791.8
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
CyberHarem/minori_bluearchive | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of minori/安守ミノリ/实梨 (Blue Archive)
This is the dataset of minori/安守ミノリ/实梨 (Blue Archive), containing 45 images and their tags.
The core tags of this character are `long_hair, halo, hard_hat, black_hair, hair_between_eyes, white_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 45 | 81.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minori_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 1200 | 45 | 66.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minori_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 119 | 141.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/minori_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/minori_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 14 |  |  |  |  |  | 1girl, helmet, jacket, long_sleeves, solo, looking_at_viewer, open_clothes, white_scarf, white_skirt, armband, black_pantyhose, fur_trim, belt, holding_megaphone, pleated_skirt, simple_background, blush, coat, white_shirt, hand_in_pocket, white_background, green_eyes, grey_eyes, open_mouth |
| 1 | 6 |  |  |  |  |  | 1girl, armband, blush, fur_trim, helmet, simple_background, solo, upper_body, white_background, black_jacket, long_sleeves, white_scarf, open_mouth, looking_at_viewer, open_jacket, sweat |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | helmet | jacket | long_sleeves | solo | looking_at_viewer | open_clothes | white_scarf | white_skirt | armband | black_pantyhose | fur_trim | belt | holding_megaphone | pleated_skirt | simple_background | blush | coat | white_shirt | hand_in_pocket | white_background | green_eyes | grey_eyes | open_mouth | upper_body | black_jacket | open_jacket | sweat |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:---------|:---------------|:-------|:--------------------|:---------------|:--------------|:--------------|:----------|:------------------|:-----------|:-------|:--------------------|:----------------|:--------------------|:--------|:-------|:--------------|:-----------------|:-------------------|:-------------|:------------|:-------------|:-------------|:---------------|:--------------|:--------|
| 0 | 14 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | |
| 1 | 6 |  |  |  |  |  | X | X | | X | X | X | | X | | X | | X | | | | X | X | | | | X | | | X | X | X | X | X |
|
anvelezec/maderapp | ---
license: mit
---
To automate image classification, Maderapp's botanical team worked many hours to collect, validate, and correctly label 25,000 macroscopic tree images covering 25 species from the Peruvian Amazon.
The team captured these images with a mobile device's camera and a digital microscope. Each image has a resolution of 480×640 pixels and three channels.
|
surajbijjahalli/semantic_seg_atl_resized | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 914897071.975
num_examples: 1407
download_size: 909035241
dataset_size: 914897071.975
---
# Dataset Card for "semantic_seg_atl_resized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
maya2023/edit_fazcolab | ---
license: openrail
---
|
qgallouedec/prj_gia_dataset_metaworld_faucet_close_v2_1111 | ---
library_name: gia
tags:
- deep-reinforcement-learning
- reinforcement-learning
- gia
- multi-task
- multi-modal
- imitation-learning
- offline-reinforcement-learning
---
An imitation learning dataset for the faucet-close-v2 environment, sampled from the faucet-close-v2 policy.
This dataset was created as part of the Generally Intelligent Agents (GIA) project: https://github.com/huggingface/gia
## Load dataset
First, clone it with
```sh
git clone https://huggingface.co/datasets/qgallouedec/prj_gia_dataset_metaworld_faucet_close_v2_1111
```
Then, load it with
```python
import numpy as np
dataset = np.load("prj_gia_dataset_metaworld_faucet_close_v2_1111/dataset.npy", allow_pickle=True).item()
print(dataset.keys()) # dict_keys(['observations', 'actions', 'dones', 'rewards'])
```
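Because the arrays above are flat, episode boundaries must be recovered from the `dones` flags before the trajectories can be used for imitation learning. A minimal sketch (key names taken from the `dict_keys` printed above; the toy arrays below are illustrative):

```python
import numpy as np

def split_episodes(dataset):
    """Split flat arrays into per-episode chunks using the 'dones' flags.
    Any transitions after the final True in 'dones' (an unfinished
    episode, if present) are dropped."""
    ends = np.flatnonzero(dataset["dones"]) + 1  # one past each episode end
    starts = np.concatenate(([0], ends[:-1]))    # episode start indices
    return [
        {key: dataset[key][s:e] for key in ("observations", "actions", "rewards")}
        for s, e in zip(starts, ends)
    ]

# toy dataset with two episodes (lengths 2 and 3)
toy = {
    "observations": np.arange(5),
    "actions": np.arange(5),
    "rewards": np.ones(5),
    "dones": np.array([False, True, False, False, True]),
}
episodes = split_episodes(toy)
```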
|
GenP/Synthetic_Face_Images_Academic_Dataset | ---
license: afl-3.0
task_categories:
- image-classification
- image-segmentation
size_categories:
- 1K<n<10K
---
Academic Dataset by Generated Photos
See at https://generated.photos/datasets#research-dataset
This free dataset was made to help students and teachers with any research. It contains 10,000 photos with an equal distribution of race and gender parameters.
If you need a dataset with different parameters or quantity, contact us at work.with@generated.photos.
We will appreciate it if you let us know about the research outcome!
----------------------------------------------------------
Terms of use
----------------------------------------------------------
You can use and adapt it for any research purposes, as long as you:
(a) give appropriate credit by citing in your paper,
(b) put a link to the Generated Photos website if you publish your paper, the results of your research, or a related article. Example of an attribution line: Academic Dataset by Generated Photos https://generated.photos/datasets
You can redistribute it within your university, but please follow these rules:
(a) indicate any changes that you've made,
(b) make sure that any fellow student or teacher you pass this dataset to is aware of the terms of use described in this file.
For more information about datasets and license, please visit Generated Photos website:
https://generated.photos/datasets
https://generated.photos/faq
https://generated.photos/terms-and-conditions
----------------------------------------------------------
Photos
----------------------------------------------------------
All the photos are 100% synthetic, based on model-released photos. Royalty-free. Can be used for any research purpose except those violating the law. Worldwide. No time limitations.
Quantity 10,000
Quality 256x256px
Diversity Ethnicity, gender
----------------------------------------------------------
Metadata
----------------------------------------------------------
The JSON files contain the metadata for each image in a machine-readable format, including:
(1) FaceLandmarks: mouth, right_eyebrow, left_eyebrow, right_eye, left_eye, nose, jaw.
(2) FaceAttributes: headPose, gender, makeup, emotion, facialHair, hair (hairColor, hairLength, bald), occlusion, ethnicity, eye_color, smile, age
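For example, a metadata record could be consumed as follows. The exact JSON layout (key casing and nesting) is an assumption based on the field list above and should be checked against a real metadata file:

```python
import json

def summarize_attributes(json_text):
    """Pull a few commonly used attributes out of one metadata record.
    The 'FaceAttributes' key and its children follow the field list
    above; the exact layout is assumed, not verified."""
    meta = json.loads(json_text)
    attrs = meta.get("FaceAttributes", {})
    return {
        "gender": attrs.get("gender"),
        "ethnicity": attrs.get("ethnicity"),
        "age": attrs.get("age"),
    }

sample = '{"FaceAttributes": {"gender": "female", "ethnicity": "asian", "age": 27}}'
summary = summarize_attributes(sample)
```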
----------------------------------------------------------
Please contact work.with@generated.photos for business and press inquiries and other questions. |
open-llm-leaderboard/details_AA051612__A0124 | ---
pretty_name: Evaluation run of AA051612/A0124
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [AA051612/A0124](https://huggingface.co/AA051612/A0124) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AA051612__A0124\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-01-25T14:19:16.198603](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051612__A0124/blob/main/results_2024-01-25T14-19-16.198603.json)(note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.8268443438842564,\n\
\ \"acc_stderr\": 0.024801319555947502,\n \"acc_norm\": 0.8344552297563383,\n\
\ \"acc_norm_stderr\": 0.0252029147367926,\n \"mc1\": 0.390452876376989,\n\
\ \"mc1_stderr\": 0.017078230743431448,\n \"mc2\": 0.5652174373687721,\n\
\ \"mc2_stderr\": 0.015479461186777867\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6399317406143344,\n \"acc_stderr\": 0.014027516814585188,\n\
\ \"acc_norm\": 0.6783276450511946,\n \"acc_norm_stderr\": 0.013650488084494164\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6504680342561243,\n\
\ \"acc_stderr\": 0.004758476684324042,\n \"acc_norm\": 0.8471420035849433,\n\
\ \"acc_norm_stderr\": 0.003591151323268329\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \
\ \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.837037037037037,\n\
\ \"acc_stderr\": 0.03190541474482841,\n \"acc_norm\": 0.837037037037037,\n\
\ \"acc_norm_stderr\": 0.03190541474482841\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.9210526315789473,\n \"acc_stderr\": 0.021944342818247923,\n\
\ \"acc_norm\": 0.9210526315789473,\n \"acc_norm_stderr\": 0.021944342818247923\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.83,\n\
\ \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n \
\ \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.879245283018868,\n \"acc_stderr\": 0.020054189400972373,\n\
\ \"acc_norm\": 0.879245283018868,\n \"acc_norm_stderr\": 0.020054189400972373\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9375,\n\
\ \"acc_stderr\": 0.02024219611347799,\n \"acc_norm\": 0.9375,\n \
\ \"acc_norm_stderr\": 0.02024219611347799\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \
\ \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"\
acc\": 0.73,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\"\
: 0.73,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \
\ \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.8208092485549133,\n\
\ \"acc_stderr\": 0.02924251305906329,\n \"acc_norm\": 0.8208092485549133,\n\
\ \"acc_norm_stderr\": 0.02924251305906329\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.696078431372549,\n \"acc_stderr\": 0.04576665403207763,\n\
\ \"acc_norm\": 0.696078431372549,\n \"acc_norm_stderr\": 0.04576665403207763\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.84,\n \"acc_stderr\": 0.0368452949177471,\n \"acc_norm\": 0.84,\n\
\ \"acc_norm_stderr\": 0.0368452949177471\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.8425531914893617,\n \"acc_stderr\": 0.023809905196619702,\n\
\ \"acc_norm\": 0.8425531914893617,\n \"acc_norm_stderr\": 0.023809905196619702\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.7105263157894737,\n\
\ \"acc_stderr\": 0.04266339443159394,\n \"acc_norm\": 0.7105263157894737,\n\
\ \"acc_norm_stderr\": 0.04266339443159394\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.8551724137931035,\n \"acc_stderr\": 0.029327243269363392,\n\
\ \"acc_norm\": 0.8551724137931035,\n \"acc_norm_stderr\": 0.029327243269363392\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.8174603174603174,\n \"acc_stderr\": 0.019894879367175548,\n \"\
acc_norm\": 0.8174603174603174,\n \"acc_norm_stderr\": 0.019894879367175548\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5396825396825397,\n\
\ \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.5396825396825397,\n\
\ \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \
\ \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n\
\ \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.932258064516129,\n\
\ \"acc_stderr\": 0.014296101903893372,\n \"acc_norm\": 0.932258064516129,\n\
\ \"acc_norm_stderr\": 0.014296101903893372\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.7339901477832512,\n \"acc_stderr\": 0.03108982600293753,\n\
\ \"acc_norm\": 0.7339901477832512,\n \"acc_norm_stderr\": 0.03108982600293753\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977704,\n \"acc_norm\"\
: 0.86,\n \"acc_norm_stderr\": 0.034873508801977704\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.9333333333333333,\n \"acc_stderr\": 0.019478290326359282,\n\
\ \"acc_norm\": 0.9333333333333333,\n \"acc_norm_stderr\": 0.019478290326359282\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.9494949494949495,\n \"acc_stderr\": 0.015602012491972255,\n \"\
acc_norm\": 0.9494949494949495,\n \"acc_norm_stderr\": 0.015602012491972255\n\
\ },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n\
\ \"acc\": 0.9792746113989638,\n \"acc_stderr\": 0.010281417011909029,\n\
\ \"acc_norm\": 0.9792746113989638,\n \"acc_norm_stderr\": 0.010281417011909029\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.8641025641025641,\n \"acc_stderr\": 0.01737454649323547,\n \
\ \"acc_norm\": 0.8641025641025641,\n \"acc_norm_stderr\": 0.01737454649323547\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.6111111111111112,\n \"acc_stderr\": 0.029723278961476664,\n \
\ \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.029723278961476664\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.9369747899159664,\n \"acc_stderr\": 0.015785085223670926,\n\
\ \"acc_norm\": 0.9369747899159664,\n \"acc_norm_stderr\": 0.015785085223670926\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.6291390728476821,\n \"acc_stderr\": 0.03943966699183629,\n \"\
acc_norm\": 0.6291390728476821,\n \"acc_norm_stderr\": 0.03943966699183629\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.9559633027522936,\n \"acc_stderr\": 0.008796877218234045,\n \"\
acc_norm\": 0.9559633027522936,\n \"acc_norm_stderr\": 0.008796877218234045\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.7731481481481481,\n \"acc_stderr\": 0.028561650102422273,\n \"\
acc_norm\": 0.7731481481481481,\n \"acc_norm_stderr\": 0.028561650102422273\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.9509803921568627,\n \"acc_stderr\": 0.01515383934021268,\n \"\
acc_norm\": 0.9509803921568627,\n \"acc_norm_stderr\": 0.01515383934021268\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.9409282700421941,\n \"acc_stderr\": 0.015346597463888697,\n \
\ \"acc_norm\": 0.9409282700421941,\n \"acc_norm_stderr\": 0.015346597463888697\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8430493273542601,\n\
\ \"acc_stderr\": 0.024413587174907412,\n \"acc_norm\": 0.8430493273542601,\n\
\ \"acc_norm_stderr\": 0.024413587174907412\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.9312977099236641,\n \"acc_stderr\": 0.022184936922745042,\n\
\ \"acc_norm\": 0.9312977099236641,\n \"acc_norm_stderr\": 0.022184936922745042\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.9338842975206612,\n \"acc_stderr\": 0.022683403691723312,\n \"\
acc_norm\": 0.9338842975206612,\n \"acc_norm_stderr\": 0.022683403691723312\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.9351851851851852,\n\
\ \"acc_stderr\": 0.023800937426629205,\n \"acc_norm\": 0.9351851851851852,\n\
\ \"acc_norm_stderr\": 0.023800937426629205\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.9386503067484663,\n \"acc_stderr\": 0.01885387414579323,\n\
\ \"acc_norm\": 0.9386503067484663,\n \"acc_norm_stderr\": 0.01885387414579323\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6875,\n\
\ \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.6875,\n\
\ \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.9223300970873787,\n \"acc_stderr\": 0.026501440784762752,\n\
\ \"acc_norm\": 0.9223300970873787,\n \"acc_norm_stderr\": 0.026501440784762752\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9658119658119658,\n\
\ \"acc_stderr\": 0.011904341997629818,\n \"acc_norm\": 0.9658119658119658,\n\
\ \"acc_norm_stderr\": 0.011904341997629818\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776348,\n \
\ \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776348\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.946360153256705,\n\
\ \"acc_stderr\": 0.00805691182236487,\n \"acc_norm\": 0.946360153256705,\n\
\ \"acc_norm_stderr\": 0.00805691182236487\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.8410404624277457,\n \"acc_stderr\": 0.019685307033571946,\n\
\ \"acc_norm\": 0.8410404624277457,\n \"acc_norm_stderr\": 0.019685307033571946\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.8502793296089386,\n\
\ \"acc_stderr\": 0.011933090460111657,\n \"acc_norm\": 0.8502793296089386,\n\
\ \"acc_norm_stderr\": 0.011933090460111657\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.9019607843137255,\n \"acc_stderr\": 0.01702722293558219,\n\
\ \"acc_norm\": 0.9019607843137255,\n \"acc_norm_stderr\": 0.01702722293558219\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.887459807073955,\n\
\ \"acc_stderr\": 0.017949292186800647,\n \"acc_norm\": 0.887459807073955,\n\
\ \"acc_norm_stderr\": 0.017949292186800647\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.9228395061728395,\n \"acc_stderr\": 0.014847704893944928,\n\
\ \"acc_norm\": 0.9228395061728395,\n \"acc_norm_stderr\": 0.014847704893944928\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.7588652482269503,\n \"acc_stderr\": 0.02551873104953777,\n \
\ \"acc_norm\": 0.7588652482269503,\n \"acc_norm_stderr\": 0.02551873104953777\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.7672750977835724,\n\
\ \"acc_stderr\": 0.01079259555388848,\n \"acc_norm\": 0.7672750977835724,\n\
\ \"acc_norm_stderr\": 0.01079259555388848\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.9301470588235294,\n \"acc_stderr\": 0.015484012441056329,\n\
\ \"acc_norm\": 0.9301470588235294,\n \"acc_norm_stderr\": 0.015484012441056329\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.8774509803921569,\n \"acc_stderr\": 0.013266175773054252,\n \
\ \"acc_norm\": 0.8774509803921569,\n \"acc_norm_stderr\": 0.013266175773054252\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.8,\n\
\ \"acc_stderr\": 0.03831305140884601,\n \"acc_norm\": 0.8,\n \
\ \"acc_norm_stderr\": 0.03831305140884601\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.8857142857142857,\n \"acc_stderr\": 0.020367976491952145,\n\
\ \"acc_norm\": 0.8857142857142857,\n \"acc_norm_stderr\": 0.020367976491952145\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.9303482587064676,\n\
\ \"acc_stderr\": 0.018000052253856254,\n \"acc_norm\": 0.9303482587064676,\n\
\ \"acc_norm_stderr\": 0.018000052253856254\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.96,\n \"acc_stderr\": 0.01969463855669321,\n \
\ \"acc_norm\": 0.96,\n \"acc_norm_stderr\": 0.01969463855669321\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.6506024096385542,\n\
\ \"acc_stderr\": 0.037117251907407514,\n \"acc_norm\": 0.6506024096385542,\n\
\ \"acc_norm_stderr\": 0.037117251907407514\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.9415204678362573,\n \"acc_stderr\": 0.017996678857280124,\n\
\ \"acc_norm\": 0.9415204678362573,\n \"acc_norm_stderr\": 0.017996678857280124\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.390452876376989,\n\
\ \"mc1_stderr\": 0.017078230743431448,\n \"mc2\": 0.5652174373687721,\n\
\ \"mc2_stderr\": 0.015479461186777867\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.8074191002367798,\n \"acc_stderr\": 0.0110825388474919\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6322971948445792,\n \
\ \"acc_stderr\": 0.013281630503395482\n }\n}\n```"
repo_url: https://huggingface.co/AA051612/A0124
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|arc:challenge|25_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|gsm8k|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hellaswag|10_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T14-19-16.198603.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-01-25T14-19-16.198603.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- '**/details_harness|winogrande|5_2024-01-25T14-19-16.198603.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-01-25T14-19-16.198603.parquet'
- config_name: results
data_files:
- split: 2024_01_25T14_19_16.198603
path:
- results_2024-01-25T14-19-16.198603.parquet
- split: latest
path:
- results_2024-01-25T14-19-16.198603.parquet
---
# Dataset Card for Evaluation run of AA051612/A0124
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [AA051612/A0124](https://huggingface.co/AA051612/A0124) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_AA051612__A0124",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-25T14:19:16.198603](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051612__A0124/blob/main/results_2024-01-25T14-19-16.198603.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.8268443438842564,
"acc_stderr": 0.024801319555947502,
"acc_norm": 0.8344552297563383,
"acc_norm_stderr": 0.0252029147367926,
"mc1": 0.390452876376989,
"mc1_stderr": 0.017078230743431448,
"mc2": 0.5652174373687721,
"mc2_stderr": 0.015479461186777867
},
"harness|arc:challenge|25": {
"acc": 0.6399317406143344,
"acc_stderr": 0.014027516814585188,
"acc_norm": 0.6783276450511946,
"acc_norm_stderr": 0.013650488084494164
},
"harness|hellaswag|10": {
"acc": 0.6504680342561243,
"acc_stderr": 0.004758476684324042,
"acc_norm": 0.8471420035849433,
"acc_norm_stderr": 0.003591151323268329
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.837037037037037,
"acc_stderr": 0.03190541474482841,
"acc_norm": 0.837037037037037,
"acc_norm_stderr": 0.03190541474482841
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.9210526315789473,
"acc_stderr": 0.021944342818247923,
"acc_norm": 0.9210526315789473,
"acc_norm_stderr": 0.021944342818247923
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.879245283018868,
"acc_stderr": 0.020054189400972373,
"acc_norm": 0.879245283018868,
"acc_norm_stderr": 0.020054189400972373
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9375,
"acc_stderr": 0.02024219611347799,
"acc_norm": 0.9375,
"acc_norm_stderr": 0.02024219611347799
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.8208092485549133,
"acc_stderr": 0.02924251305906329,
"acc_norm": 0.8208092485549133,
"acc_norm_stderr": 0.02924251305906329
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.696078431372549,
"acc_stderr": 0.04576665403207763,
"acc_norm": 0.696078431372549,
"acc_norm_stderr": 0.04576665403207763
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.84,
"acc_stderr": 0.0368452949177471,
"acc_norm": 0.84,
"acc_norm_stderr": 0.0368452949177471
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.8425531914893617,
"acc_stderr": 0.023809905196619702,
"acc_norm": 0.8425531914893617,
"acc_norm_stderr": 0.023809905196619702
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.04266339443159394,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.04266339443159394
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.8551724137931035,
"acc_stderr": 0.029327243269363392,
"acc_norm": 0.8551724137931035,
"acc_norm_stderr": 0.029327243269363392
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.8174603174603174,
"acc_stderr": 0.019894879367175548,
"acc_norm": 0.8174603174603174,
"acc_norm_stderr": 0.019894879367175548
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5396825396825397,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.5396825396825397,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.932258064516129,
"acc_stderr": 0.014296101903893372,
"acc_norm": 0.932258064516129,
"acc_norm_stderr": 0.014296101903893372
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.7339901477832512,
"acc_stderr": 0.03108982600293753,
"acc_norm": 0.7339901477832512,
"acc_norm_stderr": 0.03108982600293753
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.86,
"acc_stderr": 0.034873508801977704,
"acc_norm": 0.86,
"acc_norm_stderr": 0.034873508801977704
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.9333333333333333,
"acc_stderr": 0.019478290326359282,
"acc_norm": 0.9333333333333333,
"acc_norm_stderr": 0.019478290326359282
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9494949494949495,
"acc_stderr": 0.015602012491972255,
"acc_norm": 0.9494949494949495,
"acc_norm_stderr": 0.015602012491972255
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9792746113989638,
"acc_stderr": 0.010281417011909029,
"acc_norm": 0.9792746113989638,
"acc_norm_stderr": 0.010281417011909029
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8641025641025641,
"acc_stderr": 0.01737454649323547,
"acc_norm": 0.8641025641025641,
"acc_norm_stderr": 0.01737454649323547
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.6111111111111112,
"acc_stderr": 0.029723278961476664,
"acc_norm": 0.6111111111111112,
"acc_norm_stderr": 0.029723278961476664
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.9369747899159664,
"acc_stderr": 0.015785085223670926,
"acc_norm": 0.9369747899159664,
"acc_norm_stderr": 0.015785085223670926
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.6291390728476821,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.6291390728476821,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9559633027522936,
"acc_stderr": 0.008796877218234045,
"acc_norm": 0.9559633027522936,
"acc_norm_stderr": 0.008796877218234045
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.7731481481481481,
"acc_stderr": 0.028561650102422273,
"acc_norm": 0.7731481481481481,
"acc_norm_stderr": 0.028561650102422273
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9509803921568627,
"acc_stderr": 0.01515383934021268,
"acc_norm": 0.9509803921568627,
"acc_norm_stderr": 0.01515383934021268
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9409282700421941,
"acc_stderr": 0.015346597463888697,
"acc_norm": 0.9409282700421941,
"acc_norm_stderr": 0.015346597463888697
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8430493273542601,
"acc_stderr": 0.024413587174907412,
"acc_norm": 0.8430493273542601,
"acc_norm_stderr": 0.024413587174907412
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.9312977099236641,
"acc_stderr": 0.022184936922745042,
"acc_norm": 0.9312977099236641,
"acc_norm_stderr": 0.022184936922745042
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.9338842975206612,
"acc_stderr": 0.022683403691723312,
"acc_norm": 0.9338842975206612,
"acc_norm_stderr": 0.022683403691723312
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.9351851851851852,
"acc_stderr": 0.023800937426629205,
"acc_norm": 0.9351851851851852,
"acc_norm_stderr": 0.023800937426629205
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.9386503067484663,
"acc_stderr": 0.01885387414579323,
"acc_norm": 0.9386503067484663,
"acc_norm_stderr": 0.01885387414579323
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6875,
"acc_stderr": 0.043994650575715215,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.043994650575715215
},
"harness|hendrycksTest-management|5": {
"acc": 0.9223300970873787,
"acc_stderr": 0.026501440784762752,
"acc_norm": 0.9223300970873787,
"acc_norm_stderr": 0.026501440784762752
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9658119658119658,
"acc_stderr": 0.011904341997629818,
"acc_norm": 0.9658119658119658,
"acc_norm_stderr": 0.011904341997629818
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776348,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776348
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.946360153256705,
"acc_stderr": 0.00805691182236487,
"acc_norm": 0.946360153256705,
"acc_norm_stderr": 0.00805691182236487
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8410404624277457,
"acc_stderr": 0.019685307033571946,
"acc_norm": 0.8410404624277457,
"acc_norm_stderr": 0.019685307033571946
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.8502793296089386,
"acc_stderr": 0.011933090460111657,
"acc_norm": 0.8502793296089386,
"acc_norm_stderr": 0.011933090460111657
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.9019607843137255,
"acc_stderr": 0.01702722293558219,
"acc_norm": 0.9019607843137255,
"acc_norm_stderr": 0.01702722293558219
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.887459807073955,
"acc_stderr": 0.017949292186800647,
"acc_norm": 0.887459807073955,
"acc_norm_stderr": 0.017949292186800647
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.9228395061728395,
"acc_stderr": 0.014847704893944928,
"acc_norm": 0.9228395061728395,
"acc_norm_stderr": 0.014847704893944928
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.7588652482269503,
"acc_stderr": 0.02551873104953777,
"acc_norm": 0.7588652482269503,
"acc_norm_stderr": 0.02551873104953777
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.7672750977835724,
"acc_stderr": 0.01079259555388848,
"acc_norm": 0.7672750977835724,
"acc_norm_stderr": 0.01079259555388848
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.9301470588235294,
"acc_stderr": 0.015484012441056329,
"acc_norm": 0.9301470588235294,
"acc_norm_stderr": 0.015484012441056329
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8774509803921569,
"acc_stderr": 0.013266175773054252,
"acc_norm": 0.8774509803921569,
"acc_norm_stderr": 0.013266175773054252
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.8,
"acc_stderr": 0.03831305140884601,
"acc_norm": 0.8,
"acc_norm_stderr": 0.03831305140884601
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8857142857142857,
"acc_stderr": 0.020367976491952145,
"acc_norm": 0.8857142857142857,
"acc_norm_stderr": 0.020367976491952145
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.9303482587064676,
"acc_stderr": 0.018000052253856254,
"acc_norm": 0.9303482587064676,
"acc_norm_stderr": 0.018000052253856254
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.96,
"acc_stderr": 0.01969463855669321,
"acc_norm": 0.96,
"acc_norm_stderr": 0.01969463855669321
},
"harness|hendrycksTest-virology|5": {
"acc": 0.6506024096385542,
"acc_stderr": 0.037117251907407514,
"acc_norm": 0.6506024096385542,
"acc_norm_stderr": 0.037117251907407514
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.9415204678362573,
"acc_stderr": 0.017996678857280124,
"acc_norm": 0.9415204678362573,
"acc_norm_stderr": 0.017996678857280124
},
"harness|truthfulqa:mc|0": {
"mc1": 0.390452876376989,
"mc1_stderr": 0.017078230743431448,
"mc2": 0.5652174373687721,
"mc2_stderr": 0.015479461186777867
},
"harness|winogrande|5": {
"acc": 0.8074191002367798,
"acc_stderr": 0.0110825388474919
},
"harness|gsm8k|5": {
"acc": 0.6322971948445792,
"acc_stderr": 0.013281630503395482
}
}
```
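As an unofficial illustration (not part of the leaderboard tooling), each reported `acc_stderr` can be combined with its `acc` to form an approximate 95% confidence interval under a normal approximation. Using the `harness|winogrande|5` values above:

```python
# Approximate 95% confidence interval from a reported accuracy and its
# standard error, using the normal approximation (z = 1.96).
# Values taken from the "harness|winogrande|5" entry above.
acc = 0.8074191002367798
acc_stderr = 0.0110825388474919

z = 1.96  # two-sided ~95% normal quantile
lower = acc - z * acc_stderr
upper = acc + z * acc_stderr

print(f"winogrande acc: {acc:.4f} (95% CI: {lower:.4f}-{upper:.4f})")
```

This gives an interval of roughly 0.786 to 0.829, which is useful when comparing scores between models whose accuracies differ by less than a couple of standard errors.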
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
dtthanh/200_question_evaluate_mixtral | ---
dataset_info:
features:
- name: question
dtype: string
- name: contexts
sequence: string
- name: ground_truths
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 262783
num_examples: 200
download_size: 64297
dataset_size: 262783
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
baebee/chatgpt-custom_inst | ---
license: mit
task_categories:
- summarization
- question-answering
- conversational
language:
- en
- tl
size_categories:
- n<1K
---
# Languages: English, Tagalog
## Collection Process:
- Dialogs generated by instructing ChatGPT to respond concisely
- Responses edited by Nuph researchers for naturalness
- Bilingual exchanges added for diversity
## Intended Use:
- Train conversational agents
- Research in straightforward dialog
# Limitations:
- Small scale (300 rows)
- Biased toward English
- Limited to text conversations
# Ethics and Privacy:
- No personal or offensive content
- ChatGPT instructed to avoid unethical responses
- Data anonymized - no personally identifiable information |
CVasNLPExperiments/VQAv2_sample_validation_benchmarks_partition_5 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: response
dtype: string
splits:
- name: train
num_bytes: 55
num_examples: 2
download_size: 1356
dataset_size: 55
---
# Dataset Card for "VQAv2_sample_validation_benchmarks_partition_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
OneFly7/llama2-sst2-fine-tuning-without-system-info | ---
dataset_info:
features:
- name: label_text
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 8318449
num_examples: 67349
- name: validation
num_bytes: 141132
num_examples: 872
download_size: 3302789
dataset_size: 8459581
---
# Dataset Card for "llama2-sst2-fine-tuning-without-system_info"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mask-distilled-one-sec-cv12/chunk_204 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 1123503972
num_examples: 220641
download_size: 1147209933
dataset_size: 1123503972
---
# Dataset Card for "chunk_204"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Intuit-GenSRF/combined_toxicity_profanity_v2_train_eval | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence: string
- name: encoded_labels
sequence: int64
splits:
- name: train
num_bytes: 2803997548
num_examples: 6344950
- name: validation
num_bytes: 313551093
num_examples: 710497
download_size: 1607228317
dataset_size: 3117548641
---
# Dataset Card for "combined_toxicity_profanity_v2_train_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/details_Weyaxi__Newton-7B | ---
pretty_name: Evaluation run of Weyaxi/Newton-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [Weyaxi/Newton-7B](https://huggingface.co/Weyaxi/Newton-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\
\nThe dataset is composed of 63 configurations, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"train\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run (and is used to compute and display the aggregated metrics on the [Open LLM\
\ Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__Newton-7B\"\
,\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-02-01T20:57:35.185949](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Newton-7B/blob/main/results_2024-02-01T20-57-35.185949.json) (note\
\ that there might be results for other tasks in the repos if successive evals didn't\
\ cover the same tasks. You can find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6202881703209492,\n\
\ \"acc_stderr\": 0.032138997104958926,\n \"acc_norm\": 0.6312377539040873,\n\
\ \"acc_norm_stderr\": 0.0329280849118904,\n \"mc1\": 0.2876376988984088,\n\
\ \"mc1_stderr\": 0.01584631510139481,\n \"mc2\": 0.4436037082395254,\n\
\ \"mc2_stderr\": 0.015171870706558463\n },\n \"harness|arc:challenge|25\"\
: {\n \"acc\": 0.6023890784982935,\n \"acc_stderr\": 0.01430175222327954,\n\
\ \"acc_norm\": 0.6399317406143344,\n \"acc_norm_stderr\": 0.014027516814585188\n\
\ },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6266679944234216,\n\
\ \"acc_stderr\": 0.0048270065208028835,\n \"acc_norm\": 0.817167894841665,\n\
\ \"acc_norm_stderr\": 0.0038573886135331035\n },\n \"harness|hendrycksTest-abstract_algebra|5\"\
: {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \
\ \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n \
\ },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5703703703703704,\n\
\ \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.5703703703703704,\n\
\ \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\"\
: {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n\
\ \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n\
\ },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.66,\n\
\ \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \
\ \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-clinical_knowledge|5\"\
: {\n \"acc\": 0.6716981132075471,\n \"acc_stderr\": 0.02890159361241178,\n\
\ \"acc_norm\": 0.6716981132075471,\n \"acc_norm_stderr\": 0.02890159361241178\n\
\ },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\"\
: {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \
\ \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n \
\ },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\"\
: 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n\
\ \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\"\
: {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \
\ \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n \
\ },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n\
\ \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n\
\ \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\"\
: {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n\
\ \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n\
\ },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\":\
\ 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n\
\ \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\"\
: {\n \"acc\": 0.548936170212766,\n \"acc_stderr\": 0.03252909619613197,\n\
\ \"acc_norm\": 0.548936170212766,\n \"acc_norm_stderr\": 0.03252909619613197\n\
\ },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n\
\ \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n\
\ \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\"\
: {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.041227371113703316,\n\
\ \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.041227371113703316\n\
\ },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\"\
: 0.4126984126984127,\n \"acc_stderr\": 0.02535574126305527,\n \"\
acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.02535574126305527\n\
\ },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.47619047619047616,\n\
\ \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.47619047619047616,\n\
\ \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\"\
: {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \
\ \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n \
\ },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n\
\ \"acc_stderr\": 0.023904914311782655,\n \"acc_norm\": 0.7709677419354839,\n\
\ \"acc_norm_stderr\": 0.023904914311782655\n },\n \"harness|hendrycksTest-high_school_chemistry|5\"\
: {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n\
\ \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n\
\ },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \
\ \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\"\
: 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_european_history|5\"\
: {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.032568666616811015,\n\
\ \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.032568666616811015\n\
\ },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\"\
: 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\"\
: 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n\
\ \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \
\ \"acc\": 0.8808290155440415,\n \"acc_stderr\": 0.023381935348121417,\n\
\ \"acc_norm\": 0.8808290155440415,\n \"acc_norm_stderr\": 0.023381935348121417\n\
\ },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \
\ \"acc\": 0.6230769230769231,\n \"acc_stderr\": 0.024570975364225995,\n\
\ \"acc_norm\": 0.6230769230769231,\n \"acc_norm_stderr\": 0.024570975364225995\n\
\ },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"\
acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524586,\n \
\ \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524586\n\
\ },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \
\ \"acc\": 0.6386554621848739,\n \"acc_stderr\": 0.031204691225150016,\n\
\ \"acc_norm\": 0.6386554621848739,\n \"acc_norm_stderr\": 0.031204691225150016\n\
\ },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\"\
: 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"\
acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n\
\ },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\"\
: 0.8440366972477065,\n \"acc_stderr\": 0.015555802713590172,\n \"\
acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.015555802713590172\n\
\ },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\"\
: 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"\
acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n\
\ },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\"\
: 0.8088235294117647,\n \"acc_stderr\": 0.027599174300640766,\n \"\
acc_norm\": 0.8088235294117647,\n \"acc_norm_stderr\": 0.027599174300640766\n\
\ },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"\
acc\": 0.7763713080168776,\n \"acc_stderr\": 0.027123298205229966,\n \
\ \"acc_norm\": 0.7763713080168776,\n \"acc_norm_stderr\": 0.027123298205229966\n\
\ },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7130044843049327,\n\
\ \"acc_stderr\": 0.030360379710291947,\n \"acc_norm\": 0.7130044843049327,\n\
\ \"acc_norm_stderr\": 0.030360379710291947\n },\n \"harness|hendrycksTest-human_sexuality|5\"\
: {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313728,\n\
\ \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313728\n\
\ },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\":\
\ 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"\
acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n\
\ },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n\
\ \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \
\ \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\"\
: {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.03462419931615623,\n\
\ \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.03462419931615623\n\
\ },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3392857142857143,\n\
\ \"acc_stderr\": 0.04493949068613539,\n \"acc_norm\": 0.3392857142857143,\n\
\ \"acc_norm_stderr\": 0.04493949068613539\n },\n \"harness|hendrycksTest-management|5\"\
: {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n\
\ \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n\
\ },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8974358974358975,\n\
\ \"acc_stderr\": 0.01987565502786744,\n \"acc_norm\": 0.8974358974358975,\n\
\ \"acc_norm_stderr\": 0.01987565502786744\n },\n \"harness|hendrycksTest-medical_genetics|5\"\
: {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \
\ \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n \
\ },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8237547892720306,\n\
\ \"acc_stderr\": 0.013625556907993457,\n \"acc_norm\": 0.8237547892720306,\n\
\ \"acc_norm_stderr\": 0.013625556907993457\n },\n \"harness|hendrycksTest-moral_disputes|5\"\
: {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.023948512905468365,\n\
\ \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.023948512905468365\n\
\ },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.30837988826815643,\n\
\ \"acc_stderr\": 0.015445716910998884,\n \"acc_norm\": 0.30837988826815643,\n\
\ \"acc_norm_stderr\": 0.015445716910998884\n },\n \"harness|hendrycksTest-nutrition|5\"\
: {\n \"acc\": 0.6993464052287581,\n \"acc_stderr\": 0.026256053835718964,\n\
\ \"acc_norm\": 0.6993464052287581,\n \"acc_norm_stderr\": 0.026256053835718964\n\
\ },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n\
\ \"acc_stderr\": 0.025839898334877983,\n \"acc_norm\": 0.707395498392283,\n\
\ \"acc_norm_stderr\": 0.025839898334877983\n },\n \"harness|hendrycksTest-prehistory|5\"\
: {\n \"acc\": 0.7191358024691358,\n \"acc_stderr\": 0.025006469755799208,\n\
\ \"acc_norm\": 0.7191358024691358,\n \"acc_norm_stderr\": 0.025006469755799208\n\
\ },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"\
acc\": 0.4326241134751773,\n \"acc_stderr\": 0.02955545423677885,\n \
\ \"acc_norm\": 0.4326241134751773,\n \"acc_norm_stderr\": 0.02955545423677885\n\
\ },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4634941329856584,\n\
\ \"acc_stderr\": 0.012736153390214963,\n \"acc_norm\": 0.4634941329856584,\n\
\ \"acc_norm_stderr\": 0.012736153390214963\n },\n \"harness|hendrycksTest-professional_medicine|5\"\
: {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462916,\n\
\ \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462916\n\
\ },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"\
acc\": 0.6552287581699346,\n \"acc_stderr\": 0.019228322018696647,\n \
\ \"acc_norm\": 0.6552287581699346,\n \"acc_norm_stderr\": 0.019228322018696647\n\
\ },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n\
\ \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.6363636363636364,\n\
\ \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\"\
: {\n \"acc\": 0.710204081632653,\n \"acc_stderr\": 0.029043088683304328,\n\
\ \"acc_norm\": 0.710204081632653,\n \"acc_norm_stderr\": 0.029043088683304328\n\
\ },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8159203980099502,\n\
\ \"acc_stderr\": 0.027403859410786848,\n \"acc_norm\": 0.8159203980099502,\n\
\ \"acc_norm_stderr\": 0.027403859410786848\n },\n \"harness|hendrycksTest-us_foreign_policy|5\"\
: {\n \"acc\": 0.85,\n \"acc_stderr\": 0.035887028128263686,\n \
\ \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.035887028128263686\n \
\ },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n\
\ \"acc_stderr\": 0.03885425420866766,\n \"acc_norm\": 0.5301204819277109,\n\
\ \"acc_norm_stderr\": 0.03885425420866766\n },\n \"harness|hendrycksTest-world_religions|5\"\
: {\n \"acc\": 0.8070175438596491,\n \"acc_stderr\": 0.030267457554898458,\n\
\ \"acc_norm\": 0.8070175438596491,\n \"acc_norm_stderr\": 0.030267457554898458\n\
\ },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2876376988984088,\n\
\ \"mc1_stderr\": 0.01584631510139481,\n \"mc2\": 0.4436037082395254,\n\
\ \"mc2_stderr\": 0.015171870706558463\n },\n \"harness|winogrande|5\"\
: {\n \"acc\": 0.7884767166535123,\n \"acc_stderr\": 0.011477747684223187\n\
\ },\n \"harness|gsm8k|5\": {\n \"acc\": 0.03411675511751327,\n \
\ \"acc_stderr\": 0.005000212600773276\n }\n}\n```"
repo_url: https://huggingface.co/Weyaxi/Newton-7B
leaderboard_url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
point_of_contact: clementine@hf.co
configs:
- config_name: harness_arc_challenge_25
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|arc:challenge|25_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|arc:challenge|25_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_gsm8k_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|gsm8k|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|gsm8k|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hellaswag_10
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hellaswag|10_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hellaswag|10_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-management|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T20-57-35.185949.parquet'
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_abstract_algebra_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-abstract_algebra|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_anatomy_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-anatomy|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_astronomy_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-astronomy|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_business_ethics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-business_ethics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_clinical_knowledge_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-clinical_knowledge|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_college_biology_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_biology|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_college_chemistry_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_chemistry|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_college_computer_science_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_computer_science|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_college_mathematics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_college_medicine_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_medicine|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_college_physics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-college_physics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_computer_security_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-computer_security|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_conceptual_physics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-conceptual_physics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_econometrics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-econometrics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_electrical_engineering_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-electrical_engineering|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_elementary_mathematics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-elementary_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_formal_logic_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-formal_logic|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_global_facts_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-global_facts|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_biology_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_biology|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_chemistry_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_chemistry|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_computer_science_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_computer_science|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_european_history_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_european_history|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_geography_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_geography|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_government_and_politics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_macroeconomics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_mathematics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_mathematics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_microeconomics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_physics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_physics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_psychology_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_psychology|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_statistics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_statistics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_us_history_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_us_history|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_high_school_world_history_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-high_school_world_history|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_human_aging_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_aging|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_human_sexuality_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-human_sexuality|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_international_law_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-international_law|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_jurisprudence_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-jurisprudence|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_logical_fallacies_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-logical_fallacies|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_machine_learning_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-machine_learning|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_management_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-management|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_marketing_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-marketing|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_medical_genetics_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-medical_genetics|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_miscellaneous_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-miscellaneous|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_moral_disputes_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_disputes|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_moral_scenarios_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-moral_scenarios|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_nutrition_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-nutrition|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_philosophy_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-philosophy|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_prehistory_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-prehistory|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_professional_accounting_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_accounting|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_professional_law_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_law|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_professional_medicine_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_medicine|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_professional_psychology_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-professional_psychology|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_public_relations_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-public_relations|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_security_studies_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-security_studies|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_sociology_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-sociology|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_us_foreign_policy_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-us_foreign_policy|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_virology_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-virology|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_hendrycksTest_world_religions_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|hendrycksTest-world_religions|5_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_truthfulqa_mc_0
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|truthfulqa:mc|0_2024-02-01T20-57-35.185949.parquet'
- config_name: harness_winogrande_5
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- '**/details_harness|winogrande|5_2024-02-01T20-57-35.185949.parquet'
- split: latest
path:
- '**/details_harness|winogrande|5_2024-02-01T20-57-35.185949.parquet'
- config_name: results
data_files:
- split: 2024_02_01T20_57_35.185949
path:
- results_2024-02-01T20-57-35.185949.parquet
- split: latest
path:
- results_2024-02-01T20-57-35.185949.parquet
---
# Dataset Card for Evaluation run of Weyaxi/Newton-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Weyaxi/Newton-7B](https://huggingface.co/Weyaxi/Newton-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Weyaxi__Newton-7B",
"harness_winogrande_5",
	split="latest")
```
## Latest results
These are the [latest results from run 2024-02-01T20:57:35.185949](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__Newton-7B/blob/main/results_2024-02-01T20-57-35.185949.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" config and in the "latest" split of each eval):
```python
{
"all": {
"acc": 0.6202881703209492,
"acc_stderr": 0.032138997104958926,
"acc_norm": 0.6312377539040873,
"acc_norm_stderr": 0.0329280849118904,
"mc1": 0.2876376988984088,
"mc1_stderr": 0.01584631510139481,
"mc2": 0.4436037082395254,
"mc2_stderr": 0.015171870706558463
},
"harness|arc:challenge|25": {
"acc": 0.6023890784982935,
"acc_stderr": 0.01430175222327954,
"acc_norm": 0.6399317406143344,
"acc_norm_stderr": 0.014027516814585188
},
"harness|hellaswag|10": {
"acc": 0.6266679944234216,
"acc_stderr": 0.0048270065208028835,
"acc_norm": 0.817167894841665,
"acc_norm_stderr": 0.0038573886135331035
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5703703703703704,
"acc_stderr": 0.04276349494376599,
"acc_norm": 0.5703703703703704,
"acc_norm_stderr": 0.04276349494376599
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6716981132075471,
"acc_stderr": 0.02890159361241178,
"acc_norm": 0.6716981132075471,
"acc_norm_stderr": 0.02890159361241178
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.548936170212766,
"acc_stderr": 0.03252909619613197,
"acc_norm": 0.548936170212766,
"acc_norm_stderr": 0.03252909619613197
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.49122807017543857,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.49122807017543857,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.041227371113703316,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.041227371113703316
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.02535574126305527,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.02535574126305527
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782655,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782655
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.032568666616811015,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.032568666616811015
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922987,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922987
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8808290155440415,
"acc_stderr": 0.023381935348121417,
"acc_norm": 0.8808290155440415,
"acc_norm_stderr": 0.023381935348121417
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6230769230769231,
"acc_stderr": 0.024570975364225995,
"acc_norm": 0.6230769230769231,
"acc_norm_stderr": 0.024570975364225995
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.028661201116524586,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.028661201116524586
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6386554621848739,
"acc_stderr": 0.031204691225150016,
"acc_norm": 0.6386554621848739,
"acc_norm_stderr": 0.031204691225150016
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31788079470198677,
"acc_stderr": 0.038020397601079024,
"acc_norm": 0.31788079470198677,
"acc_norm_stderr": 0.038020397601079024
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.015555802713590172,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.015555802713590172
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4675925925925926,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.4675925925925926,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.027599174300640766,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.027599174300640766
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7763713080168776,
"acc_stderr": 0.027123298205229966,
"acc_norm": 0.7763713080168776,
"acc_norm_stderr": 0.027123298205229966
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7130044843049327,
"acc_stderr": 0.030360379710291947,
"acc_norm": 0.7130044843049327,
"acc_norm_stderr": 0.030360379710291947
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313728,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313728
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.03462419931615623,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.03462419931615623
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3392857142857143,
"acc_stderr": 0.04493949068613539,
"acc_norm": 0.3392857142857143,
"acc_norm_stderr": 0.04493949068613539
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8974358974358975,
"acc_stderr": 0.01987565502786744,
"acc_norm": 0.8974358974358975,
"acc_norm_stderr": 0.01987565502786744
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8237547892720306,
"acc_stderr": 0.013625556907993457,
"acc_norm": 0.8237547892720306,
"acc_norm_stderr": 0.013625556907993457
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7283236994219653,
"acc_stderr": 0.023948512905468365,
"acc_norm": 0.7283236994219653,
"acc_norm_stderr": 0.023948512905468365
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.30837988826815643,
"acc_stderr": 0.015445716910998884,
"acc_norm": 0.30837988826815643,
"acc_norm_stderr": 0.015445716910998884
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6993464052287581,
"acc_stderr": 0.026256053835718964,
"acc_norm": 0.6993464052287581,
"acc_norm_stderr": 0.026256053835718964
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.025839898334877983,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.025839898334877983
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7191358024691358,
"acc_stderr": 0.025006469755799208,
"acc_norm": 0.7191358024691358,
"acc_norm_stderr": 0.025006469755799208
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4326241134751773,
"acc_stderr": 0.02955545423677885,
"acc_norm": 0.4326241134751773,
"acc_norm_stderr": 0.02955545423677885
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4634941329856584,
"acc_stderr": 0.012736153390214963,
"acc_norm": 0.4634941329856584,
"acc_norm_stderr": 0.012736153390214963
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462916,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462916
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6552287581699346,
"acc_stderr": 0.019228322018696647,
"acc_norm": 0.6552287581699346,
"acc_norm_stderr": 0.019228322018696647
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.04607582090719976,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.04607582090719976
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.710204081632653,
"acc_stderr": 0.029043088683304328,
"acc_norm": 0.710204081632653,
"acc_norm_stderr": 0.029043088683304328
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8159203980099502,
"acc_stderr": 0.027403859410786848,
"acc_norm": 0.8159203980099502,
"acc_norm_stderr": 0.027403859410786848
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.035887028128263686,
"acc_norm": 0.85,
"acc_norm_stderr": 0.035887028128263686
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866766,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866766
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8070175438596491,
"acc_stderr": 0.030267457554898458,
"acc_norm": 0.8070175438596491,
"acc_norm_stderr": 0.030267457554898458
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2876376988984088,
"mc1_stderr": 0.01584631510139481,
"mc2": 0.4436037082395254,
"mc2_stderr": 0.015171870706558463
},
"harness|winogrande|5": {
"acc": 0.7884767166535123,
"acc_stderr": 0.011477747684223187
},
"harness|gsm8k|5": {
"acc": 0.03411675511751327,
"acc_stderr": 0.005000212600773276
}
}
```
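The result keys above (and this repository's config and file names) follow a `harness|<task>|<n_shot>` naming scheme. A small helper to split such identifiers apart (`parse_eval_name` is a hypothetical illustration, not part of any library):

```python
def parse_eval_name(name: str):
    """Split an eval identifier such as 'harness|hendrycksTest-virology|5'
    into its harness name, task name, and few-shot count."""
    harness, task, n_shot = name.split("|")
    return harness, task, int(n_shot)

# e.g. parse_eval_name("harness|winogrande|5") -> ("harness", "winogrande", 5)
```

Task names may themselves contain a colon (e.g. `truthfulqa:mc`), which is why only the pipe character is used as the separator.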
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
woctordho/img-256-shinkai-2 | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 11515086349.93
num_examples: 811410
download_size: 11660877157
dataset_size: 11515086349.93
---
# Dataset Card for "img-256-shinkai-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shreevigneshs/iwslt-2023-en-ko-train-split | ---
license: gpl-3.0
dataset_info:
features:
- name: en
dtype: string
- name: ko
dtype: string
- name: ko_annotated
dtype: string
- name: styles
dtype: int64
splits:
- name: train
num_bytes: 255812
num_examples: 640
- name: val
num_bytes: 59640
num_examples: 160
- name: if_test
num_bytes: 29151
num_examples: 80
- name: f_test
num_bytes: 30489
num_examples: 80
download_size: 202991
dataset_size: 375092
---
|
mila-intel/ProtDescribe | ---
configs:
- config_name: ProtDescribe
data_files: "uniprot_sprot_filtered.tsv"
#data_url: https://miladeepgraphlearningproteindata.s3.us-east-2.amazonaws.com/uniprotdata/uniprot_sprot_filtered.tsv
sep: "\t"
default: true
license: apache-2.0
--- |
heliosprime/twitter_dataset_1712970789 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 8616
num_examples: 19
download_size: 8939
dataset_size: 8616
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "twitter_dataset_1712970789"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
v-xchen-v/mmlu_filtered_for_freeform | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: answer
dtype: string
- name: choices
sequence: string
- name: prompt
dtype: string
splits:
- name: test
num_bytes: 2582200
num_examples: 7223
download_size: 1441554
dataset_size: 2582200
---
|
Sofoklis/hairpins_fasta | ---
dataset_info:
features:
- name: number
dtype: int64
- name: name
dtype: string
- name: sequence
dtype: string
- name: spaced_sequence
dtype: string
- name: array
sequence:
sequence: float64
- name: image
dtype: image
splits:
- name: train
num_bytes: 32963259.6
num_examples: 90
- name: test
num_bytes: 3662584.4
num_examples: 10
- name: valid
num_bytes: 6592651.92
num_examples: 18
download_size: 9893272
dataset_size: 43218495.92
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
yzhuang/metatree_space_ga | ---
dataset_info:
features:
- name: id
dtype: int64
- name: X
sequence: float64
- name: y
dtype: int64
splits:
- name: train
num_bytes: 148580
num_examples: 2185
- name: validation
num_bytes: 62696
num_examples: 922
download_size: 195171
dataset_size: 211276
---
# Dataset Card for "metatree_space_ga"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ineoApp/ds_001 | ---
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype: image
- name: bboxes
sequence:
sequence: int64
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': numero facture
'2': fournisseur
'3': date facture
'4': date limite
'5': montant ht
'6': montant ttc
'7': tva
'8': prix tva
'9': addresse
'10': reference
'11': art1 designation
'12': art1 quantite
'13': art1 prix unit
'14': art1 tva
'15': art1 montant ht
'16': art2 designation
'17': art2 quantite
'18': art2 prix unit
'19': art2 tva
'20': art2 montant ht
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 19296675.80952381
num_examples: 16
- name: test
num_bytes: 6030211.19047619
num_examples: 5
download_size: 25277003
dataset_size: 25326887.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Ccerquei/JDE_Full_PQ_Dataset_50 | ---
license: mit
---
|
liuyanchen1015/MULTI_VALUE_mrpc_drop_inf_to | ---
dataset_info:
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: int64
- name: idx
dtype: int64
- name: value_score
dtype: int64
splits:
- name: test
num_bytes: 143269
num_examples: 514
- name: train
num_bytes: 304124
num_examples: 1106
- name: validation
num_bytes: 37057
num_examples: 134
download_size: 327137
dataset_size: 484450
---
# Dataset Card for "MULTI_VALUE_mrpc_drop_inf_to"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gollumeo/french-litterature | ---
language:
- fr
--- |
indicbench/arc_te | ---
dataset_info:
- config_name: ARC-Challenge
features:
- name: answerKey
dtype: string
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- name: id
dtype: string
- name: question
dtype: string
splits:
- name: validation
num_bytes: 222583
num_examples: 299
- name: test
num_bytes: 868049
num_examples: 1172
download_size: 411466
dataset_size: 1090632
- config_name: default
features:
- name: _data_files
list:
- name: filename
dtype: string
- name: _fingerprint
dtype: string
- name: _format_columns
dtype: 'null'
- name: _format_type
dtype: 'null'
- name: _output_all_columns
dtype: bool
- name: _split
dtype: 'null'
splits:
- name: validation
num_bytes: 54
num_examples: 1
- name: test
num_bytes: 54
num_examples: 1
download_size: 6510
dataset_size: 108
configs:
- config_name: ARC-Challenge
data_files:
- split: validation
path: ARC-Challenge/validation-*
- split: test
path: ARC-Challenge/test-*
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
CyberHarem/kagero_azurlane | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of kagero/陽炎/阳炎 (Azur Lane)
This is the dataset of kagero/陽炎/阳炎 (Azur Lane), containing 13 images and their tags.
The core tags of this character are `animal_ears, brown_hair, purple_eyes, twintails, bangs, fang, fox_ears, rabbit_ears, short_hair, ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 13 | 9.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kagero_azurlane/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 13 | 7.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kagero_azurlane/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 21 | 12.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kagero_azurlane/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 13 | 9.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kagero_azurlane/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 21 | 14.40 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kagero_azurlane/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kagero_azurlane',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 13 |  |  |  |  |  | looking_at_viewer, 1girl, solo, bare_shoulders, blush, detached_sleeves, open_mouth, wide_sleeves, collarbone, simple_background, :d, full_body, long_sleeves, machinery, turret, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | looking_at_viewer | 1girl | solo | bare_shoulders | blush | detached_sleeves | open_mouth | wide_sleeves | collarbone | simple_background | :d | full_body | long_sleeves | machinery | turret | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------|:--------|:-------|:-----------------|:--------|:-------------------|:-------------|:---------------|:-------------|:--------------------|:-----|:------------|:---------------|:------------|:---------|:-------------------|
| 0 | 13 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
|
mask-distilled-one-sec-cv12/chunk_106 | ---
dataset_info:
features:
- name: logits
sequence: float32
- name: mfcc
sequence:
sequence: float64
splits:
- name: train
num_bytes: 887902224
num_examples: 174372
download_size: 904438425
dataset_size: 887902224
---
# Dataset Card for "chunk_106"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
saibo/bookcorpus_compact_1024_shard8_of_10_meta | ---
dataset_info:
features:
- name: text
dtype: string
- name: concept_with_offset
dtype: string
- name: cid_arrangement
sequence: int32
- name: schema_lengths
sequence: int64
- name: topic_entity_mask
sequence: int64
- name: text_lengths
sequence: int64
splits:
- name: train
num_bytes: 7774340762
num_examples: 61605
download_size: 1711444340
dataset_size: 7774340762
---
# Dataset Card for "bookcorpus_compact_1024_shard8_of_10_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hishab/MegaBNSpeech_Test_Data | ---
language:
- bn
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: duration
dtype: float64
- name: category
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 219091915.875
num_examples: 1753
download_size: 214321460
dataset_size: 219091915.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# MegaBNSpeech Test Data
To evaluate the performance of the models, we used four test sets. Two of these were developed as part of the MegaBNSpeech corpus, while the remaining two (Fleurs and Common Voice) are commonly used test sets that are widely recognized by the speech community.
## Use dataset library:
```python
from datasets import load_dataset
dataset = load_dataset("hishab/MegaBNSpeech_Test_Data")
```
## Reported word error rate (WER) / character error rate (CER) on the test sets, using three ASR systems
| Category | Duration (hr) | Hishab BN Fastconformer | Google MMS | OOD-speech |
|-------------------- | -------------- | ------------ | ---------- | ----------- |
| MegaBNSpeech-YT | 8.1 | 6.4/3.39 | 28.3/18.88 | 51.1/23.49 |
| MegaBNSpeech-Tel | 1.9 | ∗40.7/24.38 | ∗59/41.26 | ∗76.8/39.36 |
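The WER values above are the standard word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal, self-contained sketch of the metric (for illustration only; not the exact scoring script used for these numbers):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c", "a x c")` is 1/3 (one substitution out of three reference words). CER is computed the same way over characters instead of words.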
## Reported word error rate (WER) / character error rate (CER) across the content categories of the test data
| Category | Duration (hr) | Hishab BN FastConformer | Google MMS | OOD-speech |
|-------------------- | -------------- | ------------ | ---------- | ----------- |
| News | 1.21 | 2.5/1.21 | 18.9/10.46 | 52.2/21.65 |
| Talkshow | 1.39 | 6/3.29 | 28/18.71 | 48.8/21.5 |
| Courses | 3.81 | 6.8/3.79 | 30.8/21.64 | 50.2/23.52 |
| Drama | 0.03 | 10.3/7.47 | 37.3/27.43 | 64.3/32.74 |
| Science | 0.26 | 5/1.92 | 20.6/11.4 | 45.3/19.93 |
| Vlog | 0.18 | 11.3/6.69 | 33/22.9 | 57.9/27.18 |
| Recipie | 0.58 | 7.5/3.29 | 26.4/16.6 | 53.3/26.89 |
| Waz | 0.49 | 9.6/5.45 | 33.3/23.1 | 57.3/27.46 |
| Movie | 0.1 | 8/4.64 | 35.2/23.88 | 64.4/34.96 |
|
confit/crema-d | ---
task_categories:
- audio-classification
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: emotion
dtype: string
- name: label
dtype:
class_label:
names:
'0': anger
'1': disgust
'2': fear
'3': happy
'4': neutral
'5': sad
splits:
- name: train
num_bytes: 425762803.75
num_examples: 5209
- name: validation
num_bytes: 91023972.432
num_examples: 1116
- name: test
num_bytes: 91269786.5
num_examples: 1117
download_size: 606141777
dataset_size: 608056562.6819999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
tags:
- audio
- paralinguistics
- multiclass
- emotion
---
|
FreedomIntelligence/Huatuo26M-Lite | ---
license: apache-2.0
task_categories:
- text-classification
- question-answering
- conversational
- text-generation
language:
- zh
tags:
- medical
pretty_name: Huatuo26M_v2
size_categories:
- 100K<n<1M
---
# Huatuo26M-Lite 📚
## Table of Contents 🗂
- [Dataset Description](#dataset-description) 📝
- [Dataset Information](#dataset-information) ℹ️
- [Data Distribution](#data-distribution) 📊
- [Usage](#usage) 🔧
- [Citation](#citation) 📖
## Dataset Description 📝
Huatuo26M-Lite is a refined and optimized dataset based on the Huatuo26M dataset, which has undergone multiple purification processes and rewrites. It has more data dimensions and higher data quality. We welcome you to try using it.
## Dataset Information ℹ️
- **Dataset Name:** Huatuo26M-Lite
- **Version:** _[0.0.1]_
- **Size:** _[178k]_
- **Language:** _[Chinese]_
### Abstract 📄
We collected 26 million original QA pairs in the medical field, but because the data was obtained from Common Crawl it was not easy to use directly and carried some risks. We therefore took the following steps on the original 26 million entries: deduplication, cleaning, extraction of high-frequency questions, scoring of the high-frequency questions with ChatGPT, and filtering to keep only the high-scoring ones. We then used ChatGPT to rewrite the answers to the high-scoring questions, resulting in a fully refined dataset. Please refer to our paper for the specific processing methods.
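The deduplication and high-frequency extraction steps above can be sketched roughly as follows; the data layout, the `min_count` threshold, and the function name are illustrative assumptions, not the actual pipeline (which additionally involves ChatGPT scoring and rewriting):

```python
from collections import Counter

def dedup_and_extract_frequent(qa_pairs, min_count=2):
    """Deduplicate (question, answer) pairs, then keep only questions
    that occurred frequently in the raw crawl -- a stand-in for the
    high-frequency question extraction described above."""
    freq = Counter(q for q, _ in qa_pairs)  # how often each question appears
    seen, frequent = set(), []
    for question, answer in qa_pairs:
        if question in seen:
            continue  # drop exact duplicates, keeping the first answer
        seen.add(question)
        if freq[question] >= min_count:
            frequent.append((question, answer))
    return frequent

raw = [("q1", "a1"), ("q1", "a1b"), ("q2", "a2"), ("q1", "a1c")]
print(dedup_and_extract_frequent(raw))  # → [('q1', 'a1')]
```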
### Data Collection 🕵️♂️
Our question data was collected from the internet, and we extracted the high-frequency portion. The answers were rewritten by ChatGPT using the original answers as a reference; manual evaluation judged their quality to be better than the originals. Therefore, please feel free to use our dataset with confidence.
### Preprocessing/Cleaning 🧹
The dataset has been deduplicated and cleaned to ensure high-quality data. It was then refined using OpenAI's ChatGPT, which helped enhance its overall quality.
## Data Distribution 📊
This section provides a visual overview of the distribution of data in the Huatuo26M-Lite dataset.
**Data Categories Bar Chart:** 
This chart represents the distribution of data categories in the dataset.
**Top 20 Associated Diseases Table:**
| topn | disease | nums | ratio |
| ---- | ---------- | ---- | ------- |
| 1 | 白癜风 | 3308 | 1.8615% |
| 2 | 人流 | 2686 | 1.5115% |
| 3 | 感冒 | 2371 | 1.3342% |
| 4 | 癫痫 | 2217 | 1.2476% |
| 5 | 痔疮 | 2134 | 1.2009% |
| 6 | 疼痛 | 1842 | 1.0366% |
| 7 | 咳嗽 | 1799 | 1.0124% |
| 8 | 前列腺炎 | 1564 | 0.8801% |
| 9 | 尖锐湿疣 | 1516 | 0.8531% |
| 10 | 肺癌 | 1408 | 0.7923% |
| 11 | 出血 | 1400 | 0.7878% |
| 12 | 鼻炎 | 1370 | 0.7709% |
| 13 | 肝癌 | 1354 | 0.7619% |
| 14 | 糖尿病 | 1348 | 0.7586% |
| 15 | 过敏性鼻炎 | 1295 | 0.7287% |
| 16 | 发烧 | 1265 | 0.7119% |
| 17 | 乙肝 | 1232 | 0.6933% |
| 18 | 便秘 | 1214 | 0.6832% |
| 19 | 甲亢 | 1178 | 0.6629% |
| 20 | 脱发 | 1173 | 0.6601% |
This table shows the top 20 diseases associated with the data entries in the dataset, along with their respective data entry counts and proportions.
## Usage 🔧
```python
from datasets import load_dataset
dataset = load_dataset("FreedomIntelligence/Huatuo26M-Lite")
```
## Citation 📖
```
@misc{li2023huatuo26m,
title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
year={2023},
eprint={2305.01526},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
Please note that this dataset is distributed "AS IS" without any warranty, express or implied, from the provider. Users should cite the dataset appropriately and respect any licensing or usage restrictions. |
Dampish/GPT-NEO-PRE-S | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 2891461681
num_examples: 631070
download_size: 667909351
dataset_size: 2891461681
---
|
wizrb47/test-json | ---
license: gpl-3.0
---
|
ronanki/product_material_embeddings | ---
license: apache-2.0
---
|
Joch2010/HP_characters | ---
license: gpl-3.0
task_categories:
- question-answering
language:
- de
- en
- fr
- it
- la
tags:
- legal
pretty_name: HP characters
size_categories:
- 100K<n<1M
--- |
ryanhe/VIP | ---
license: apache-2.0
task_categories:
- video-classification
- image-to-text
language:
- en
pretty_name: VIP
---
# Dataset Card for Video Infilling and Prediction (VIP)
Video Infilling and Prediction (VIP) is a benchmark dataset for assessing the sequential commonsense reasoning capabilities of vision-language models by generating explanations of videos.
[See our EMNLP 2023 paper introducing this work](https://aclanthology.org/2023.emnlp-main.15/)
## Dataset Details
- **Curated by:** Vaishnavi Himakunthala, Andy Ouyang, Daniel Rose, Ryan He, Alex Mei, Yujie Lu, Chinmay Sonar, Michael Saxon, William Wang (UC Santa Barbara)
- **Funded by:** Amazon AWS AI/ML Research Award, AWS Cloud Credit for Research, NSF REU #2048122
- **Language(s) (NLP):** English
### Dataset Description
- VIP is an inference-time dataset that contains over 1.5k video keyframes and two forms of textual descriptions for each keyframe: an unstructured dense caption and a structured description clearly defining the Focus, Action, Mood, Objects, and Setting (FAMOuS) of each keyframe.
### Dataset Source and Creation
- We use the Youtube-8M dataset to collect videos and follow a pipelined approach to extract keyframes and descriptions for the VIP dataset
- Each description is verified by human annotation
## Uses/Tasks
- We define two new tasks: Video Infilling and Video Prediction.
- Video Infilling: Given 1, 2, or 3 surrounding keyframes, predict the keyframes in between.
- Video Prediction: Given 1, 2, or 3 previous keyframes, predict the keyframes that come after.
- Both of these tasks can be accomplished by using only the keyframe image, only the keyframe descriptions, or using both the descriptions and the images, allowing benchmarking on various VL models.
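As a rough illustration, building infilling and prediction instances from an ordered list of keyframes might look like the sketch below; the windowing scheme and function names are assumptions for illustration, not the dataset's exact construction:

```python
def make_infilling_examples(keyframes, context=1):
    """Given `context` keyframes on each side, the middle
    keyframe is the target the model must predict."""
    examples = []
    for i in range(context, len(keyframes) - context):
        before = keyframes[i - context:i]
        after = keyframes[i + 1:i + 1 + context]
        examples.append((before + after, keyframes[i]))
    return examples

def make_prediction_examples(keyframes, context=1):
    """Given `context` previous keyframes, predict the next one."""
    return [(keyframes[i - context:i], keyframes[i])
            for i in range(context, len(keyframes))]

frames = ["f1", "f2", "f3", "f4"]
print(make_infilling_examples(frames))   # → [(['f1', 'f3'], 'f2'), (['f2', 'f4'], 'f3')]
print(make_prediction_examples(frames))  # → [(['f1'], 'f2'), (['f2'], 'f3'), (['f3'], 'f4')]
```

Either the keyframe images, their FAMOuS descriptions, or both can play the role of the `keyframes` elements here.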
For more information on the tasks or the dataset collection process, please refer to our paper:
https://arxiv.org/pdf/2305.13903.pdf
If you find this dataset helpful for your work, please cite using this citation:
```
@inproceedings{
himakunthala2023lets,
title={Let's Think Frame by Frame with {VIP}: A Video Infilling and Prediction Dataset for Evaluating Video Chain-of-Thought},
author={Vaishnavi Himakunthala and Andy Ouyang and Daniel Philip Rose and Ryan He and Alex Mei and Yujie Lu and Chinmay Sonar and Michael Saxon and William Yang Wang},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
year={2023},
url={https://openreview.net/forum?id=y6Ej5BZkrR}
}
```
|
warleagle/pco_audio_data_v3 | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 447042388.0
num_examples: 19
download_size: 447020757
dataset_size: 447042388.0
---
# Dataset Card for "pco_audio_data_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SaylorTwift/details_mistralai__Mistral-7B-Instruct-v0.2_private | ---
pretty_name: Evaluation run of mistralai/Mistral-7B-Instruct-v0.2
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).\n\
\nThe dataset is composed of 1 configuration, each one corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run.The \"train\" split is always pointing to the latest results.\n\
\nAn additional configuration \"results\" store all the aggregated results of the\
\ run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\"SaylorTwift/details_mistralai__Mistral-7B-Instruct-v0.2_private\"\
,\n\t\"extended_mt_bench_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\n\
These are the [latest results from run 2024-04-02T10:38:22.166273](https://huggingface.co/datasets/SaylorTwift/details_mistralai__Mistral-7B-Instruct-v0.2_private/blob/main/results_2024-04-02T10-38-22.166273.json)(note\
\ that there might be results for other tasks in the repo if successive evals didn't\
\ cover the same tasks. You find each in the results and the \"latest\" split for\
\ each eval):\n\n```python\n{\n \"all\": {\n \"single_turn\": 7.7125,\n\
\ \"single_turn_stderr\": 0.25043869736772023,\n \"multi_turn\": 7.4625,\n\
\ \"multi_turn_stderr\": 0.3029098071469836\n },\n \"extended|mt_bench|0\"\
: {\n \"single_turn\": 7.7125,\n \"single_turn_stderr\": 0.25043869736772023,\n\
\ \"multi_turn\": 7.4625,\n \"multi_turn_stderr\": 0.3029098071469836\n\
\ }\n}\n```"
repo_url: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
configs:
- config_name: extended_mt_bench_0
data_files:
- split: 2024_04_02T10_38_22.166273
path:
- '**/details_extended|mt_bench|0_2024-04-02T10-38-22.166273.parquet'
- split: latest
path:
- '**/details_extended|mt_bench|0_2024-04-02T10-38-22.166273.parquet'
- config_name: results
data_files:
- split: 2024_04_02T10_38_22.166273
path:
- results_2024-04-02T10-38-22.166273.parquet
- split: latest
path:
- results_2024-04-02T10-38-22.166273.parquet
---
# Dataset Card for Evaluation run of mistralai/Mistral-7B-Instruct-v0.2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
The dataset is composed of 1 configuration, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("SaylorTwift/details_mistralai__Mistral-7B-Instruct-v0.2_private",
"extended_mt_bench_0",
split="train")
```
## Latest results
These are the [latest results from run 2024-04-02T10:38:22.166273](https://huggingface.co/datasets/SaylorTwift/details_mistralai__Mistral-7B-Instruct-v0.2_private/blob/main/results_2024-04-02T10-38-22.166273.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the "results" configuration and in the "latest" split for each eval):
```python
{
"all": {
"single_turn": 7.7125,
"single_turn_stderr": 0.25043869736772023,
"multi_turn": 7.4625,
"multi_turn_stderr": 0.3029098071469836
},
"extended|mt_bench|0": {
"single_turn": 7.7125,
"single_turn_stderr": 0.25043869736772023,
"multi_turn": 7.4625,
"multi_turn_stderr": 0.3029098071469836
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
bigbio/n2c2_2011 |
---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_shortname: DUA
pretty_name: n2c2 2011 Coreference
homepage: https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
bigbio_pubmed: False
bigbio_public: False
bigbio_tasks:
- COREFERENCE_RESOLUTION
---
# Dataset Card for n2c2 2011 Coreference
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** COREF
The i2b2/VA corpus contained de-identified discharge summaries from Beth Israel
Deaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical
Center (UPMC). In addition, UPMC contributed de-identified progress notes to the
i2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.
The i2b2/VA corpus contained five concept categories: problem, person, pronoun,
test, and treatment. Each record in the i2b2/VA corpus was annotated by two
independent annotators for coreference pairs. Then the pairs were post-processed
in order to create coreference chains. These chains were presented to an adjudicator,
who resolved the disagreements between the original annotations, and added or deleted
annotations as necessary. The outputs of the adjudicators were then re-adjudicated, with
particular attention being paid to duplicates and enforcing consistency in the annotations.
## Citation Information
```
@article{uzuner2012evaluating,
author = {
Uzuner, Ozlem and
Bodnari, Andreea and
Shen, Shuying and
Forbush, Tyler and
Pestian, John and
South, Brett R
},
title = "{Evaluating the state of the art in coreference resolution for electronic medical records}",
journal = {Journal of the American Medical Informatics Association},
volume = {19},
number = {5},
pages = {786-791},
year = {2012},
month = {02},
issn = {1067-5027},
doi = {10.1136/amiajnl-2011-000784},
url = {https://doi.org/10.1136/amiajnl-2011-000784},
eprint = {https://academic.oup.com/jamia/article-pdf/19/5/786/17374287/19-5-786.pdf},
}
```
|
zelkame/ru-stackoverflow-py | ---
license: mit
---
Provided as-is for research purposes. Use at your own risk.
This dataset contains questions tagged 'python' from the Russian-language Stack Overflow site, together with the corresponding answers marked as the best.
The dataset was collected and processed for use in natural language processing models. All of the questions concern programming in Python.
The answers were selected and vetted by the Stack Overflow community as the most useful and informative for each question.
The dataset consists of two fields. The 'Вопрос' (Question) field contains the original question asked on Stack Overflow. The 'Ответ' (Answer) field contains the answer marked as best for that question at the time of collection.
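Since both fields originate from Stack Overflow web pages, the kind of markup cleaning such text typically needs can be sketched as follows (a simplified illustration, not the actual cleaning code used for this dataset):

```python
import re

def strip_markup(text):
    """Remove HTML tags and collapse leftover whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)   # replace any <tag> with a space
    return re.sub(r"\s+", " ", text).strip()

print(strip_markup("<p>Use <code>len(x)</code> here.</p>"))  # → Use len(x) here.
```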
The data was cleaned of extraneous information, markup tags, and formatting. |
joey234/mmlu-international_law-neg-prepend | ---
dataset_info:
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: negate_openai_prompt
struct:
- name: content
dtype: string
- name: role
dtype: string
- name: neg_question
dtype: string
- name: fewshot_context
dtype: string
- name: ori_prompt
dtype: string
- name: neg_prompt
dtype: string
- name: fewshot_context_neg
dtype: string
- name: fewshot_context_ori
dtype: string
splits:
- name: dev
num_bytes: 10115
num_examples: 5
- name: test
num_bytes: 1680951
num_examples: 121
download_size: 145868
dataset_size: 1691066
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: test
path: data/test-*
---
# Dataset Card for "mmlu-international_law-neg-prepend"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zolak/twitter_dataset_78_1713137836 | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
- name: scraped_at
dtype: string
- name: image_urls
dtype: string
splits:
- name: train
num_bytes: 327478
num_examples: 779
download_size: 159638
dataset_size: 327478
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/yellow_pokemon | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of yellow (Pokémon)
This is the dataset of yellow (Pokémon), containing 283 images and their tags.
The core tags of this character are `blonde_hair, long_hair, ponytail, bangs, hat`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 283 | 183.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yellow_pokemon/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 283 | 137.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yellow_pokemon/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 450 | 236.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yellow_pokemon/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 283 | 173.12 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yellow_pokemon/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 450 | 297.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yellow_pokemon/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yellow_pokemon',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 6 |  |  |  |  |  | 1girl, :d, long_sleeves, open_mouth, pants, shirt, tongue, tunic, brown_belt, looking_at_viewer, poke_ball, pokemon_(creature), short_hair, blush, boots, character_name, green_eyes, holding_fishing_rod |
| 1 | 5 |  |  |  |  |  | 1girl, blush, long_sleeves, looking_at_viewer, shirt, simple_background, solo, :d, open_mouth, upper_body, green_eyes, white_background, belt, from_side, grey_background, yellow_eyes |
| 2 | 6 |  |  |  |  |  | 1girl, solo, blush, brown_eyes, looking_at_viewer, simple_background, smile, upper_body, white_background, closed_mouth |
| 3 | 6 |  |  |  |  |  | 1girl, flower, pokemon_(creature), smile, one_eye_closed, open_mouth, yellow_eyes |
| 4 | 8 |  |  |  |  |  | 1girl, poke_ball_(basic), smile, solo, androgynous, short_hair, straw_hat, belt, boots, holding_fishing_rod, reverse_trap, simple_background, white_background, yellow_eyes, holding_poke_ball |
| 5 | 6 |  |  |  |  |  | 1boy, 1girl, hetero, solo_focus, nipples, nude, sex, yellow_eyes, open_mouth, penis, blush, cum_in_pussy, medium_breasts, vaginal |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | :d | long_sleeves | open_mouth | pants | shirt | tongue | tunic | brown_belt | looking_at_viewer | poke_ball | pokemon_(creature) | short_hair | blush | boots | character_name | green_eyes | holding_fishing_rod | simple_background | solo | upper_body | white_background | belt | from_side | grey_background | yellow_eyes | brown_eyes | smile | closed_mouth | flower | one_eye_closed | poke_ball_(basic) | androgynous | straw_hat | reverse_trap | holding_poke_ball | 1boy | hetero | solo_focus | nipples | nude | sex | penis | cum_in_pussy | medium_breasts | vaginal |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----|:---------------|:-------------|:--------|:--------|:---------|:--------|:-------------|:--------------------|:------------|:---------------------|:-------------|:--------|:--------|:-----------------|:-------------|:----------------------|:--------------------|:-------|:-------------|:-------------------|:-------|:------------|:------------------|:--------------|:-------------|:--------|:---------------|:---------|:-----------------|:--------------------|:--------------|:------------|:---------------|:--------------------|:-------|:---------|:-------------|:----------|:-------|:------|:--------|:---------------|:-----------------|:----------|
| 0 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | | X | | | | X | | | | X | | | X | | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | |
| 2 | 6 |  |  |  |  |  | X | | | | | | | | | X | | | | X | | | | | X | X | X | X | | | | | X | X | X | | | | | | | | | | | | | | | | | |
| 3 | 6 |  |  |  |  |  | X | | | X | | | | | | | | X | | | | | | | | | | | | | | X | | X | | X | X | | | | | | | | | | | | | | | |
| 4 | 8 |  |  |  |  |  | X | | | | | | | | | | | | X | | X | | | X | X | X | | X | X | | | X | | X | | | | X | X | X | X | X | | | | | | | | | | |
| 5 | 6 |  |  |  |  |  | X | | | X | | | | | | | | | | X | | | | | | | | | | | | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
|
davidberenstein1957/spacy_sm_wnut17 | ---
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-CARDINAL
'2': I-CARDINAL
'3': B-DATE
'4': I-DATE
'5': B-EVENT
'6': I-EVENT
'7': B-FAC
'8': I-FAC
'9': B-GPE
'10': I-GPE
'11': B-LAW
'12': I-LAW
'13': B-LOC
'14': I-LOC
'15': B-MONEY
'16': I-MONEY
'17': B-NORP
'18': I-NORP
'19': B-ORDINAL
'20': I-ORDINAL
'21': B-ORG
'22': I-ORG
'23': B-PERCENT
'24': I-PERCENT
'25': B-PERSON
'26': I-PERSON
'27': B-QUANTITY
'28': I-QUANTITY
'29': B-TIME
'30': I-TIME
'31': B-WORK_OF_ART
'32': I-WORK_OF_ART
splits:
- name: train
num_bytes: 40051.2
num_examples: 120
- name: test
num_bytes: 10012.8
num_examples: 30
download_size: 19486
dataset_size: 50064.0
---
# Dataset Card for "spacy_sm_wnut17"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
awettig/github-sample-65536tokens-llama | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 329257888
num_examples: 1256
download_size: 78876374
dataset_size: 329257888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "github-sample-65536tokens-llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ShenaoZ/0.0_idpo_same_3iters_debug_dataset | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: is_better
dtype: bool
splits:
- name: train_prefs_1
num_bytes: 169548771
num_examples: 20378
- name: test_prefs_1
num_bytes: 16517400
num_examples: 2000
download_size: 101627816
dataset_size: 186066171
configs:
- config_name: default
data_files:
- split: train_prefs_1
path: data/train_prefs_1-*
- split: test_prefs_1
path: data/test_prefs_1-*
---
# Dataset Card for "0.0_idpo_same_3iters_debug_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
truongghieu/BK_Regulation | ---
task_categories:
- text-generation
language:
- en
size_categories:
- n<1K
license: apache-2.0
--- |
carlosfuy/sgdhfdde | ---
license: afl-3.0
---
|
weaviate/WeaviateBlogRAG-0-0-0 | ---
license: mit
---
|
ewof/koishi-instruct-metharme | ---
license: apache-2.0
language:
- en
pretty_name: koishi instruct metharme
viewer: false
size_categories:
- 100K<n<1M
---
koishi instruct metharme dataset, currently 414862 lines
- oasst is from ewof/oasst-convo-unfiltered-deduped
- sharegpt (vicuna) is from ewof/sharegpt-instruct-unfiltered-deduped
- dolly is from ewof/dolly-instruct-unfiltered-deduped
- hh-rlhf is from ewof/hh-rlhf-instruct-unfiltered-deduped
- self_instruct is from ewof/self-instruct-unfiltered-deduped
- hf_instruction is from ewof/hf-instruction-unfiltered
- gpteacher is from ewof/gpteacher-unfiltered
- asss is from ewof/asss-unfiltered-deduped
- code_alpaca is from ewof/code-alpaca-instruct-unfiltered
- synthetic_instruct is from ewof/synthetic-instruct-unfiltered-deduped
- flan is from ewof/flan_unfiltered
these each have their own READMEs that explain how i parsed them
- evol instruct code is from nickrosh/Evol-Instruct-Code-80k-v1
- wizard is from ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- airoboros is from jondurbin/airoboros-2.2.1 (i filtered out orca entries since orca has flan prompts and koishi already has flan)
- llamini is from MBZUAI/LaMini-instruction. i ran llamini_to_metharme.py, then ran llamini_merge_dedupe.py with koishi_data_metharme.jsonl (generated with merge.py and everything in the subsets folder except llamini_data_metharme.jsonl) as the k file and llamini_data_metharme.jsonl as the lm file
|
WillHeld/blimp | ---
dataset_info:
features:
- name: sentence_good
dtype: string
- name: sentence_bad
dtype: string
- name: two_prefix_prefix_good
dtype: string
- name: two_prefix_prefix_bad
dtype: string
- name: two_prefix_word
dtype: string
- name: field
dtype: string
- name: linguistics_term
dtype: string
- name: UID
dtype: string
- name: simple_LM_method
dtype: bool
- name: one_prefix_method
dtype: bool
- name: two_prefix_method
dtype: bool
- name: lexically_identical
dtype: bool
- name: pairID
dtype: string
- name: feature_name
dtype: string
splits:
- name: train
num_bytes: 15550503
num_examples: 67000
download_size: 4374212
dataset_size: 15550503
---
# Dataset Card for "blimp"
HuggingFace Hub Upload of BLiMP: The Benchmark of Linguistic Minimal Pairs from https://github.com/alexwarstadt/blimp
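The paper's evaluation protocol — check whether a model assigns higher probability to the acceptable sentence in each minimal pair — can be sketched as follows. The unigram scorer here is a toy stand-in; any sentence log-probability function (e.g. from an n-gram or Transformer LM) can be plugged in:

```python
import math
from collections import Counter

def accuracy(pairs, log_prob):
    """Fraction of (good, bad) minimal pairs where the model
    scores the grammatical sentence strictly higher."""
    correct = sum(log_prob(good) > log_prob(bad) for good, bad in pairs)
    return correct / len(pairs)

# Toy stand-in scorer: a crudely smoothed unigram model on a tiny corpus.
corpus = "the cat sleeps the cat sleeps the dogs sleep".split()
counts = Counter(corpus)
total = sum(counts.values())

def unigram_log_prob(sentence):
    return sum(math.log((counts[w] + 1) / (total + 1)) for w in sentence.split())

pairs = [("the cat sleeps", "the cat sleep")]
print(accuracy(pairs, unigram_log_prob))  # → 1.0
```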
If you use this dataset in your work, please cite the original authors and paper.
```
@article{warstadt2020blimp,
author = {Warstadt, Alex and Parrish, Alicia and Liu, Haokun and Mohananey, Anhad and Peng, Wei and Wang, Sheng-Fu and Bowman, Samuel R.},
title = {BLiMP: The Benchmark of Linguistic Minimal Pairs for English},
journal = {Transactions of the Association for Computational Linguistics},
volume = {8},
number = {},
pages = {377-392},
year = {2020},
doi = {10.1162/tacl\_a\_00321},
URL = {https://doi.org/10.1162/tacl_a_00321},
eprint = {https://doi.org/10.1162/tacl_a_00321},
abstract = { We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP),1 a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs—that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate specific phenomenon in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and human aggregate agreement with the labels is 96.4\%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models identify morphological contrasts related to agreement reliably, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands. }
}
``` |
bh8648/split_dataset_1 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: page_num
dtype: int64
splits:
- name: train
num_bytes: 659763
num_examples: 212
download_size: 336962
dataset_size: 659763
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "split_dataset_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/0c620523 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 182
num_examples: 10
download_size: 1344
dataset_size: 182
---
# Dataset Card for "0c620523"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
PhaedrusFlow/SPIN_iter0 | ---
license: cc-by-nc-sa-4.0
---
|
eli5_category | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: ELI5-Category
size_categories:
- 100K<n<1M
source_datasets:
- extended|eli5
task_categories:
- text2text-generation
task_ids:
- abstractive-qa
- open-domain-abstractive-qa
dataset_info:
features:
- name: q_id
dtype: string
- name: title
dtype: string
- name: selftext
dtype: string
- name: category
dtype: string
- name: subreddit
dtype: string
- name: answers
struct:
- name: a_id
sequence: string
- name: text
sequence: string
- name: score
sequence: int32
- name: text_urls
sequence:
sequence: string
- name: title_urls
sequence: string
- name: selftext_urls
sequence: string
splits:
- name: train
num_bytes: 166409797
num_examples: 91772
- name: validation1
num_bytes: 13150585
num_examples: 5446
- name: validation2
num_bytes: 4737744
num_examples: 2375
- name: test
num_bytes: 10419098
num_examples: 5411
download_size: 72921829
dataset_size: 194717224
---
# Dataset Card for ELI5-Category
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ELI5-Category homepage](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/)
- **Repository:** [ELI5-Category repository](https://github.com/rexarski/ANLY580-final-project)
- **Point of Contact:** [Jingsong Gao](mailto:jg2109@georgetown.edu)
### Dataset Summary
The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. It's an English-language dataset of questions and answers gathered from the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit, where users ask factual questions requiring paragraph-length or longer answers. After 2017, a tagging system was introduced to this subreddit so that questions can be categorized into different topics according to their tags. Since the training and validation sets are built from questions on different topics, the dataset is expected to alleviate the train/validation overlap issue in the original [ELI5 dataset](https://huggingface.co/datasets/eli5).
### Supported Tasks and Leaderboards
- `abstractive-qa`, `open-domain-abstractive-qa`: The dataset can be used to train a model for Open Domain Long Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer.
### Languages
The text in the dataset is in English, as spoken by Reddit users on the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
The structure of this dataset is very similar to the original [ELI5 dataset](https://huggingface.co/datasets/eli5). A typical data point comprises a question, with a `title` containing the main question and a `selftext` which sometimes elaborates on it, and a list of answers from the forum sorted by scores they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text.
In addition to the fields of the original ELI5 dataset, each data point also has a `category` field. There are 11 common values of `category` in this dataset: `Biology`, `Chemistry`, `Culture`, `Earth Science`, `Economics`, `Engineering`, `Mathematics`, `Other`, `Physics`, `Psychology`, and `Technology`, plus a special value, `Repost`, which indicates that the same question has been asked before.
An example from the ELI5-Category set looks as follows:
```
{'q_id': '5lcm18',
'title': 'Why do old games running on new hardware still have technical issues ?',
'selftext': 'I am playing some mega man games on my Xbox One and experience slowdown when there are a lot of enemies on screen . but the Xbox One is significantly more powerful than the NES , so why is there still slowdown on this hardware ?',
'category': 'Engineering',
'subreddit': 'explainlikeimfive',
'answers': {'a_id': ['dbuo48e', 'dbusfve'],
'text': ["The XBox is emulating NES hardware and running the emulation at a set speed . If it ran it at as fast as possible , then it would be several times faster than the original NES game and would be unplayable . I ca n't speak for Mega Man exactly , but older games tended to run on a cycle locked to the screen refresh which was a fixed 60Hz or 50Hz . There was only one piece of hardware they ran on , so there was no need to adjust for different hardware speeds .",
"In that case , it 's probably on purpose - they want to emulate the experience as closely as possible , even including the slowdown and sprite flickering . Some emulators let you turn it off , but it 's usually turned on by default . In other cases , like if you 're trying to emulate PS2 games on your PC , the game might just run really slow in general . Even though your PC is way more powerful than a PS2 , it has to \" translate \" from PS2 language to PC language in realtime , which is much more difficult than running PS2 code on the PS2 itself ."],
'score': [13, 3],
'text_urls': [[],[]]},
'title_urls': {'url': []},
'selftext_urls': {'url': []}}
```
### Data Fields
- `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps
- `subreddit`: always `explainlikeimfive`, indicating which subreddit the question came from
- `category`: the tag of the question; the possible values are listed above
- `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens
- `title_urls`: list of the extracted URLs, the `n`th element of the list was replaced by `URL_n`
- `selftext`: either an empty string or an elaboration of the question
- `selftext_urls`: similar to `title_urls` but for `selftext`
- `answers`: a list of answers, each answer has:
- `a_id`: a string answer identifier for each answer, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comments dumps.
- `text`: the answer text with the URLs normalized
  - `score`: the number of upvotes minus the number of downvotes the answer had received when the dumps were created
- `text_urls`: lists of the extracted URLs for every answer
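Because `answers` stores parallel lists (`a_id`, `text`, `score`, `text_urls` all share one index), selecting the top answer for a question is a one-index lookup. The following is a short sketch using the example data point shown above; `best_answer` is an illustrative helper, not part of the dataset's API.

```python
# Sketch: pick the highest-scoring answer from an ELI5-Category data point.
# The `answers` field holds parallel lists, so one index selects a_id,
# text, and score together.
def best_answer(example):
    answers = example["answers"]
    # Index of the answer with the highest upvote score.
    i = max(range(len(answers["score"])), key=answers["score"].__getitem__)
    return answers["a_id"][i], answers["text"][i]

example = {
    "answers": {
        "a_id": ["dbuo48e", "dbusfve"],
        "text": ["first answer ...", "second answer ..."],
        "score": [13, 3],
    }
}
a_id, text = best_answer(example)  # score 13 beats 3, so "dbuo48e" wins
```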
### Data Splits
In order to avoid duplicate questions across sets, three non-overlapping subsets of `category` are used for the training, validation, and test sets. Additionally, a special validation set contains all the questions in the `Repost` category. A valid retriever-generator model should have consistent performance on both validation sets.
The final split sizes are as follows:
| | Train | Valid | Valid2 |Test |
| ----- | ------ | ----- | ---- | ---- |
| `Biology` | 32769 | | | |
| `Chemistry` | 6633 | | | |
| `Culture` | | 5446 | | |
| `Earth Science` | 677 | | | |
| `Economics` | 5901 | | | |
| `Engineering` | | | | 5411 |
| `Mathematics` | 1912 | | | |
| `Other` | 19312 | | | |
| `Physics` | 10196 | | | |
| `Psychology` | 338 | | | |
| `Technology` | 14034 | | | |
| `Repost` | | | 2375 | |
| **Total** | 91772 | 5446 | 2375 | 5411 |
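The table above can be sanity-checked in a few lines: the per-category counts of the training split should sum to the reported split size, and each of the other splits is filled by a single category. (The dictionaries below simply transcribe the table.)

```python
# Sanity check for the split table: per-category training counts should
# sum to the reported train split size (91,772).
train_counts = {
    "Biology": 32769, "Chemistry": 6633, "Earth Science": 677,
    "Economics": 5901, "Mathematics": 1912, "Other": 19312,
    "Physics": 10196, "Psychology": 338, "Technology": 14034,
}
split_sizes = {
    "train": 91772, "validation1": 5446, "validation2": 2375, "test": 5411,
}

assert sum(train_counts.values()) == split_sizes["train"]
# The remaining categories each fill one split on their own:
# Culture (5446) -> validation1, Repost (2375) -> validation2,
# Engineering (5411) -> test.
```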
## Dataset Creation
### Curation Rationale
ELI5-Category was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine the information in a coherent manner. The dataset was built by gathering questions that were asked by community members of three subreddits, including [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/), along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well-established facts, and the answers provided need to be understandable to a layperson without any particular knowledge domain.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/).
In order to further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset questions and answers span a period from January 2017 to June 2021.
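The selection rule just described can be sketched as a filter predicate. Note that this is an illustration of the curation step on the raw Pushshift dumps, not code from the dataset's build pipeline: the question-level `score` used here comes from the raw submission dumps and is not retained among the released fields.

```python
# Sketch of the curation filter described above: keep a question only if
# its own score is at least 2 AND at least one of its answers has a score
# of at least 2. `example["score"]` is the raw submission score from the
# Pushshift dumps (not a field in the released dataset).
def keep_example(example, min_score=2):
    has_good_answer = any(
        s >= min_score for s in example["answers"]["score"]
    )
    return example["score"] >= min_score and has_good_answer
```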
#### Who are the source language producers?
The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit between 2017 and 2021. No further demographic information was available from the data source.
### Annotations
The dataset contains the `category` as an additional annotation for the topics of questions.
#### Annotation process
The dataset is auto-annotated by the tags of posts in the [Reddit forum](https://www.reddit.com/).
#### Who are the annotators?
The annotators are users/administrators of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit between 2017 and 2021. No further demographic information was available from the data source.
### Personal and Sensitive Information
The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some questions and answers are about contemporary public figures or individuals who appeared in the news.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset has a similar social impact to the original ELI5 dataset; see its [Social Impact of Dataset](https://huggingface.co/datasets/eli5#social-impact-of-dataset) section.
### Discussion of Biases
The dataset has similar bias considerations to the original ELI5 dataset; see its [Discussion of Biases](https://huggingface.co/datasets/eli5#discussion-of-biases) section.
### Other Known Limitations
The dataset has similar limitations to the original ELI5 dataset; see its [Other Known Limitations](https://huggingface.co/datasets/eli5#other-known-limitations) section.
## Additional Information
### Dataset Curators
The dataset was initially created by Jingsong Gao, Qinren Zhou, and Rui Qiu as a course project for `ANLY 580`: NLP for Data Analytics at Georgetown University.
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear.
### Citation Information
```
@inproceedings{eli5-category,
author = {Jingsong Gao and
Qingren Zhou and
Rui Qiu},
title = {{ELI5-Category:} A categorized open-domain QA dataset},
year = {2021}
}
```
### Contributions
Thanks to [@jingshenSN2](https://github.com/jingshenSN2), [@QinrenZhou](https://github.com/QinrenZhou), [@rexarski](https://github.com/rexarski) for adding this dataset. |
dkshjn/processed_distilabel-intel-orca-dpo-pairs-v2 | ---
dataset_info:
features:
- name: system
dtype: string
- name: input
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: generations
sequence: string
- name: order
sequence: string
- name: labelling_model
dtype: string
- name: labelling_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: raw_labelling_response
dtype: string
- name: rating
sequence: float64
- name: rationale
dtype: string
- name: status
dtype: string
- name: original_chosen
dtype: string
- name: original_rejected
dtype: string
- name: chosen_score
dtype: float64
- name: in_gsm8k_train
dtype: bool
- name: formatted_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: formatted_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 209153276
num_examples: 12859
download_size: 103496030
dataset_size: 209153276
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "processed_distilabel-intel-orca-dpo-pairs-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916076 | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- squad_v2
eval_info:
task: extractive_question_answering
model: deepset/bert-base-uncased-squad2
metrics: ['bertscore']
dataset_name: squad_v2
dataset_config: squad_v2
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. |
bene-ges/wikipedia_ru | ---
license: cc-by-sa-4.0
---
|
omontalbano/github-issues-2 | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
dtype: 'null'
- name: assignees
sequence: 'null'
- name: milestone
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 235272
num_examples: 100
download_size: 112192
dataset_size: 235272
---
# Dataset Card for "github-issues-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rbeauchamp/diffusion_db_dedupe_from50k_train | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: seed
dtype: uint32
- name: step
dtype: uint16
- name: cfg
dtype: float32
- name: sampler
dtype: string
- name: width
dtype: uint16
- name: height
dtype: uint16
- name: user_name
dtype: string
- name: timestamp
dtype: timestamp[ns, tz=UTC]
- name: image_nsfw
dtype: float32
- name: prompt_nsfw
dtype: float32
- name: __index_level_0__
dtype: int64
- name: image_path
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 22139609531.30241
num_examples: 34537
download_size: 21346107309
dataset_size: 22139609531.30241
---
# Dataset Card for "diffusion_db_dedupe_from50k_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |